diff --git a/-tFQT4oBgHgl3EQf7DaV/content/tmp_files/2301.13441v1.pdf.txt b/-tFQT4oBgHgl3EQf7DaV/content/tmp_files/2301.13441v1.pdf.txt new file mode 100644 index 0000000000000000000000000000000000000000..64fec05e20770cb4584524b11d3e889e1a331c11 --- /dev/null +++ b/-tFQT4oBgHgl3EQf7DaV/content/tmp_files/2301.13441v1.pdf.txt @@ -0,0 +1,1770 @@ +CMLCompiler: A Unified Compiler for Classical Machine +Learning +Xu Wen +Institute of Computing Technology, +Chinese Academy of Sciences +University of Chinese Academy of +Sciences +wenxu@ict.ac.cn +Wanling Gao +Institute of Computing Technology, +Chinese Academy of Sciences +University of Chinese Academy of +Sciences +gaowanling@ict.ac.cn +Anzheng Li +Institute of Computing Technology, +Chinese Academy of Sciences +University of Chinese Academy of +Sciences +lianzheng20g@ict.ac.cn +Lei Wang +Institute of Computing Technology, +Chinese Academy of Sciences +University of Chinese Academy of +Sciences +wanglei_2011@ict.ac.cn +Zihan Jiang +Institute of Computing Technology, +Chinese Academy of Sciences +University of Chinese Academy of +Sciences +jiangzihan@ict.ac.cn +Jianfeng Zhan∗ +Institute of Computing Technology, +Chinese Academy of Sciences +University of Chinese Academy of +Sciences +zhanjianfeng@ict.ac.cn +ABSTRACT +Classical machine learning (CML) occupies nearly half of machine +learning pipelines in production applications. Unfortunately, it fails +to utilize the state-of-the-practice devices fully and performs poorly. +Without a unified framework, the hybrid deployments of deep learn- +ing (DL) and CML also suffer from severe performance and porta- +bility issues. This paper presents the design of a unified compiler, +called CMLCompiler, for CML inference. We propose two unified +abstractions: operator representations and extended computational +graphs. 
The CMLCompiler framework performs the conversion and graph optimization based on the two unified abstractions, then outputs an optimized computational graph to DL compilers or frameworks. We implement CMLCompiler on TVM. The evaluation shows CMLCompiler's portability and superior performance. It achieves up to 4.38× speedup on CPU, 3.31× speedup on GPU, and 5.09× speedup on IoT devices, compared to the state-of-the-art solutions — scikit-learn, intel sklearn, and hummingbird. Our CML and DL mixed pipelines achieve up to 3.04× speedup compared with cross-framework implementations.

CCS CONCEPTS
• Computing methodologies → Machine learning; • Computer systems organization → Real-time systems.

KEYWORDS
Classical Machine Learning, Deep Learning, Compiler

1 INTRODUCTION
Deep learning (DL) and classical machine learning (CML), collectively called machine learning (ML), have played an increasingly critical role in recent years. DL refers to neural network models, such as convolutional neural networks (CNNs) [24], recurrent neural networks (RNNs) [28], and generative adversarial networks (GANs) [16]. Different from DL, CML represents the set of non-neural-network models in ML, e.g., linear models [37], decision trees [26],

∗Corresponding author.

Figure 1: The CMLCompiler design. Our contributions are highlighted in green color. [The stack runs from hardware (CPU, GPU, IoT, ...) through models (linear models, trees, SVMs, ...) to the CMLCompiler framework with its unified abstractions, which targets DL frameworks (PyTorch) and DL compilers (TVM) with their respective runtimes.]

random forests [4], and support vector machines [42]. DL stands out because of its accuracy, while CML is still widely used for its lower time and energy costs. Doris Xin et al. [47] analyze 3,000 production ML pipelines at Google and find that 40% of them use CML models.
Besides, many real-world applications adopt hybrid deployments of CML and DL [2] to guarantee high accuracy and low latency [25, 27, 36, 38], e.g., DL models for feature embedding and CML models for classification or regression.

DL compilers, like TVM [7, 10, 23], provide a structural approach to tackle the portability issue, facilitate wide deployment of DL models on a broad spectrum of devices like GPUs, FPGAs, and IoT devices, and guarantee appreciable performance. DL compilers use computational graphs as high-level abstractions, supporting a large variety of DL models. Meanwhile, DL compilers propose low-level abstractions such as tensor representation to generate executable code. For new hardware, the vendor just needs to provide hardware primitives instead of a sophisticated high-performance library that is prohibitively costly. Based on the tensor representation and computational graph abstractions, many optimizations [8, 22, 49] are proposed to boost performance; e.g., they provide sophisticated support for CPU processors, which differ in architecture, core count, instruction extensions, and cache size.

arXiv:2301.13441v1 [cs.LG] 31 Jan 2023

However, despite its popularity and importance, CML suffers from severe portability and performance issues. State-of-the-practice and state-of-the-art CML frameworks [17, 29, 32] provide ad-hoc solutions, implementing each CML model on every hardware device case by case due to the lack of unified abstractions. These ad-hoc solutions raise considerable difficulties in developing a general-purpose framework and optimization techniques that achieve optimal performance for every model. They either lack support for, or only partially support, various hardware devices such as GPUs, FPGAs, and IoT devices.
In addition, adding support for a model on a new hardware device needs great effort, more than several thousand lines of code [13], let alone hundreds or thousands of models and devices. Moreover, they also face performance issues. Even on CPUs, the most popular CML platform, the performance is unsatisfactory due to the lack of specific optimizations for advanced characteristics like multiple cores and SIMD. The hybrid deployment of CML and DL models faces even more severe problems.

Our intuition is to enable CML to leverage DL's well-defined unified abstractions and highly mature compilers, optimization technologies, and frameworks. Unfortunately, it is not a trivial task. There are significant distinctions in operators and models between CML and DL. DL operators focus on tensors, while CML handles arrays, matrices, scalars, and tables. DL models are all neural network models, while CML models, such as decision trees and SVMs, can hardly be represented as neural networks. Most DL models are expressible as flat sequences of operations without if-statements [35], but if-statements frequently occur in CML models. Existing DL abstractions, such as tensor representation and computational graphs, cannot directly represent CML operators and models. These distinctions mean that CML can hardly leverage the DL ecosystems directly. Several efforts attempt to support CML models on DL frameworks; e.g., TensorFlow [1] provides a CPU-based decision forest library, TF-DF [43]. However, these attempts do not solve the generality and portability issues. They only support a narrow range of models, lacking support for GPUs and IoT devices.

This paper focuses on CML inference as the first step, considering its great significance, occupying nearly half of the total cost [2], and its wide applications in online serving, the Internet of Things (IoT), etc. [18, 46]. We will extend our work to CML training in the near future. As illustrated in Fig.
1, we propose a unified compiler, CMLCompiler, for CML inference, which enables CML to leverage the mature DL ecosystems. At the core of CMLCompiler are two unified abstractions, operator representations and extended computational graphs (ECGs), and a compiler framework. Operator representations convert CML operators into tensor formats, while an ECG organizes these converted operators in an optimization-friendly way. The two unified abstractions define how to convert and translate CML models into DL computational graphs, which can be recognized and executed by DL frameworks and compilers. The CMLCompiler framework consists of four modules: operator converter, model parser, graph optimizer, and graph translator. The framework performs the conversion and graph optimization based on the two unified abstractions, then outputs an optimized DL computational graph to DL compilers or frameworks. CMLCompiler can also optimize mixed pipelines of CML and DL. As TVM provides portability and sophisticated optimizations, we choose to implement CMLCompiler on TVM. Currently, it supports 35 CML models.

This paper makes the following contributions:
• We propose two unified abstractions, operator representations and extended computational graphs, to represent CML operators and models.
• We present the design of CMLCompiler, a unified compiler for CML inference, based on these abstractions. The CMLCompiler framework performs the conversion and graph optimization based on the two unified abstractions, then outputs an optimized DL computational graph to DL compilers or frameworks.
• CMLCompiler enables the hybrid deployment of CML and DL within a unified framework.
• We implement CMLCompiler on top of TVM, achieving up to 4.38× speedup on CPU, 3.31× speedup on GPU, and 5.09× speedup on IoT devices, compared to the state-of-the-art solutions — scikit-learn, intel sklearn, and hummingbird.
Our support for CML and DL mixed pipelines achieves up to 3.04× speedup compared with cross-framework implementations.

The remainder of the paper is organized as follows. Section 2 introduces the motivation. Section 3 introduces the unified abstractions. Section 4 shows the design and implementation. Section 5 presents our evaluation. Section 6 illustrates the related work. Finally, we draw a conclusion in Section 7.

2 MOTIVATION
CML faces severe portability and performance issues. Fig. 2 compares the performance of sklearn, the most widely used CML framework on GitHub [33], against CMLCompiler leveraging DL compilers. We find that sklearn cannot support GPUs and only partially supports IoT devices. Adding support for a new hardware device needs great effort due to the ad-hoc implementations. For example, adding support for random forest on GPU needs 2.7k lines of code [13]. Many models and hardware devices need to be supported, requiring hundreds or thousands of times more effort. Moreover, due to the lack of compilation support for CPU features, sklearn has poor performance. As shown in Fig. 2, CMLCompiler achieves 2.3× speedup over sklearn by utilizing AVX2 through compilation. Other CML frameworks such as Spark MLlib [29] and H2O [17] face the same problems. Our solution is to propose unified abstractions to utilize DL compilers and frameworks, achieving portability and high performance.

CML and DL models are often deployed together in NLP [36], intelligent healthcare [38], recommendation systems [25], etc., especially in scenarios with limited computational power and small datasets. Many of them are deployed on heterogeneous hardware devices for online serving. As there is no unified system, different frameworks are deployed together, with three disadvantages. First, this limits portability: if one framework fails on the target device, the whole pipeline breaks.
Second, there are extra costs due to data conversions across frameworks. Third, it is hard to make optimizations across different frameworks. Using a unified framework can overcome these disadvantages, so we add support for the hybrid deployment of CML and DL in CMLCompiler.

Figure 2: This figure compares the performance of sklearn, the most widely used CML framework on GitHub [33], against CMLCompiler. Our evaluation shows that sklearn suffers from both performance and portability issues for lack of unified abstractions.

3 THE UNIFIED ABSTRACTIONS
CMLCompiler takes CML models as input and returns DL computational graphs as output, utilizing DL frameworks or compilers to compile and deploy them. At the core of CMLCompiler are two unified abstractions. Operator representations are used to represent CML operators in tensor format, as shown in Section 3.1. The extended computational graph (ECG) organizes operator representations in an optimization-friendly way and can be used to represent CML models, as shown in Section 3.2. Section 3.3 shows the supported algorithms and the extension to other algorithms.

3.1 Operator Representation
An operator representation uses a combination of one or more DL operators, with tensors as input and output, to represent a CML operator. We convert CML operators into DL operators and wrap them in the format of operator representations. Data in CML mainly has four formats: arrays, matrices, scalars, and tables [44]. Matrices and arrays are regarded as two types of tensors, whose operators can naturally be converted into DL operators. When CML models deal with tables, they take numeric data from the tables and operate on it, which can also be regarded as scalars. Hereby, we focus on the operators on scalars.
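The scalar-to-tensor conversion can be illustrated with a minimal NumPy sketch (illustrative only, not CMLCompiler's actual code): a run of scalar operations, here comparisons, collapses into a single element-wise tensor operation that SIMD units and accelerators handle natively.

```python
import numpy as np

# A batch of scalar comparisons x_i < y_i, written one by one...
x = [0.5, 2.0, 1.5]
y = [1.0, 1.0, 2.0]
scalar_result = [xi < yi for xi, yi in zip(x, y)]

# ...collapses into one element-wise tensor comparison.
X = np.array(x, dtype=np.float32)
Y = np.array(y, dtype=np.float32)
tensor_result = X < Y  # a single vectorized op; result: [True, False, True]

assert tensor_result.tolist() == scalar_result
```

The same pattern applies to the other scalar operator categories: batched assignments become one tensor assignment, and scalar aggregates become one reduction.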
+3.1.1 +Operator categories and corresponding representations. As +shown in Table 1, we classify CML operators into six categories +and provide operator representations, respectively. +(1) Assignment operators assign values to variables. If we assign +n values 𝑣1, 𝑣2, ..., 𝑣𝑛 to n variables 𝑥1, 𝑥2, ..., 𝑥𝑛, we organize +these variables and values in two tensors 𝑋 = [𝑥1,𝑥2, ...,𝑥𝑛] and +𝑉 = [𝑣1, 𝑣2, ..., 𝑣𝑛]. Then we assign tensor V to tensor X to replace +n scalar assignments. Tensor assignments benefit memory copy +which stores data in block. +(2) Swap operators swap two or more variables. These variables +can be represented in a tensor format and use reorganization oper- +ators such as 𝑟𝑒𝑠ℎ𝑎𝑝𝑒 to swap the elements. +(3) Basic arithmetic operators refers to those arithmetic calcu- +lations based on scalars, such as 𝑎𝑑𝑑, 𝑠𝑢𝑏, 𝑚𝑢𝑙 and 𝑑𝑖𝑣. We use +element-wise arithmetic operators based on tensors to replace them, +which can utilize SIMD instructions better. +(4) Aggregation operators refer to operators that calculate ag- +gregates among many scalars, such as 𝑚𝑖𝑛, 𝑚𝑎𝑥, 𝑠𝑢𝑚, and 𝑎𝑣𝑔. +Reduction operators can be used to accomplish that. +(5) Comparison operators make a comparison between scalars +and return True or False, such as 𝑙𝑒𝑠𝑠, 𝑒𝑞𝑢𝑎𝑙, and 𝑔𝑟𝑒𝑎𝑡𝑒𝑟. Compar- +isons with the same operator can be represented in a tensor format +and use an element-wise comparison to replace. +(6) Conditional operators are used to represent if-else statements, +in the form of𝑖𝑓 (𝑒𝑥𝑝𝑟1) 𝑒𝑥𝑝𝑟2𝑒𝑙𝑠𝑒 𝑒𝑥𝑝𝑟3, where𝑒𝑥𝑝𝑟1 is a compar- +ison operator. If 𝑒𝑥𝑝𝑟2 and 𝑒𝑥𝑝𝑟3 are all assignment or arithmetic +operators, we convert all three expressions into tensors. However, +the situation gets tricky if one of 𝑒𝑥𝑝𝑟2 or 𝑒𝑥𝑝𝑟3 is still a conditional +operator. We call those operators sequential conditional operators. +Sequential conditional operators may contain many conditions, +where each element in a tensor may have quite different decision +paths. 
The complexity of decision paths makes it difficult to con- +vert those operators into tensor operators. Those frequent if-else +statements perform poorly on hardware devices such as GPUs and +ASICs. Sequential conditional operators are the most delicate, and +we defer their discussion later. +3.1.2 +Conditional operators representation. We analyze those widely +used CML models and find that sequential conditional operators +mainly occur in tree-based models. So we use decision tree as an +example to introduce the representation of conditional operators in +detail, as shown in Fig. 3. We use the combination of DL operators +to represent those sequential conditional operators. +The left is a decision tree. The input data is a list of samples; +each has many features. 𝐼 refers to internal nodes, numbered in the +order of Level Order Traversal. Each internal node is a conditional +operator, making a comparison between a feature 𝐹𝑗 and a constant +threshold 𝑇𝑖. 𝐿 refers to leaf nodes, numbered in the order of In- +Order Traversal. Each leaf node is an assignment operator, reaching +which node determines the final result. +The right in Fig. 3 shows the operator representation, whose +definitions and properties of weights are shown in Table 2. Input +data multiplied by 𝑊1 returns those features used in internal nodes +in an appropriate order. Comparing with 𝑊2 returns the choice of +each internal node: 0 means left and 1 means right. These choices +are multiplied by 𝑊3 and then use 𝑎𝑟𝑔𝑚𝑎𝑥 to return the first index +of the maximum values for each row. For each sample 𝑥𝑘, that index +is the leaf node 𝑥𝑘 reaches, as proved in appendix A. +3.1.3 +The features of CML operator representations. As described +above, we represent CML operators in the format of operator rep- +resentations. These operator representations have unique features +different from operators in DL models. +First, the weights of DL operators and CML operator represen- +tations have different meanings. 
The weights in DL models are all learnable parameters. Without approximate optimizations such as pruning and quantization, those weights are dense, and the data type (dtype) should be float32 to ensure accuracy. Many weights of CML operator representations have other meanings, such as representing the structure of conditional operators. Those weights are sparse and can naturally be expressed as low-precision dtypes such

Table 1: The summary of operator representations. Each operator representation represents a CML operator. Scalars are marked as lower-case letters, while tensors are marked as upper-case letters. EW is short for element-wise.

Assignment: x1 ← v1; x2 ← v2; ...; xn ← vn → Assignment: X = [x1, x2, ..., xn]; V = [v1, v2, ..., vn]; X ← V
Swap: x1 ← x2; x2 ← x1 → Reorganization: X = [x1, x2]; reshape(X)
Basic arithmetic: x1 + y1; x2 + y2; ...; xn + yn → EW arithmetic: X = [x1, ..., xn]; Y = [y1, ..., yn]; X + Y
Aggregation: sum(x1, x2, ..., xn) → Reduction: X = [x1, x2, ..., xn]; sum(X)
Comparison: x1 < y1; x2 < y2; ...; xn < yn → EW comparison: X = [x1, ..., xn]; Y = [y1, ..., yn]; X < Y
Conditional: if (expr1) expr2 else expr3 → described in Section 3.1.2

Figure 3: An example of conditional operator representation in a decision tree, a typical classical machine learning model. F, T, I, and L refer to features, thresholds, internal nodes, and leaf nodes. W1, W2, and W3 are the weights of DL operators, whose definitions and properties are shown in Table 2; matmul is short for matrix multiplication. [The tree's internal nodes I1–I4 test F5 < T1, F1 < T2, F4 < T3, and F2 < T4, leading to leaves L1–L5; the operator pipeline is input → matmul(W1) → greater(W2) → matmul(W3) → argmax → output.]
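To make the scheme of Fig. 3 concrete, here is a minimal NumPy sketch for a hypothetical three-leaf tree (the tree, features, and thresholds are invented for illustration; the real compiler emits DL operators rather than NumPy calls):

```python
import numpy as np

# Hypothetical tree: I1 tests f0 < 2 (true -> I2, false -> leaf L3);
#                    I2 tests f1 < 3 (true -> L1, false -> L2).
W1 = np.array([[1, 0],                # feature-to-internal-node map (0/1)
               [0, 1]], dtype=np.float32)
W2 = np.array([2.0, 3.0], dtype=np.float32)   # threshold per internal node
W3 = np.array([[0, 0, 1],             # 0: leaf in left subtree of node, 1: otherwise
               [0, 1, 1]], dtype=np.float32)

def predict_leaf(X):
    F = X @ W1                            # gather the feature each node tests
    C = (F > W2).astype(np.float32)       # 0: go left, 1: go right
    return np.argmax(C @ W3, axis=1)      # first max index = reached leaf

X = np.array([[1, 1], [1, 5], [5, 0]], dtype=np.float32)
print(predict_leaf(X))                    # -> [0 1 2], i.e., L1, L2, L3
```

The three samples take the three distinct root-to-leaf paths, and the matmul/greater/matmul/argmax pipeline recovers the same leaves a pointer-chasing traversal would.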
Table 2: The properties of weights in Fig. 3. N_S, N_F, N_I, and N_L refer to the number of samples, features, internal nodes, and leaf nodes, respectively. Input ∈ R^(N_S×N_F) means N_S samples, each with N_F features. W1 ∈ {0,1}^(N_F×N_I) captures the relationship between features and internal nodes. W2 ∈ R^(N_I) holds the thresholds used in internal nodes. W3 ∈ {0,1}^(N_I×N_L) represents the structure between internal nodes and leaf nodes. Output ∈ N^(N_S) returns the leaf node index each sample reaches. Dtype is the data type of the weights. Sparsity is the ratio of non-zero data to all data in the weights.

W1[i][j] = 1 if F_i ∈ Condition(I_j), 0 otherwise;  dtype: bool;  sparsity: 1/N_F
W2[i] = Threshold(I_i);  dtype: float32;  sparsity: 1
W3[i][j] = 0 if L_j ∈ LeftSubTree(I_i), 1 otherwise;  dtype: bool;  sparsity: [1/2, 1 − 1/N_L]

as bool. These naturally sparse features bring the optimizations described in Section 4.3.2.

Second, the frequent operators in DL and CML are not the same. Almost all operators in DL take float32 as input and return float32 as output. CML uses many comparison operators, such as less, equal, and greater, which rarely occur in DL models. Those comparison operators take float or integer input and return bool tensors, bringing remarkable changes in the dtype of input and output, which can be used to make optimizations as described in Section 4.3.1. Both DL and CML models use indices operators, which compare input and return indices, such as argsort and argmax. Those indices operators have mathematical properties that can be used to make graph-level optimizations, as described in Section 4.3.3. These optimizations can be ignored in DL models with dozens or hundreds of layers but are helpful for CML models with fewer layers.

3.2 Extended Computational Graph
This section introduces the extended computational graph (ECG), which organizes operator representations in an optimization-friendly way and can be used to represent CML models.
The ECG is an extension of the DL computational graph. In general, a DL computational graph is represented as a directed graph where nodes represent operations on tensors or program inputs and edges represent data dependencies between operations [7]. From the perspective of DL frameworks and compilers, computational graphs are dense and float32 by default, as in neural network models. Using approximate optimizations like pruning and quantization brings sparse and low-precision data to all operators and weights. These optimizations cause a decrease in accuracy and bring extra computation, such as calibration. When we convert CML operators to operator representations, part of the converted operators and weights are naturally sparse and low-precision. Using DL computational graphs to represent CML models directly is not precise enough and ignores many optimization opportunities arising from the data type and sparsity features. So we extend the computational graph in DL systems into the extended computational graph (ECG) as the unified abstraction for CML models.

Before introducing the ECG, we first present more details about data type (dtype) and sparsity. We define the following partial order relation over the dtypes used in our work:

float32 > int32/float16 > int16 > int8 > int4 > bool

Table 3: Operators used in ECGs

Comparison: less, equal, greater, less_equal
Indices: argmax, argmin, argsort, argwhere
Monotonic: sigmoid, softmax, relu, tanh, exp
Reduction: sum, max, min, avg, all, any
Arithmetic: gemm, conv, pool

A lower dtype can be converted into a higher dtype without accuracy loss, while the backward conversion, which loses accuracy, is forbidden. Using lower-dtype computation, such as int8 matmul, can speed up execution and reduce memory usage. However, there are many limitations to dtype optimization.
For example, the inputs of the same operator should have the same dtype; thus, the dtype of an operator depends on the largest dtype among its inputs. Besides, many hardware devices have extended instructions for specific dtypes. For example, an Intel processor speeds up int8 computation using AVX instructions, while bool cannot benefit from them. Considering the complexity of dtype optimization, we add dtype as a property of the ECG.

Sparsity is defined as the ratio of non-zero data to all data. If the sparsity of some data is relatively small, we take it as sparse data and store it in compressed sparse row (CSR) format. Using sparse operators to handle sparse data can perform better than dense operators. Taking advantage of sparsity greatly influences optimization, so we add sparsity as another property of the ECG.

We classify the inputs of an operator into two categories: intermediate results and weights. Intermediate results are other operators' outputs and can only be handled during runtime. Input data is the first intermediate result in an ECG, while output data is the last. Intermediate results are represented as {sparsity, dtype, tensor}. If we want to change the dtype of an intermediate result, we add a dtype-converting operator to the ECG.

Weights are model parameters loaded from trained models. Weights can be handled both during compilation and at runtime, and a proper transformation during compilation can reduce runtime costs. Weights are represented as {sparsity, smallest_dtype, actual_dtype, tensor}. Smallest_dtype is the smallest dtype that represents the weight without accuracy loss; actual_dtype is the dtype actually used. Smallest_dtype depends on the property of the weight, while actual_dtype is fixed based on smallest_dtype and the operators. As shown in Fig. 3, W1 represents the relationship between input features and internal nodes of a decision tree, which is a 0-1 matrix. The smallest_dtype of W1 is bool.
However, W1 is multiplied by input data with a dtype of float32. If we chose bool as the actual_dtype, W1 would be converted to float32 during runtime. To reduce execution time at runtime, we convert W1 to float32 during compilation, so we set actual_dtype as float32 rather than bool.

Table 4: Supported Algorithms

Preprocessing algorithms: Binarizer, LabelBinarizer, Normalizer, MaxAbsScaler, MinMaxScaler, StandardScaler, RobustScaler, PolynomialFeatures, LabelEncoder
Feature selectors: SelectKBest, VarianceThreshold
Linear models: LogisticRegression, LogisticRegressionCV, Perceptron, RidgeClassifier, RidgeClassifierCV, SGDClassifier, LinearRegression, Ridge, RidgeCV, SGDRegressor
Tree-based models: DecisionTreeClassifier, DecisionTreeRegressor, ExtraTreeClassifier, ExtraTreeRegressor, RandomForestClassifier, RandomForestRegressor, ExtraTreesClassifier, ExtraTreesRegressor, GradientBoostingClassifier, GradientBoostingRegressor
Support vector machines: LinearSVC, LinearSVR, NuSVR, SVR

Operators are represented in the form of {weights, intermediate_results, use_sparse, type, dtype, DL_operator}. Weights and intermediate_results are the inputs of the operator. Use_sparse is a flag for whether to use the sparse operator, which is closely related to the sparse operator replacing optimization described in Section 4.3.2. Operator type is the type of the operator. As shown in Table 3, we divide the operators used in ECGs into five categories. Comparison operators refer to those operators that compare two tensors and return bool tensors. Indices operators refer to those operators that return tensors' indices based on specific conditions. These two kinds of operators are dtype-lowering operators, whose output dtype is smaller than their input dtype. Models without those operators, such as most DL models, use the same dtype through the whole graph, where dtype optimizations cannot be applied without approximate optimization.
CML models make heavy use of those operators, which gives wide applicability to the dtype rewriting optimization described in Section 4.3.1. Monotonic operators refer to those operators f that meet the following condition:

∀ x1 ≤ x2 ⟹ f(x1) ≤ f(x2)

A series of monotonic operators followed by an indices operator is mathematically equivalent to the indices operator alone. This property provides more optimizations, as described in Section 4.3.3. Reduction operators calculate aggregates over their input. Arithmetic operators refer to other arithmetic calculations. Operator dtype is the operator's data type, such as int8 matmul or float32 matmul. It depends on the dtypes of the weights and intermediate_results. DL_operator is the native definition of the operator in DL computational graphs, which we use to translate an ECG into a DL computational graph.

3.3 Supported Algorithms and Extension to Other Algorithms
CMLCompiler currently supports 35 CML algorithms, as shown in Table 4, covering most of the popular CML algorithms [34]. Our work can also be extended to other algorithms, such as clustering and matrix decomposition. Most CML algorithms use operators categorized in Section 3.1.1, each of which can be converted to a corresponding operator representation — our low-level abstraction — guaranteeing our extensibility. We take K-means as an example.

Figure 4: The CMLCompiler architecture. [The Operator Converter produces operator representations, the Model Parser builds the extended computational graph, the Graph Optimizer emits an optimized ECG, and the Graph Translator outputs a DL computational graph; the unified abstractions underpin all four modules.]

K-means uses basic arithmetic operators to calculate the distances between nodes, which can be converted to element-wise arithmetic operators, and uses aggregation operators for clustering, which can be converted to reduction operators. When all operators of a CML algorithm are converted to operator representations, it can utilize our work to compile and optimize.
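As a sketch of that extension, the K-means assignment step maps onto element-wise arithmetic plus reduction and indices operators (hypothetical data; NumPy stands in for the generated tensor operators):

```python
import numpy as np

# Hypothetical points and centroids for the cluster-assignment step.
points = np.array([[0., 0.], [1., 0.], [9., 9.]], dtype=np.float32)
centroids = np.array([[0., 0.], [10., 10.]], dtype=np.float32)

# Basic arithmetic -> element-wise ops (broadcast subtract and square);
# aggregation -> reduction (sum over features) plus an indices op (argmin).
diff = points[:, None, :] - centroids[None, :, :]   # (n_points, k, n_features)
dist2 = (diff * diff).sum(axis=2)                   # squared distances
labels = dist2.argmin(axis=1)                       # nearest centroid
print(labels)                                       # -> [0 0 1]
```

No per-sample loop or if-statement survives the conversion, so the same graph runs unchanged on any backend the DL compiler targets.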
4 DESIGN AND IMPLEMENTATION
This section illustrates the design and implementation of CMLCompiler, as shown in Fig. 4. We build our framework on the two unified abstractions, and it includes four parts. The Operator Converter converts CML operators into operator representations, as shown in Section 4.1. The Model Parser organizes those operator representations in an optimization-friendly way and uses ECGs to represent CML models, as shown in Section 4.2. The Graph Optimizer makes graph-level optimizations, as described in Section 4.3. An optimized ECG is converted into a DL computational graph by the Graph Translator, as shown in Section 4.4. DL frameworks or compilers take the DL computational graphs as input, make further optimizations, and compile them into executable modules for deployment. Section 4.5 shows the mixed usage of CML and DL. Section 4.6 shows the implementation details.

4.1 Operator Converter
The Operator Converter traverses the operators in CML models and converts each of them into an operator representation. Operators based on matrices and arrays are converted into DL operators directly. Scalar-based operators are converted into DL operators based on their categories, according to Section 3.1. The converted DL operators are wrapped into operator representations.

4.2 Model Parser
The Model Parser converts operator representations into an ECG, as shown in Algorithm 1. Operators in an operator representation are initialized as nodes of the ECG, whose data structure is defined in Section 3.2. Operator.weights and operator.intermediate_results are set according to data dependencies, and edges are built between nodes. Operator.use_sparse and operator.dtype are initialized as False and Unknown, respectively. Operator.type is set according to the operator type defined in Table 3. Then the weights and intermediate results are initialized. Weight.sparsity is set as the ratio of non-zero data to all data in the weight, which is known during compilation.
Weight.smallest_dtype is set as the smallest dtype without accuracy loss, and weight.actual_dtype is initialized to the same value. Intermediate_result.sparsity and intermediate_result.dtype are set according to the operator. When all operators have been visited, the ECG is established.

Algorithm 1 Model Parser
Input: Operator Representation
Output: Extended Computational Graph ECG
for operator in Operator Representation do
    Initialize operator as an ECG node
    Set operator.weights and operator.intermediate_results according to data dependencies and build edges between nodes
    operator.use_sparse ← False
    operator.type ← operator type
    operator.dtype ← Unknown
    for weight in operator.weights do
        weight.sparsity ← the ratio of non-zero data to all data
        weight.smallest_dtype ← the smallest dtype without accuracy loss
        weight.actual_dtype ← weight.smallest_dtype
    end for
    for ir in operator.intermediate_results do
        Set ir.sparsity and ir.dtype according to the operator
    end for
end for

4.3 Graph Optimizer
The Graph Optimizer performs graph-level optimizations, applying functionally equivalent transformations to ECGs. These optimizations are based on the features of CML models and do not influence accuracy. There are three graph rewriting optimizations: dtype rewriting, sparse operator replacing, and redundant elimination.

4.3.1 Dtype rewriting. Dtype rewriting replaces high-precision computation with low-precision computation that is faster and uses less memory. As analyzed in Section 3.1.3, many weights used in CML can be represented as bool or int8. Besides, the comparison operators and indices operators widely used in CML are dtype-lowering operators. The intermediate results after those operators are bool or int8. When intermediate data and weights can both be expressed in a low-precision dtype, the corresponding operators can be converted into low-precision computation as well.

As shown in Fig.
5a, the top is the ECG of a decision tree before optimization; many details are hidden. Weight 𝑊3 represents the relationship between the leaf nodes and internal nodes of the decision tree and is a matrix containing only 0 and 1, so the smallest_dtype of 𝑊3 is bool. The output of the greater operator has dtype bool as well, so the following matrix multiplication (matmul) operator can use bool rather than float32. Intel processors speed up int8 computation using AVX instructions, while bool cannot benefit from that feature, so we convert the dtype of the matmul to int8 according to the hardware specification. The bottom of Fig. 5a shows the ECG after graph rewriting: white weights and operators use float32, while gray ones use int8.

[Figure 5 shows three ECG rewriting examples on operator chains of matmul, greater, matmul, and argmax (or matmul, add, softmax, and argmax) with weights 𝑊1, 𝑊2, 𝑊3: (a) Dtype Rewriting, (b) Sparse Operator Replacing, and (c) Redundant Elimination.]
Figure 5: Graph rewriting optimizations. Dtype rewriting converts float32 operators and weights into low precision. Sparse operator replacing converts dense operators and weights into sparse ones. Redundant elimination removes redundant operators.

Now we introduce the dtype rewriting principle in detail. Algorithm 2 shows the procedure of dtype rewriting:
(1) Visit all operators in the ECG. For each operator, the dtype is set to the largest dtype among all its inputs. After that, the operator dtype is converted to the dtype that best utilizes the hardware's SIMD instructions.
We keep a list of hardware specifications to modulate the operator dtype. To guarantee accuracy, the dtype can never get smaller. We then modulate the operator implementation based on the operator dtype.
(2) Once the operator dtype is fixed, we set the input dtypes. The dtype of each weight is set to that of the operator, eliminating dtype conversions at runtime. The dtype of intermediate results cannot be converted at compile time, so we add a dtype-converting operator, i.e., cast, before the operator.

We emphasize the differences between dtype rewriting for CML models and model quantization for DL models. Quantization is an approximate algorithm for DL models that decreases accuracy and brings extra computation, such as calibration. Dtype rewriting for CML models is based on the properties of CML and converts the dtypes of operators and weights with no accuracy loss and no extra computation.

Algorithm 2 Dtype Rewriting
Input: ECG 𝐺, hardware configuration 𝐻
Output: Optimized ECG 𝐺′
for operator in 𝐺 do
    operator.dtype ← largest dtype in operator.weights and operator.intermediate_results
    Modulate operator.dtype based on 𝐻
    Modulate operator.DL_operator based on operator.dtype
    for weight in operator.weights do
        weight.actual_dtype ← operator.dtype
    end for
    for data in operator.intermediate_results do
        if data.dtype < operator.dtype then
            Add cast(data, operator.dtype) before operator
        end if
    end for
end for

4.3.2 Sparse operator replacing. Replacing dense operators with sparse ones can bring speedups as well. Algorithm 3 shows the procedure of sparse operator replacing. The sparsity of input data is not known until runtime, while the sparsity of weights is known at compile time, so we convert the data format of the weights rather than the input data. Hardware devices differ in their support for sparse operators; for example, CPUs benefit from sparse computation while GPUs see little effect.
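Because weight sparsity is known at compile time, it can be measured directly on the trained model's parameters before choosing a layout. The following sketch illustrates the compile-time decision with NumPy/SciPy; the function names and the threshold value are illustrative, not CMLCompiler's API.

```python
# Compile-time sparsity check, sketched with NumPy/SciPy.
# weight_sparsity/maybe_to_csr and the 0.1 threshold are illustrative names/values.
import numpy as np
from scipy import sparse

def weight_sparsity(w: np.ndarray) -> float:
    """Ratio of non-zero entries to all entries, as in the ECG's weight.sparsity."""
    return np.count_nonzero(w) / w.size

def maybe_to_csr(w: np.ndarray, threshold: float = 0.1):
    """Store the weight in CSR format when it is sparse enough (a compile-time decision)."""
    if weight_sparsity(w) < threshold:
        return sparse.csr_matrix(w), True    # switch to a sparse matmul implementation
    return w, False                          # keep the dense implementation

w = np.zeros((4, 4)); w[0, 1] = 1.0          # a 4x4 weight with a single non-zero entry
mat, use_sparse = maybe_to_csr(w)
print(weight_sparsity(w), use_sparse)        # → 0.0625 True
```

In the real system the threshold would come from the hardware specification, as the text below describes, rather than being a fixed constant.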
So we set a threshold based on the hardware specification. If a weight's sparsity is smaller than the threshold, we store the weight in compressed sparse row (CSR) format and convert the corresponding operator into a sparse implementation. An example is shown in Fig. 5b, where we convert 𝑊1 and the corresponding matmul to sparse.

Algorithm 3 Sparse Operator Replacing
Input: ECG 𝐺, threshold 𝑇
Output: Optimized ECG 𝐺′
for operator in 𝐺 do
    for weight in operator.weights do
        if weight.sparsity < 𝑇 then
            Store weight in CSR format
            operator.use_sparse ← True
            Convert operator.DL_operator into a sparse implementation
        end if
    end for
end for

4.3.3 Redundant elimination. Redundant elimination removes operators that do not influence the final results due to their mathematical properties. For example, a series of monotonic operators followed by an indices operator is mathematically equivalent to the indices operator alone. Algorithm 4 shows the procedure of redundant elimination. For each operator in an ECG, we check its operator type. If a monotonic operator is followed by another monotonic operator, we fuse them. If a monotonic operator is followed by an indices operator, we eliminate the monotonic operator. An example is shown in Fig. 5c, where the softmax before argmax is eliminated.

Algorithm 4 Redundant Elimination
Input: Extended Computational Graph 𝐺
Output: Optimized ECG 𝐺′
for operator in 𝐺 do
    if operator.type == "monotonic" then
        Check the next operator operator′
        if operator′.type == "monotonic" then
            Merge operator and operator′
        else if operator′.type == "indices" then
            Eliminate operator
        end if
    end if
end for

[Figure 6 contrasts a cross-framework implementation of separate DL and CML models with CMLCompiler's single ECG for hybrid models.]
Figure 6: CMLCompiler uses a single ECG to represent a CML and DL mixed pipeline.

4.4 Graph Translator
Graph Translator converts the optimized ECG into a DL computational graph, choosing the proper implementation based on the ECG
and hardware specification information. DL frameworks or compilers, like TVM, take DL computational graphs as input, apply further optimizations, and finally compile them into executable modules.

4.5 Hybrid Deployment of CML and DL with a Unified Framework
We convert CML and DL hybrid applications under a unified framework to reduce the cost of switching frameworks and to create opportunities for end-to-end optimization, as shown in Fig. 6. We load models from PyTorch and sklearn and convert them into ECG subgraphs. We build edges according to data dependencies and merge those subgraphs into a single ECG. We can then apply the optimizations both in our work and in DL compilers. Finally, we compile and deploy the result on diverse hardware devices.

4.6 Implementation
Owing to its benefits in portability and performance, we implement CMLCompiler on top of TVM. The intermediate representations and transforms are all written in Python. We read trained models from CML frameworks such as sklearn and convert them into operator representations, implemented as TVM Relay functions with their weights stored in TVM arrays. We wrap those Relay functions in the form of ECGs. After the optimizations in Section 4.3, we convert the ECGs into TVM IRModules. We then leverage TVM for further optimizations and compile to executable modules for specific hardware targets. We use cross-compilation to support a broad spectrum of hardware devices, deploying on a lightweight runtime based on the TVM runtime to make inference on various hardware devices.

5 EVALUATION
This section summarizes the evaluation. Section 5.1 describes the experimental setup. Section 5.2 evaluates the performance of the graph rewriting optimizations based on ECGs. Section 5.3 compares our work with the state-of-the-art frameworks. Section 5.4 evaluates the hybrid deployment of CML and DL.
5.1 Experimental Setup
We use a server node equipped with two Xeon E5-2620 V3 (Haswell) CPUs, an Nvidia Titan RTX GPU, and 64 GB of memory for the CPU and GPU experiments. Each CPU contains six physical cores. The GPU contains 4608 CUDA cores and 24 GB of memory. The operating system is Ubuntu 16.04, and the other software includes TVM 0.8, PyTorch 1.8.1, hummingbird 0.3.1, scikit-learn 1.0.1, and CUDA 10.2. For the IoT experiments, we use a Raspberry Pi 4B running the Raspbian 10 operating system with the same software versions. We use YearPrediction [12] as the dataset, with 515,345 samples and 90 features. We use 80% of the data to train models and 20% to make inference. We run all experiments five times and report the average as the final result. We test hummingbird [30] with both of its backends (PyTorch and TVM) and select the best results.

5.2 Optimizations
This section evaluates the graph rewriting optimizations based on ECGs, as described in Section 4.3. These optimizations, namely dtype rewriting, sparse operator replacing, and redundant elimination, work together and produce cumulative effects. They can also coexist with the optimizations in TVM. We choose four typical tree models: DecisionTreeClassifier, RandomForestClassifier, ExtraTreeClassifier, and ExtraTreesClassifier, as well as two typical linear models: LogisticRegression and SGDClassifier. According to their distinct patterns, we evaluate dtype rewriting and sparse operator replacing on the tree models, and redundant elimination on the linear models.

Fig. 7a shows the results on CPU. For tree models, our work without optimizations achieves a 1.31x-2.54x speedup compared with sklearn; this is due to our abstractions, which exploit TVM's optimizations, including better utilization of SIMD instructions and multiple cores.
Using dtype rewriting and sparse operator replacing brings 1x-1.21x and 1.26x-1.75x speedups, respectively, and 1.27x-2.11x together, which is 1.84x-4.44x faster than sklearn. For linear models, our work without optimizations runs slower than sklearn; however, redundant elimination brings a 1.22x-1.51x speedup, making the optimized result 1.06x-1.14x faster than sklearn.

Fig. 7b shows the results on the IoT device. Note that sklearn lacks sufficient support for IoT devices. For example, 64-bit tree models trained on servers cannot be executed on a Raspberry Pi 4B with a 32-bit operating system, and retraining those models in 32-bit format on the Raspberry Pi 4B from scratch takes more time, so we regard those models as unsupported, marked with a cross, and take our work without optimizations as the baseline. Dtype rewriting and sparse operator replacing bring 1.01x-1.33x and 1.23x-2.3x speedups, respectively, and 1.49x-2.53x together. For linear models, our work without optimizations achieves a 1.71x-1.84x speedup, and redundant elimination brings a further 1.08x-1.14x speedup, 1.95x-1.98x faster than sklearn. On GPU, computation accounts for less than 20% of the total time, so these optimizations play a limited role there.

[Figure 7 shows bar charts comparing base, DR, DR+SOR, and RE on (a) CPU and (b) Raspberry Pi 4B.]
Figure 7: Graph rewriting optimizations. "base" means our work without optimizations. "DR" means only using dtype rewriting. "DR+SOR" means using both dtype rewriting and sparse operator replacing. "RE" means using redundant elimination.
In conclusion, CML models benefit from both TVM's optimizations and ours, achieving obvious speedups.

5.3 Overall Results
This section evaluates 14 typical CML algorithms, covering preprocessing algorithms, linear models, tree-based models, and SVMs, on CPU, GPU, and IoT devices, compared with state-of-the-art frameworks including sklearn, Intel extension for sklearn [20], and hummingbird. It contains two parts: batch experiments over all data and query experiments over a single record.

The differences between the accuracy of CMLCompiler and sklearn are all less than 1 × 10−5, which means that our work does not affect accuracy. The outputs on different hardware are all the same, so we focus on performance hereinafter. Table 5 shows the performance of the batch experiments. On CPU, our work delivers the best performance on 12 of the 14 algorithms, achieving a 1.02x-10.57x speedup compared with sklearn, a 1.14x-4.38x speedup compared with hummingbird, and a 1.44x-8.47x speedup compared with Intel sklearn. On GPU, our work achieves competitive performance compared with hummingbird, performing better on 11 of the 14 algorithms with a 1.11x-3.31x speedup. On the IoT device, a Raspberry Pi 4B, our work performs better on 13 of the 14 algorithms, with a 1.28x-5.09x speedup.

Table 6 shows the performance of the query experiments over a single record. On CPU, our work achieves the best performance on 11 of the 14 algorithms, with a 1.36x-170.68x speedup compared with sklearn, a 1.56x-4.47x speedup compared with hummingbird, and a 1.31x-169.43x speedup compared with Intel sklearn. On GPU, our work performs better than hummingbird on 10 of the 14 algorithms, with a 1.41x-4.64x speedup. Our latency on the Raspberry Pi 4B does not differ much from sklearn's; however, we perform better in model support.

In conclusion, we have advantages in both the batch and query experiments on all three hardware devices.
Many models in sklearn support only a single core and cannot fully utilize SIMD instructions. We outperform sklearn and Intel sklearn thanks to better utilization of multiple cores and SIMD instructions through compilation. Hummingbird uses both PyTorch and TVM as backends, with TVM performing better in most cases in our evaluation. It implements models in PyTorch and converts them into TVM using the from_pytorch API. This conversion is neither direct nor efficient enough, causing a performance decrease. Besides, hardware information is lost during the conversion, which limits TVM's optimizations for hummingbird. In contrast, we map ECGs to Relay operators directly and select the most efficient implementation based on the ECGs and hardware specification information. Additionally, our abstractions enable the optimizations described in Section 4.3, bringing up to a 2.53x speedup; together, these effects achieve better performance.

5.4 Hybrid Deployment of CML and DL
This section presents three hybrid deployment cases of CML and DL. As the baselines, without a unified framework, a DL framework is used to implement the DL algorithms, while a CML framework is used to implement the CML algorithms. Our work converts the CML and DL models into a single ECG, applies optimizations, and compiles it to diverse hardware devices. We measure the latency of a single query, which is essential in real-world applications.

5.4.1 Sentence Sentiment Classification. The first case is sentence sentiment classification, which uses BERT to embed English sentences and logistic regression to classify them [36]. We use BERT-tiny [3] as the pre-trained BERT model and SST2 [40] as the dataset. The baseline implements BERT-tiny in pytorch-transformers [45] and logistic regression in sklearn. The result is shown in Fig. 8a. Our work achieves a 1.67x speedup on the server CPUs. Pytorch-transformers cannot be installed on IoT devices, so the baseline cannot run on the Raspberry Pi 4B.
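For reference, this kind of cross-framework baseline, where a DL embedder feeds a CML classifier, can be sketched as follows. A random projection stands in for BERT-tiny so the sketch stays self-contained; the function and variable names are illustrative, not the paper's actual baseline code.

```python
# Sketch of a two-stage cross-framework baseline: an embedding model feeds a
# CML classifier. A random "embedder" stands in for BERT-tiny here; the real
# baseline embeds sentences with pytorch-transformers instead.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def embed(sentences, dim=128):
    """Stand-in for BERT-tiny: map each sentence to a fixed-size vector."""
    return rng.normal(size=(len(sentences), dim))

train_x = embed(["good movie", "bad movie", "great plot", "awful acting"])
train_y = np.array([1, 0, 1, 0])
clf = LogisticRegression().fit(train_x, train_y)   # the CML stage, run in sklearn

query = embed(["a single sentence to score"])
print(clf.predict(query).shape)                    # → (1,)  one label per query
```

Each query must cross the framework boundary between the embedder and sklearn; CMLCompiler instead fuses both stages into one ECG and compiles them together.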
The latency of our work on the Raspberry Pi 4B is 18 milliseconds, which is acceptable in most use cases.

Table 5: Execution time for batch experiments over all data on CPU (12 cores), GPU, and IoT devices (taking the Raspberry Pi 4B as an example), in milliseconds. SK, HB, and Intel are short for scikit-learn, hummingbird, and Intel extension for sklearn, respectively. "-" means unsupported.

Algorithm              | CPU: SK   HB    Intel  Our  | GPU: HB  Our | IoT: SK  Our
Binarizer              |      97   31    77     9    |      19   6  |     634   126
Normalizer             |      25   33    15     15   |       7   5  |     241   168
MinMaxScaler           |      19   31    13     8    |      21   6  |     199   148
RobustScaler           |      28   32    25     12   |      19   5  |     343   156
LinearRegression       |      12   18    4      6    |       6   7  |      61   116
LogisticRegression     |      98   104   137    86   |       7   7  |    1889   952
SGDClassifier          |      94   98    139    88   |       9   7  |    1886   969
DecisionTreeClassifier |      33   48    23     16   |       7   5  |       -   99
DecisionTreeRegressor  |       7   19    3      15   |       7   6  |       -   211
RandomForestClassifier |    2130   885   2003   601  |      20   -  |       -   5820
ExtraTreeClassifier    |      29   -     26     16   |       -   6  |       -   206
ExtraTreesClassifier   |   10022   2522  9421   2256 |      99   -  |       -   47959
LinearSVC              |      92   122   152    77   |       9   6  |    1896   930
LinearSVR              |      39   26    34     5    |       6   5  |     323   112

Table 6: Latency for query experiments over one single record on CPU (12 cores), GPU, and IoT devices (taking the Raspberry Pi 4B as an example), in milliseconds. The symbols are the same as in Table 5.

Algorithm              | CPU: SK   HB     Intel   Our  | GPU: HB  Our  | IoT: SK  Our
Binarizer              |     0.2    0.26   0.34   0.09 |     0.93 0.64 |     0.44  0.59
Normalizer             |     0.32   0.26   0.28   0.11 |     0.25 0.68 |     0.59  0.41
MinMaxScaler           |     0.15   0.31   0.14   0.09 |     0.91 0.63 |     0.33  0.37
RobustScaler           |     0.14   0.22   0.14   0.11 |     1.02 0.72 |     0.37  0.37
LinearRegression       |     0.24   0.35   0.32   0.1  |     0.91 0.55 |     0.52  0.69
LogisticRegression     |     0.35   0.36   0.29   0.19 |     3.29 0.71 |     0.67  2.59
SGDClassifier          |     0.4    0.35   0.29   0.23 |     2.93 0.67 |     0.68  0.65
DecisionTreeClassifier |     0.24   1.62   0.27   0.36 |     3.01 0.8  |     -     0.9
DecisionTreeRegressor  |     0.22   0.22   0.25   0.38 |     1.03 0.72 |     -     0.88
RandomForestClassifier |   103.96   1.6  103.2    0.61 |     2.56 -    |     -     1.05
ExtraTreeClassifier    |     0.23   -      0.4    0.47 |     -    -    |     -     1.81
ExtraTreesClassifier   |   205.27  12.74 204.25   1.73 |     2.41 -    |     -     3.11
LinearSVC              |     0.4    0.37   0.45   0.19 |     2.71 0.61 |     0.65  1.07
LinearSVR              |     0.31   0.34   0.37   0.09 |     0.91 0.62 |     0.54  0.91

[Figure 8 shows bar charts of single-query latency for the baseline and our work on CPU.]
Figure 8: The latency of a single query for CML and DL mixed pipelines: (a) BERT + LogisticRegression for sentence sentiment classification, (b) SimpleDNN + RandomForest for radiographic image analysis, and (c) GBDT + Wide&Deep for click-through prediction. All three baselines cannot run on IoT devices.

5.4.2 Radiographic Image Analysis. The second case uses Deep Hybrid Learning [38] to analyze radiographic images, using a simple DNN for feature engineering and CML models such as random forests for classification. We use CheXpert [21] as the dataset. The baseline implements the DNN in PyTorch and the random forest in sklearn. The result is shown in Fig. 8b. Our work achieves a 2.3x speedup on the server CPUs. The pre-trained random forest cannot run on IoT devices, while our work solves this problem through cross-compilation.

5.4.3 Click Through Rate Prediction.
The third case is click-through rate prediction as used in the recommendation systems of our anonymous industry partners, using GBDT [15] to extract features and the Wide & Deep [9] model to make predictions. We use Avazu 1 as the dataset. The baseline implements GBDT in sklearn and Wide & Deep in PyTorch. The result is shown in Fig. 8c. We achieve a 3.04x speedup on the server CPUs. The GBDT model in the baseline cannot be executed on IoT devices, while our latency on IoT devices is only 5.06 ms.

6 RELATED WORK
CML frameworks and libraries can be divided into three categories. (1) General-purpose solutions use one framework to support various models. Scikit-learn [32] is the most widely used CML framework on GitHub [33]. Spark MLlib [29] is an extension to Spark [48]. H2O [17] uses MapReduce [11] to support both CML and DL. There are many other works, such as Shogun [41] and RapidMiner [19]. These frameworks support only CPUs and suffer from severe performance and portability issues. (2) Specific-purpose solutions focus on one type of model. LibLinear [14] supports logistic regression and linear SVMs. LibSVM [5] focuses on SVMs. These works are limited to CPUs. Some other works attempt to support various hardware devices. XGBoost [6] implements the gradient boosting decision tree algorithm on CPUs and GPUs. Muhsen Owaida et al. [31] bring XGBoost to FPGAs. Toby Sharp [39] implements decision trees and forests on GPUs. These frameworks support only a narrow range of models and solve the portability problem to a certain extent. (3) Extensions based on DL attempt to utilize DL frameworks to support CML models. TF-DF [43] is a decision forest library based on TensorFlow but is limited to CPUs. It is implemented in an ad-hoc way, losing the portability of DL frameworks. Hummingbird [30] is a general-purpose solution based on PyTorch, adding support for GPUs.
They utilize the abstractions in DL frameworks directly without digging into the features of CML, missing many optimization opportunities.

7 CONCLUSION
This paper presented the design and implementation of CMLCompiler, a unified compiler for classical machine learning (CML) inference. CMLCompiler proposed two unified abstractions: operator representations and extended computational graphs (ECGs). Operator representations convert CML operators into tensor formats, while an ECG organizes these converted operators in an optimization-friendly way. The CMLCompiler framework performs conversion and graph optimization based on the two unified abstractions, then outputs an optimized computational graph to deep learning compilers or frameworks. CMLCompiler also enables the hybrid deployment of CML and DL within a unified framework. Our implementation of CMLCompiler on top of TVM demonstrates its effectiveness, achieving up to a 4.38x speedup on CPU, a 3.31x speedup on GPU, and a 5.09x speedup on IoT devices compared to the state-of-the-art solutions: scikit-learn, Intel sklearn, and hummingbird. Our support for CML and DL mixed pipelines achieves up to a 3.04x speedup compared with cross-framework implementations.

1 https://www.kaggle.com/c/avazu-ctr-prediction

A PROOF
Here we prove that the $argmax$ in Fig. 3 returns the leaf node that is finally reached. $N_S$, $N_I$, and $N_L$ refer to the number of samples, internal nodes, and leaf nodes, respectively. $I$ refers to the internal nodes, numbered in level-order traversal, and $L$ refers to the leaf nodes, numbered in in-order traversal. $X \in \{0,1\}^{N_S \times N_I}$ is the result of the comparison with $W_2$. Each row $X_i \in \{0,1\}^{N_I}$ records the choices for one sample $x$ and is denoted $\hat{x}$. $W_3 \in \{0,1\}^{N_I \times N_L}$ can be regarded as a list of column vectors $\{\hat{L}_1, \hat{L}_2, \ldots, \hat{L}_{N_L}\}$, where $\hat{L}_i \in \{0,1\}^{N_I}$ represents the relationship between leaf node $L_i$ and all internal nodes.
Then we must prove that $argmax(\hat{x} \cdot \hat{L}_1, \hat{x} \cdot \hat{L}_2, \ldots, \hat{x} \cdot \hat{L}_{N_L})$ returns the leaf that $x$ reaches, where $argmax$ returns the index of the maximum value in the input tensor and returns the first such index if the maximum appears more than once. Assume that $L_k$ is the leaf node that $x$ reaches.

First we prove that $\hat{x} \cdot \hat{L}_k$ is the maximum value in $\{\hat{x} \cdot \hat{L}_1, \hat{x} \cdot \hat{L}_2, \ldots, \hat{x} \cdot \hat{L}_{N_L}\}$. We define the path from the root node $I_0$ to $L_k$ as the decision path (DP) of $x$, where "right" ("left") means choosing the right (left) branch at an internal node.

$$\hat{L}_k[i] = \begin{cases} 0, & \text{left is chosen at } I_i \text{ and } I_i \in \text{decision path} \\ 1, & \text{otherwise} \end{cases}$$

$$\hat{x}[i] = \begin{cases} 0, & x \text{ chooses left at } I_i \\ 1, & x \text{ chooses right at } I_i \end{cases}$$

Because $x$ reaches $L_k$, if $\hat{x}[i] = 1$ and $I_i \in$ decision path, then $\hat{L}_k[i] = 1$. Therefore:

$$\begin{aligned}
\hat{x} \cdot \hat{L}_k &= \sum_i \hat{x}[i] \cdot \hat{L}_k[i] \\
&= \sum_{i,\ \text{right at } I_i} 1 \cdot \hat{L}_k[i] + \sum_{i,\ \text{left at } I_i} 0 \cdot \hat{L}_k[i] \\
&= \sum_{i,\ \text{right at } I_i,\ I_i \in DP} \hat{L}_k[i] + \sum_{i,\ \text{right at } I_i,\ I_i \notin DP} \hat{L}_k[i] \\
&= \sum_{i,\ \text{right at } I_i,\ I_i \in DP} 1 + \sum_{i,\ \text{right at } I_i,\ I_i \notin DP} 1 \\
&= \text{the count of 1s in } \hat{x}
\end{aligned}$$

Since $\hat{x}$ and $\{\hat{L}_1, \hat{L}_2, \ldots, \hat{L}_{N_L}\}$ are all 0-1 vectors, the count of 1s in $\hat{x}$ is the maximum possible value of $\{\hat{x} \cdot \hat{L}_1, \ldots, \hat{x} \cdot \hat{L}_{N_L}\}$.

Next we prove that $k$ is the first index attaining the maximum. Assume there exists a leaf node $L_t$ ahead of $L_k$ that satisfies $\hat{x} \cdot \hat{L}_t = maximum$. Since $L_t$ is ahead of $L_k$ and the leaf nodes are numbered in in-order traversal, there exists an internal node $I_i$ such that $L_t$ is in the left subtree of $I_i$ and $L_k$ is in the right subtree of $I_i$. $x$ passes through $I_i$ and reaches $L_k$ in its right subtree, so $\hat{x}[i] = 1$. $L_t$ is in the left subtree of $I_i$, so $\hat{L}_t[i] = 0$ and $\hat{x}[i]$ is multiplied by zero. Hence $\hat{x} \cdot \hat{L}_t < maximum$, contradicting the assumption. So $k$ is the first index that attains the maximum.
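The construction proved above can be checked numerically on a tiny hand-built tree. The matrices below are written out by hand for a depth-2 tree with internal nodes $I_0, I_1, I_2$ and in-order leaves $L_0, \ldots, L_3$; this mirrors the pipeline of Fig. 3 but is an illustrative sketch, not CMLCompiler code.

```python
# Numeric check of the tree-to-tensor construction on a hand-built depth-2 tree.
# Internal nodes: I0 (root, tests feature 0), I1/I2 (test feature 1), all with
# threshold 0.5. Leaves L0..L3 are numbered in in-order traversal.
import numpy as np

W1 = np.array([[1, 0, 0],         # I0 tests feature 0
               [0, 1, 1]])        # I1 and I2 test feature 1
W2 = np.array([0.5, 0.5, 0.5])    # thresholds of I0, I1, I2
W3 = np.array([[0, 0, 1, 1],      # W3[i][k] = 0 iff leaf k's path turns left at Ii
               [0, 1, 1, 1],
               [1, 1, 0, 1]])

def predict_leaf(X):
    x_hat = (X @ W1 > W2).astype(np.int8)   # choices at each internal node
    return np.argmax(x_hat @ W3, axis=1)    # first index of the maximum

X = np.array([[0.8, 0.2],   # right at I0, then left at I2  -> leaf L2
              [0.2, 0.9]])  # left at I0, then right at I1  -> leaf L1
print(predict_leaf(X))      # → [2 1]
```

Both samples land on the leaf a conventional tree traversal would reach, and the tie between equal-scoring leaves is broken by argmax taking the first index, exactly as the proof requires.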
REFERENCES
[1] Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, Manjunath Kudlur, Josh Levenberg, Rajat Monga, Sherry Moore, Derek G. Murray, Benoit Steiner, Paul Tucker, Vijay Vasudevan, Pete Warden, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: A system for large-scale machine learning. In Proceedings of the 12th USENIX Conference on Operating Systems Design and Implementation, OSDI'16, pages 265-283, USA, 2016. USENIX Association.
[2] Amazon. The total cost of ownership (TCO) of Amazon SageMaker. https://pages.awscloud.com/rs/112-TZM-766/images/Amazon_SageMaker_TCO_uf.pdf, 2020.
[3] Prajjwal Bhargava, Aleksandr Drozd, and Anna Rogers. Generalization in NLI: Ways (not) to go beyond simple heuristics, 2021.
[4] Leo Breiman. Random forests. Machine Learning, 45(1):5-32, 2001.
[5] Chih-Chung Chang and Chih-Jen Lin. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology (TIST), 2(3):1-27, 2011.
[6] Tianqi Chen and Carlos Guestrin. XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 785-794, 2016.
[7] Tianqi Chen, Thierry Moreau, Ziheng Jiang, Lianmin Zheng, Eddie Yan, Meghan Cowan, Haichen Shen, Leyuan Wang, Yuwei Hu, Luis Ceze, Carlos Guestrin, and Arvind Krishnamurthy. TVM: An automated end-to-end optimizing compiler for deep learning. In Proceedings of the 13th USENIX Conference on Operating Systems Design and Implementation, OSDI'18, pages 579-594, USA, 2018. USENIX Association.
[8] Tianqi Chen, Lianmin Zheng, Eddie Yan, Ziheng Jiang, Thierry Moreau, Luis Ceze, Carlos Guestrin, and Arvind Krishnamurthy. Learning to optimize tensor programs. Advances in Neural Information Processing Systems, 31, 2018.
[9] Heng-Tze Cheng, Levent Koc, Jeremiah Harmsen, Tal Shaked, Tushar Chandra, Hrishi Aradhye, Glen Anderson, Greg Corrado, Wei Chai, Mustafa Ispir, et al. Wide & deep learning for recommender systems. In Proceedings of the 1st Workshop on Deep Learning for Recommender Systems, pages 7-10, 2016.
[10] Scott Cyphers, Arjun K. Bansal, Anahita Bhiwandiwalla, Jayaram Bobba, Matthew Brookhart, Avijit Chakraborty, William Constable, Christian Convey, Leona Cook, Omar Kanawi, Robert Kimball, Jason Knight, Nikolay Korovaiko, Varun Kumar Vijay, Yixing Lao, Christopher R. Lishka, Jaikrishnan Menon, Jennifer Myers, Sandeep Aswath Narayana, Adam Procter, and Tristan J. Webb. Intel nGraph: An intermediate representation, compiler, and executor for deep learning. CoRR, abs/1801.08058, 2018.
[11] Jeffrey Dean and Sanjay Ghemawat. MapReduce: Simplified data processing on large clusters. Communications of the ACM, 51(1):107-113, 2008.
[12] Dheeru Dua and Casey Graff. UCI machine learning repository, 2017.
[13] EasonLiao. CudaTree. https://github.com/EasonLiao/CudaTree, 2022.
[14] Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang-Rui Wang, and Chih-Jen Lin. LIBLINEAR: A library for large linear classification. The Journal of Machine Learning Research, 9:1871-1874, 2008.
[15] Jerome H. Friedman. Greedy function approximation: A gradient boosting machine. Annals of Statistics, pages 1189-1232, 2001.
[16] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. Advances in Neural Information Processing Systems, 27, 2014.
[17] H2O.ai. H2O: Scalable machine learning platform. https://github.com/h2oai/h2o-3, 2022.
[18] Kim Hazelwood, Sarah Bird, David Brooks, Soumith Chintala, Utku Diril, Dmytro Dzhulgakov, Mohamed Fawzy, Bill Jia, Yangqing Jia, Aditya Kalro, James Law, Kevin Lee, Jason Lu, Pieter Noordhuis, Misha Smelyanskiy, Liang Xiong, and Xiaodong Wang.
Applied machine learning at Facebook: A datacenter infrastructure perspective. In 2018 IEEE International Symposium on High Performance Computer Architecture (HPCA), pages 620-629, 2018.
[19] Markus Hofmann and Ralf Klinkenberg. RapidMiner: Data Mining Use Cases and Business Analytics Applications. CRC Press, 2016.
[20] Intel. Intel® Extension for Scikit-learn. https://intel.github.io/scikit-learn-intelex/, 2022.
[21] Jeremy Irvin, Pranav Rajpurkar, Michael Ko, Yifan Yu, Silviana Ciurea-Ilcus, Chris Chute, Henrik Marklund, Behzad Haghgoo, Robyn Ball, Katie Shpanskaya, et al. CheXpert: A large chest radiograph dataset with uncertainty labels and expert comparison. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 590-597, 2019.
[22] Zhihao Jia, Oded Padon, James Thomas, Todd Warszawski, Matei Zaharia, and Alex Aiken. TASO: Optimizing deep learning computation with automatic generation of graph substitutions. In Proceedings of the 27th ACM Symposium on Operating Systems Principles, pages 47-62, 2019.
[23] Chris Lattner, Mehdi Amini, Uday Bondhugula, Albert Cohen, Andy Davis, Jacques Pienaar, River Riddle, Tatiana Shpeisman, Nicolas Vasilache, and Oleksandr Zinenko. MLIR: A compiler infrastructure for the end of Moore's law. arXiv preprint arXiv:2002.11054, 2020.
[24] Zewen Li, Fan Liu, Wenjie Yang, Shouheng Peng, and Jun Zhou. A survey of convolutional neural networks: Analysis, applications, and prospects. IEEE Transactions on Neural Networks and Learning Systems, 2021.
[25] Xiaoliang Ling, Weiwei Deng, Chen Gu, Hucheng Zhou, Cui Li, and Feng Sun. Model ensemble for click prediction in Bing search ads. In Proceedings of the 26th International Conference on World Wide Web Companion, pages 689-698, 2017.
[26] Wei-Yin Loh. Classification and regression trees. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 1(1):14-23, 2011.
[27] Xiaofei Ma, Zhiguo Wang, Patrick Ng, Ramesh Nallapati, and Bing Xiang. Universal text representation from BERT: An empirical study. arXiv preprint arXiv:1910.07973, 2019.
[28] Larry Medsker and Lakhmi C. Jain. Recurrent Neural Networks: Design and Applications. CRC Press, 1999.
[29] Xiangrui Meng, Joseph Bradley, Burak Yavuz, Evan Sparks, Shivaram Venkataraman, Davies Liu, Jeremy Freeman, DB Tsai, Manish Amde, Sean Owen, Doris Xin, Reynold Xin, Michael J. Franklin, Reza Zadeh, Matei Zaharia, and Ameet Talwalkar. MLlib: Machine learning in Apache Spark. The Journal of Machine Learning Research, 17(1):1235-1241, 2016.
[30] Supun Nakandala, Karla Saur, Gyeong-In Yu, Konstantinos Karanasos, Carlo Curino, Markus Weimer, and Matteo Interlandi. A tensor compiler for unified machine learning prediction serving. In 14th USENIX Symposium on Operating Systems Design and Implementation (OSDI 20), pages 899-917, 2020.
[31] Muhsen Owaida, Hantian Zhang, Ce Zhang, and Gustavo Alonso. Scalable inference of decision tree ensembles: Flexible design for CPU-FPGA platforms. In 2017 27th International Conference on Field Programmable Logic and Applications (FPL), pages 1-8. IEEE, 2017.
[32] Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, and Édouard Duchesnay. Scikit-learn: Machine learning in Python. The Journal of Machine Learning Research, 12:2825-2830, 2011.
[33] Fotis Psallidas, Yiwen Zhu, Bojan Karlas, Matteo Interlandi, Avrilia Floratou, Konstantinos Karanasos, Wentao Wu, Ce Zhang, Subru Krishnan, Carlo Curino, and Markus Weimer. Data science through the looking glass and what we found there. CoRR, abs/1912.09536, 2019.
[34] Susmita Ray. A quick review of machine learning algorithms.
In 2019 Inter- +national conference on machine learning, big data, cloud and parallel computing +(COMITCon), pages 35–39. IEEE, 2019. +[35] James Reed, Zachary DeVito, Horace He, Ansley Ussery, and Jason Ansel. torch. +fx: Practical program capture and transformation for deep learning in python. +Proceedings of Machine Learning and Systems, 4:638–651, 2022. +[36] Nils Reimers and Iryna Gurevych. Sentence-bert: Sentence embeddings using +siamese bert-networks. arXiv preprint arXiv:1908.10084, 2019. +[37] Shayle R Searle and Marvin HJ Gruber. Linear models. John Wiley & Sons, 2016. +[38] Duhita Sengupta, Sk Nishan Ali, Aditya Bhattacharya, Joy Mustafi, Asima +Mukhopadhyay, and Kaushik Sengupta. Nuclear morphology optimized deep +hybrid learning (numodril): A novel architecture for accurate diagnosis/prognosis +of ovarian cancer. bioRxiv, 2020. +[39] Toby Sharp. Implementing decision trees and forests on a gpu. In European +conference on computer vision, pages 595–608. Springer, 2008. +[40] Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher Manning, +Andrew Ng, and Christopher Potts. Parsing With Compositional Vector Gram- +mars. In EMNLP. 2013. +[41] Sören Sonnenburg, Gunnar Rätsch, Sebastian Henschel, Christian Widmer, Jonas +Behr, Alexander Zien, Fabio de Bona, Alexander Binder, Christian Gehl, and +Vojtěch Franc. The shogun machine learning toolbox. The Journal of Machine +Learning Research, 11:1799–1802, 2010. +[42] Shan Suthaharan. Support vector machine. In Machine learning models and +algorithms for big data classification, pages 207–235. Springer, 2016. +[43] TensorFlow. +Tensorflow +decision +forests. +https://www.tensorflow.org/decision_forests, 2022. +[44] Jake VanderPlas. Python data science handbook: Essential tools for working with +data. " O’Reilly Media, Inc.", 2016. 
+[45] Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement De- +langue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, +Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, +Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, +and Alexander M. Rush. Transformers: State-of-the-art natural language pro- +cessing. In Proceedings of the 2020 Conference on Empirical Methods in Natural +Language Processing: System Demonstrations, pages 38–45, Online, October 2020. +Association for Computational Linguistics. +[46] Carole-Jean Wu, David Brooks, Kevin Chen, Douglas Chen, Sy Choudhury, Marat +Dukhan, Kim Hazelwood, Eldad Isaac, Yangqing Jia, Bill Jia, Tommer Leyvand, +12 + +CMLCompiler: A Unified Compiler for Classical Machine Learning +Hao Lu, Yang Lu, Lin Qiao, Brandon Reagen, Joe Spisak, Fei Sun, Andrew Tulloch, +Peter Vajda, Xiaodong Wang, Yanghan Wang, Bram Wasti, Yiming Wu, Ran Xian, +Sungjoo Yoo, and Peizhao Zhang. Machine learning at facebook: Understanding +inference at the edge. In 2019 IEEE International Symposium on High Performance +Computer Architecture (HPCA), pages 331–344, 2019. +[47] Doris Xin, Hui Miao, Aditya Parameswaran, and Neoklis Polyzotis. Production +machine learning pipelines: Empirical analysis and optimization opportunities. +In Proceedings of the 2021 International Conference on Management of Data, pages +2639–2652, 2021. +[48] Matei Zaharia, Mosharaf Chowdhury, Michael J Franklin, Scott Shenker, and Ion +Stoica. Spark: cluster computing with working sets. In Proceedings of the 2nd +USENIX conference on Hot topics in cloud computing, 2010. +[49] Lianmin Zheng, Chengfan Jia, Minmin Sun, Zhao Wu, Cody Hao Yu, Ameer +Haj-Ali, Yida Wang, Jun Yang, Danyang Zhuo, Koushik Sen, Joseph E. Gonzalez, +and Ion Stoica. Ansor: Generating High-Performance Tensor Programs for Deep +Learning. USENIX Association, USA, 2020. 
Besides, many real-world applications adopt hybrid deployments of CML and DL [2] to guarantee high accuracy and low latency [25, 27, 36, 38], e.g., DL models for feature embedding and CML models for classification or regression.

DL compilers, like TVM [7, 10, 23], provide a structural approach to tackling the portability issue: they facilitate the wide deployment of DL models on a broad spectrum of devices like GPUs, FPGAs, and IoT devices while guaranteeing appreciable performance. DL compilers use computational graphs as high-level abstractions, supporting a large variety of DL models. Meanwhile, DL compilers propose low-level abstractions such as tensor representation to generate executable code. For newly released hardware, the vendor just needs to provide hardware primitives instead of a sophisticated high-performance library that is prohibitively costly.

arXiv:2301.13441v1 [cs.LG] 31 Jan 2023

Based on the tensor representation and computational graph abstractions, many optimizations [8, 22, 49] are proposed to boost performance; e.g., they provide sophisticated support for CPU processors, which have different architectures, diverse core numbers, extended instructions, and cache sizes.

However, despite its popularity and importance, CML suffers from severe portability and performance issues. State-of-the-practice and state-of-the-art CML frameworks [17, 29, 32] provide ad-hoc solutions, implementing each CML model on every hardware device case by case due to the lack of unified abstractions. These ad-hoc solutions raise considerable difficulties in developing a general-purpose framework and optimization techniques that achieve optimal performance for every model.
They either lack support for or only partially support various hardware devices, such as GPUs, FPGAs, and IoT devices. In addition, adding support for a model on a new hardware device needs great effort, more than several thousand lines of code [13], let alone hundreds or thousands of models and devices. Moreover, they also face performance issues. Even on CPUs, the most popular CML platform, performance is unsatisfactory due to the lack of specific optimizations for advanced characteristics like multiple cores and SIMD. The hybrid deployment of CML and DL models faces even more severe problems.

Our intuition is to enable CML to leverage DL's well-defined unified abstractions and highly mature compilers, optimization technologies, and frameworks. Unfortunately, it is not a trivial task. There are significant distinctions in operators and models between CML and DL. DL operators focus on tensors, while CML handles arrays, matrices, scalars, and tables. DL models are all neural network models, while CML models, such as decision trees and SVMs, can hardly be represented as neural networks. Most DL models are expressible as flat sequences of operations without if-statements [35], but if-statements occur frequently in CML models. Existing DL abstractions, such as tensor representation and computational graphs, cannot directly represent CML operators and models. Those distinctions determine that CML can hardly leverage the DL ecosystems directly. Several efforts attempt to support CML models on DL frameworks; e.g., TensorFlow [1] provides a CPU-based decision forest library, TF-DF [43].
However, these attempts do not solve the generality and portability issues: they only support a narrow range of models and lack support for GPUs and IoT devices.

This paper focuses on CML inference as the first step, considering its great significance (inference occupies nearly half of the total cost [2]) and its wide applications in online serving, the Internet of Things (IoT), etc. [18, 46]. We will extend our work to CML training in the near future.

As illustrated in Fig. 1, we propose a unified compiler, CMLCompiler, for CML inference, which enables CML to leverage the mature DL ecosystems. At the core of CMLCompiler are two unified abstractions, operator representations and extended computational graphs (ECGs), together with a compiler framework. Operator representations convert CML operators into tensor formats, while an ECG organizes these converted operators in an optimization-friendly way.
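To make the operator-representation idea concrete, consider a trained logistic-regression classifier. Its prediction step, which a CML framework implements with per-sample scalar arithmetic, can be phrased entirely as dense tensor operators that any DL framework or compiler can consume. The sketch below uses NumPy as a stand-in for a tensor backend, with made-up weights; it illustrates the conversion idea rather than CMLCompiler's actual code:

```python
import numpy as np

# Illustrative (not trained) parameters: 3 features, 2 classes.
W = np.array([[0.8, -0.3],
              [-0.5, 0.9],
              [0.1, 0.2]])      # shape (features, classes)
b = np.array([0.05, -0.05])     # per-class bias

def predict(X):
    """Class prediction expressed as matmul + add + argmax, all tensor ops."""
    scores = X @ W + b          # one dense GEMM, the core DL operator
    return scores.argmax(axis=1)

X = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 1.0]])
print(predict(X))
```

Once a model is in this form, it is just a small computational graph (matmul, add, argmax) that a DL compiler can fuse and tune for the target device.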
The two unified abstractions define how to convert and translate CML models into DL computational graphs, which can be recognized and executed by DL frameworks and compilers. The CMLCompiler framework consists of four modules: an operator converter, a model parser, a graph optimizer, and a graph translator. It performs the conversion and graph optimization based on the two unified abstractions, then outputs an optimized DL computational graph to DL compilers or frameworks. CMLCompiler can also optimize mixed pipelines of CML and DL. As TVM provides portability and sophisticated optimizations, we choose to implement CMLCompiler on TVM. Currently, it supports up to 35 CML models.

This paper makes the following contributions:
• We propose two unified abstractions, operator representations and extended computational graphs, to represent CML operators and models.
• We present the design of CMLCompiler, a unified compiler for CML inference, based on these abstractions. The CMLCompiler framework performs the conversion and graph optimization based on the two unified abstractions, then outputs an optimized DL computational graph to DL compilers or frameworks. CMLCompiler enables the hybrid deployment of CML and DL within a unified framework.
• We implement CMLCompiler on top of TVM, achieving up to 4.38x speedup on CPU, 3.31x speedup on GPU, and 5.09x speedup on IoT devices, compared to the state-of-the-art solutions: scikit-learn, intel sklearn, and hummingbird. Our support for CML and DL mixed pipelines achieves up to 3.04x speedup compared with cross-framework implementations.
The remainder of the paper is organized as follows. Section 2 introduces the motivation. Section 3 introduces the unified abstractions. Section 4 shows the design and implementation. Section 5 presents our evaluation. Section 6 illustrates the related work. Finally, we draw a conclusion in Section 7.

2 MOTIVATION

CML faces severe portability and performance issues. Fig. 2 compares the performance of sklearn, the most widely used CML framework on GitHub [33], against CMLCompiler leveraging DL compilers. We find that sklearn cannot support GPUs and only supports IoT devices partially. Adding support for a new hardware device needs great effort due to the ad-hoc implementations. For example, adding support for random forest on GPU needs 2.7k lines of code [13]. Many more models and hardware devices need to be supported, requiring hundreds or thousands of times more effort. Moreover, due to the lack of compilation support for CPU features, sklearn has poor performance. As shown in Fig. 2, CMLCompiler achieves a 2.3x speedup over sklearn by utilizing AVX2 through compilation.
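Part of that gap is simply the difference between a per-sample interpreted loop and one batched tensor operation that a compiler can map onto SIMD lanes. The toy comparison below (NumPy, illustrative only; the speedup figures above come from the paper's evaluation, not from this snippet) shows the two formulations computing the same result:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 16))   # 1000 samples, 16 features
w = rng.standard_normal(16)

def dot_loop(X, w):
    """Per-sample scalar loop: interpreter overhead, no SIMD."""
    out = np.empty(len(X))
    for i, row in enumerate(X):
        acc = 0.0
        for a, b in zip(row, w):
            acc += a * b
        out[i] = acc
    return out

def dot_batched(X, w):
    """The same computation as one batched tensor op, vectorizable."""
    return X @ w

# Both formulations agree; only the second exposes SIMD and multicore.
assert np.allclose(dot_loop(X, w), dot_batched(X, w))
```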
Other CML frameworks, such as Spark MLlib [29] and H2O [17], face the same problems. Our solution is to propose unified abstractions that utilize DL compilers and frameworks, achieving portability and high performance.

CML and DL models are often deployed together in NLP [36], intelligent healthcare [38], recommendation systems [25], etc., especially in scenarios with limited computational power and small datasets. Many of them are deployed on heterogeneous hardware devices for online serving. As there is no unified system, different frameworks are deployed together, which has three disadvantages. First, it limits portability: if one framework fails on the target device, the whole pipeline fails. Second, there are extra costs due to data conversions across frameworks. Third, it is hard to make optimizations across different frameworks. Using a unified framework can overcome these disadvantages, so we add support for the hybrid deployment of CML and DL in CMLCompiler.

Figure 2: This figure compares the performance of sklearn, the most widely used CML framework on GitHub [33], against CMLCompiler. Our evaluation shows that sklearn suffers from both performance and portability issues for lack of unified abstractions.

3 THE UNIFIED ABSTRACTIONS

CMLCompiler takes CML models as input and returns DL computational graphs as output, utilizing DL frameworks or compilers to compile and deploy them. At the core of CMLCompiler are two unified abstractions.
Operator representations are used to represent CML operators in tensor format, as shown in Section 3.1. The extended computational graph (ECG) organizes operator representations in an optimization-friendly way and can be used to represent CML models, as shown in Section 3.2. Section 3.3 shows the supported algorithms and extensions for other algorithms.

3.1 Operator Representation
An operator representation uses a combination of one or more DL operators, with tensors as input and output, to represent a CML operator. We convert CML operators into DL operators and wrap them in the format of operator representations.
Data in CML mainly has four formats: arrays, matrices, scalars, and tables [44]. Matrices and arrays are regarded as two types of tensors, whose operators can naturally be converted into DL operators. When CML models deal with tables, they take numeric data from the tables and operate on it, so table data can also be regarded as scalars. Hereby, we focus on the operators on scalars.

3.1.1 Operator categories and corresponding representations. As shown in Table 1, we classify CML operators into six categories and provide operator representations for each.

(1) Assignment operators assign values to variables. If we assign n values 𝑣1, 𝑣2, ..., 𝑣𝑛 to n variables 𝑥1, 𝑥2, ..., 𝑥𝑛, we organize these variables and values in two tensors 𝑋 = [𝑥1, 𝑥2, ..., 𝑥𝑛] and 𝑉 = [𝑣1, 𝑣2, ..., 𝑣𝑛]. Then we assign tensor V to tensor X to replace n scalar assignments. Tensor assignments benefit from memory copies that store data in blocks.

(2) Swap operators swap two or more variables. These variables can be represented in a tensor format, and reorganization operators such as 𝑟𝑒𝑠ℎ𝑎𝑝𝑒 can be used to swap the elements.

(3) Basic arithmetic operators refer to arithmetic calculations on scalars, such as 𝑎𝑑𝑑, 𝑠𝑢𝑏, 𝑚𝑢𝑙, and 𝑑𝑖𝑣. We replace them with element-wise arithmetic operators on tensors, which can better utilize SIMD instructions.

(4) Aggregation operators calculate aggregates over many scalars, such as 𝑚𝑖𝑛, 𝑚𝑎𝑥, 𝑠𝑢𝑚, and 𝑎𝑣𝑔. Reduction operators can be used to accomplish that.

(5) Comparison operators make a comparison between scalars and return True or False, such as 𝑙𝑒𝑠𝑠, 𝑒𝑞𝑢𝑎𝑙, and 𝑔𝑟𝑒𝑎𝑡𝑒𝑟. Comparisons with the same operator can be represented in a tensor format and replaced by an element-wise comparison.

(6) Conditional operators represent if-else statements of the form 𝑖𝑓 (𝑒𝑥𝑝𝑟1) 𝑒𝑥𝑝𝑟2 𝑒𝑙𝑠𝑒 𝑒𝑥𝑝𝑟3, where 𝑒𝑥𝑝𝑟1 is a comparison operator. If 𝑒𝑥𝑝𝑟2 and 𝑒𝑥𝑝𝑟3 are all assignment or arithmetic operators, we convert all three expressions into tensors. However, the situation gets tricky if one of 𝑒𝑥𝑝𝑟2 or 𝑒𝑥𝑝𝑟3 is itself a conditional operator. We call those operators sequential conditional operators. Sequential conditional operators may contain many conditions, where each element in a tensor may have quite different decision paths. The complexity of decision paths makes it difficult to convert those operators into tensor operators, and the frequent if-else statements perform poorly on hardware devices such as GPUs and ASICs. Sequential conditional operators are the most delicate, and we defer their discussion to Section 3.1.2.
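Categories (1) to (5) all amount to replacing n independent scalar operations with a single tensor operator. A minimal sketch of this tensorization, using NumPy arrays as a stand-in for DL tensors (the values are invented for illustration):

```python
import numpy as np

# Hypothetical scalar data, organized into tensors as in category (1).
X = np.array([1.0, 2.0, 3.0])
Y = np.array([4.0, 5.0, 6.0])

# (3) Basic arithmetic: n scalar adds become one element-wise add,
# which the backend can vectorize with SIMD instructions.
added = X + Y               # [5.0, 7.0, 9.0]

# (4) Aggregation: a scalar running sum becomes one reduction operator.
total = X.sum()             # 6.0

# (5) Comparison: n scalar comparisons become one element-wise
# comparison returning a bool tensor.
mask = X < Y                # [True, True, True]
```

The bool result of the comparison, rather than float32, is exactly the dtype change Section 3.1.3 exploits for optimization.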
3.1.2 Conditional operators representation. We analyze widely used CML models and find that sequential conditional operators mainly occur in tree-based models. So we use the decision tree as an example to introduce the representation of conditional operators in detail, as shown in Fig. 3. We use a combination of DL operators to represent those sequential conditional operators. The left of Fig. 3 is a decision tree. The input data is a list of samples, each with many features. 𝐼 refers to internal nodes, numbered in level-order traversal. Each internal node is a conditional operator, making a comparison between a feature 𝐹𝑗 and a constant threshold 𝑇𝑖. 𝐿 refers to leaf nodes, numbered in in-order traversal. Each leaf node is an assignment operator, and the leaf node a sample reaches determines the final result. The right of Fig. 3 shows the operator representation, whose weight definitions and properties are shown in Table 2. The input data multiplied by 𝑊1 returns the features used in the internal nodes in the appropriate order. Comparing with 𝑊2 returns the choice of each internal node: 0 means left and 1 means right. These choices are multiplied by 𝑊3, and then 𝑎𝑟𝑔𝑚𝑎𝑥 returns the first index of the maximum values of each row. For each sample 𝑥𝑘, that index is the leaf node 𝑥𝑘 reaches, as proved in Appendix A.

3.1.3 The features of CML operator representations. As described above, we represent CML operators in the format of operator representations. These operator representations have unique features different from the operators in DL models.

First, the weights of DL operators and CML operator representations have different meanings. The weights in DL models are all learnable parameters. Without approximate optimizations such as pruning and quantization, those weights are dense, and the data type (dtype) should be float32 to ensure accuracy.
Many weights of CML operator representations have other meanings, such as representing the structure of conditional operators. Those weights are sparse and can naturally be expressed as low-precision dtypes such as bool. The natural sparse features bring optimizations described in Section 4.3.2.

Table 1: The summary of operator representation. Each operator representation represents a CML operator. Scalars are marked as lower-case letters, while tensors are marked as upper-case letters. EW is short for element-wise.

CML operators in scalar format -> Operator representation in tensor format
Assignment: 𝑥1 ← 𝑣1; 𝑥2 ← 𝑣2; ...; 𝑥𝑛 ← 𝑣𝑛 -> Assignment: 𝑋 = [𝑥1, 𝑥2, ..., 𝑥𝑛]; 𝑉 = [𝑣1, 𝑣2, ..., 𝑣𝑛]; 𝑋 ← 𝑉
Swap: 𝑥1 ← 𝑥2; 𝑥2 ← 𝑥1 -> Reorganization: 𝑋 = [𝑥1, 𝑥2]; 𝑟𝑒𝑠ℎ𝑎𝑝𝑒(𝑋)
Basic Arithmetic: 𝑥1 + 𝑦1; 𝑥2 + 𝑦2; ...; 𝑥𝑛 + 𝑦𝑛 -> EW Arithmetic: 𝑋 = [𝑥1, ..., 𝑥𝑛]; 𝑌 = [𝑦1, ..., 𝑦𝑛]; 𝑋 + 𝑌
Aggregation: 𝑠𝑢𝑚(𝑥1, 𝑥2, ..., 𝑥𝑛) -> Reduction: 𝑋 = [𝑥1, ..., 𝑥𝑛]; 𝑠𝑢𝑚(𝑋)
Comparison: 𝑥1 < 𝑦1; 𝑥2 < 𝑦2; ...; 𝑥𝑛 < 𝑦𝑛 -> EW Comparison: 𝑋 = [𝑥1, ..., 𝑥𝑛]; 𝑌 = [𝑦1, ..., 𝑦𝑛]; 𝑋 < 𝑌
Conditional: 𝑖𝑓 (𝑒𝑥𝑝𝑟1) 𝑒𝑥𝑝𝑟2 𝑒𝑙𝑠𝑒 𝑒𝑥𝑝𝑟3 -> Described in Section 3.1.2

Figure 3: An example of conditional operator representation in the decision tree, a typical classical machine learning model. The tree on the left is converted into the pipeline Input -> matmul(𝑊1) -> greater(𝑊2) -> matmul(𝑊3) -> argmax -> Output on the right. 𝐹, 𝑇, 𝐼, and 𝐿 refer to features, thresholds, internal nodes, and leaf nodes. 𝑊1, 𝑊2, and 𝑊3 are the weights of the DL operators, whose definitions and properties are shown in Table 2; matmul is short for matrix multiplication.

Table 2: The properties of weights in Fig. 3. 𝑁𝑆, 𝑁𝐹 , 𝑁𝐼 , and 𝑁𝐿 refer to the number of samples, features, internal nodes, and leaf nodes, respectively. 𝐼𝑛𝑝𝑢𝑡 ∈ R^(𝑁𝑆×𝑁𝐹) means 𝑁𝑆 samples, each with 𝑁𝐹 features. 𝑊1 ∈ {0, 1}^(𝑁𝐹×𝑁𝐼) captures the relationship between features and internal nodes. 𝑊2 ∈ R^𝑁𝐼 holds the thresholds used in internal nodes. 𝑊3 ∈ {0, 1}^(𝑁𝐼×𝑁𝐿) represents the structure between internal nodes and leaf nodes. 𝑂𝑢𝑡𝑝𝑢𝑡 ∈ N^𝑁𝑆 returns the leaf node index each sample reaches. Dtype is the data type of weights. Sparsity is the ratio of non-zero data to all data in weights.

Definition | Dtype | Sparsity
𝑊1[𝑖][𝑗] = 1 if 𝐹𝑖 ∈ 𝐶𝑜𝑛𝑑𝑖𝑡𝑖𝑜𝑛(𝐼𝑗), 0 otherwise | bool | 1/𝑁𝐹
𝑊2[𝑖] = 𝑇ℎ𝑟𝑒𝑠ℎ𝑜𝑙𝑑(𝐼𝑖) | float32 | 1
𝑊3[𝑖][𝑗] = 0 if 𝐿𝑗 ∈ 𝐿𝑒𝑓𝑡𝑆𝑢𝑏𝑇𝑟𝑒𝑒(𝐼𝑖), 1 otherwise | bool | [1/2, 1 − 1/𝑁𝐿]
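As a concrete sketch of this conversion, the following NumPy code maps a small hypothetical decision tree (two internal nodes, three leaves; the tree, thresholds, and input samples are all invented for illustration, not taken from Fig. 3) into the matmul -> comparison -> matmul -> argmax form. We use >= for the comparison so that a sample with 𝐹 equal to 𝑇 takes the right branch, consistent with the 𝐹 < 𝑇 left condition:

```python
import numpy as np

# Hypothetical tree:
#   I1: F0 < 5.0 ? go to I2 : go to leaf L3
#   I2: F1 < 2.0 ? go to leaf L1 : go to leaf L2
N_F, N_I, N_L = 2, 2, 3

# W1[i][j] = 1 iff feature Fi is tested at internal node Ij
# (bool-valued; float32 here to keep the matmul dtypes simple).
W1 = np.array([[1, 0],
               [0, 1]], dtype=np.float32)

# W2[j] = threshold of internal node Ij.
W2 = np.array([5.0, 2.0], dtype=np.float32)

# W3[i][j] = 0 iff leaf Lj lies in the left subtree of Ii, else 1.
W3 = np.array([[0, 0, 1],   # left subtree of I1 holds L1, L2
               [0, 1, 1]],  # left subtree of I2 holds L1
              dtype=np.float32)

def predict_leaf(X):
    """Return the (0-based) index of the leaf each sample reaches."""
    gathered = X @ W1              # feature value tested at each node
    choice = (gathered >= W2)      # 0 = left (F < T holds), 1 = right
    scores = choice @ W3           # accumulate per-leaf scores
    return scores.argmax(axis=1)   # first index of each row's maximum

X = np.array([[3.0, 1.0],   # left at I1, left at I2  -> L1
              [3.0, 4.0],   # left at I1, right at I2 -> L2
              [9.0, 0.0]],  # right at I1             -> L3
             dtype=np.float32)
print(predict_leaf(X))      # [0 1 2]
```

Every sample evaluates all internal nodes at once, trading a little redundant computation for branch-free tensor operators that run well on GPUs and ASICs.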
Second, the frequent operators in DL and CML are not the same. Almost all operators in DL take float32 as input and return float32 as output. CML uses many comparison operators, such as 𝑙𝑒𝑠𝑠, 𝑒𝑞𝑢𝑎𝑙, and 𝑔𝑟𝑒𝑎𝑡𝑒𝑟, which rarely occur in DL models. Those comparison operators take float or integer as input and return bool tensors, bringing remarkable changes in the dtype of input and output, which can be used to make optimizations as described in Section 4.3.1. Both DL and CML models use indices operators, which compare the input and return indices, such as 𝑎𝑟𝑔𝑠𝑜𝑟𝑡 and 𝑎𝑟𝑔𝑚𝑎𝑥. Those indices operators have mathematical properties that can be used to make graph-level optimizations, as described in Section 4.3.3. These optimizations can be ignored in DL models with dozens or hundreds of layers but are helpful for CML models with fewer layers.

3.2 Extended Computational Graph
This section introduces the extended computational graph (ECG), which organizes operator representations in an optimization-friendly way and can be used to represent CML models. ECG is an extension of the DL computational graph. In general, a DL computational graph is represented as a directed graph where nodes represent operations on tensors or program inputs and edges represent data dependencies between operations [7]. From the perspective of DL frameworks and compilers, computational graphs are dense and float32 by default, as in neural network models. Using approximate optimizations like pruning and quantization brings sparse and low-precision data to all operators and weights.
These optimizations cause a decrease in accuracy and bring extra computation, such as calibration. When we convert CML operators to operator representations, part of the converted operators and weights are naturally sparse and low-precision. Using DL computational graphs to represent CML models directly is not precise enough and ignores many optimization opportunities arising from the data type and sparsity features. So we extend the computational graph in DL systems into the extended computational graph (ECG) as the unified abstraction for CML models. Before introducing ECG, we first present more details about data type (dtype) and sparsity.
We define the partial order relation for dtypes used in our work:

float32 > int32/float16 > int16 > int8 > int4 > bool

Table 3: Operators used in ECGs

Operator Type   Examples
Comparison      less, equal, greater, less_equal
Indices         argmax, argmin, argsort, argwhere
Monotonic       sigmoid, softmax, relu, tanh, exp
Reduction       sum, max, min, avg, all, any
Arithmetic      gemm, conv, pool

A lower dtype can be converted into a higher dtype without accuracy loss, while the backward conversion would lose accuracy and is forbidden. Using lower-dtype computation, such as int8 matmul, speeds up execution and reduces memory usage. However, dtype optimization faces several constraints. For example, the inputs of the same operator must share a dtype; thus the dtype of an operator depends on the largest dtype among its inputs.
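These conversion rules can be encoded as a small rank table; a minimal sketch (the rank values are ours, and int32/float16 share a rank only as a simplification of the incomparable pair):

```python
# Ranks follow the partial order float32 > int32/float16 > int16 > int8 > int4 > bool.
DTYPE_RANK = {"bool": 0, "int4": 1, "int8": 2, "int16": 3,
              "int32": 4, "float16": 4, "float32": 5}

def can_upcast(src, dst):
    """A lower dtype may be converted to a higher one without accuracy loss;
    the backward direction is forbidden."""
    return DTYPE_RANK[dst] >= DTYPE_RANK[src]

def operator_dtype(input_dtypes):
    """All inputs of one operator must share a dtype, so the operator runs at
    the largest dtype among its inputs."""
    return max(input_dtypes, key=DTYPE_RANK.get)
```

For example, operator_dtype(["bool", "float32"]) yields "float32", so a bool weight multiplied with float32 data forces a float32 matmul.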
Besides, many hardware devices have extended instructions for specific dtypes. For example, Intel processors speed up int8 computation with AVX instructions, while bool cannot benefit from them. Considering the complexity of dtype optimization, we add dtype as a property of the ECG.

Sparsity is defined as the ratio of non-zero data to all data. If the sparsity of data is relatively small, we treat it as sparse data and store it in compressed sparse row (CSR) format. Handling such data with sparse operators can outperform dense operators. Because exploiting sparsity influences optimization greatly, we add sparsity as another property of the ECG.

We classify the inputs of an operator into two categories: intermediate results and weights.
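The sparsity property and the CSR storage decision can be illustrated with a hand-rolled conversion (a sketch in plain Python; in practice the DL compiler's sparse kernels consume this format):

```python
def sparsity(dense):
    """Ratio of non-zero entries to all entries of a row-major matrix."""
    total = sum(len(row) for row in dense)
    nonzero = sum(1 for row in dense for v in row if v != 0)
    return nonzero / total

def to_csr(dense):
    """Compressed sparse row format: non-zero values, their column indices,
    and per-row offsets into the value array."""
    data, indices, indptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0:
                data.append(v)
                indices.append(j)
        indptr.append(len(data))
    return data, indices, indptr

W = [[0, 0, 3], [0, 0, 0], [4, 0, 5]]
# sparsity(W) -> 3/9; to_csr(W) -> ([3, 4, 5], [2, 0, 2], [0, 1, 1, 3])
```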
Intermediate results are the outputs of other operators and can only be handled at runtime. The input data is the first intermediate result in an ECG, and the output data is the last. Intermediate results are represented as {sparsity, dtype, tensor}. To change the dtype of an intermediate result, a dtype-converting operator must be added to the ECG.

Weights are model parameters that can be loaded from trained models. Weights can be handled both during compilation and at runtime, and a proper transformation during compilation reduces runtime cost. Weights are represented as {sparsity, smallest_dtype, actual_dtype, tensor}. Smallest_dtype is the smallest dtype that represents the weight without accuracy loss; actual_dtype is the dtype actually used. Smallest_dtype depends on the property of the weight itself, while actual_dtype is fixed based on smallest_dtype and the consuming operators. As shown in Fig. 3, W1 represents the relationship between input features and internal nodes of a decision tree, which is a 0-1 matrix, so the smallest_dtype of W1 is bool. However, W1 is multiplied with input data whose dtype is float32. If we chose bool as the actual_dtype, W1 would be converted to float32 at runtime. To reduce execution time at runtime, we instead convert W1 to float32 during compilation, setting actual_dtype to float32 rather than bool.

Operators are represented in the form of {weights, intermediate_results, use_sparse, type, dtype, DL_operator}. Weights and intermediate_results are the inputs of an operator.
Use_sparse is a flag indicating whether the sparse version of the operator is used, which is closely related to the sparse operator replacing optimization described in Section 4.3.2. Operator type is the category of the operator. As shown in Table 3, we divide the operators used in ECGs into five categories.

Table 4: Supported Algorithms

Preprocessing Algorithms: Binarizer, LabelBinarizer, Normalizer, MaxAbsScaler, MinMaxScaler, StandardScaler, RobustScaler, PolynomialFeatures, LabelEncoder
Feature Selectors: SelectKBest, VarianceThreshold
Linear Models: LogisticRegression, LogisticRegressionCV, Perceptron, RidgeClassifier, RidgeClassifierCV, SGDClassifier, LinearRegression, Ridge, RidgeCV, SGDRegressor
Tree-based Models: DecisionTreeClassifier, DecisionTreeRegressor, ExtraTreeClassifier, ExtraTreeRegressor, RandomForestClassifier, RandomForestRegressor, ExtraTreesClassifier, ExtraTreesRegressor, GradientBoostingClassifier, GradientBoostingRegressor
Support Vector Machines: LinearSVC, LinearSVR, NuSVR, SVR
Comparison operators compare two tensors and return bool tensors. Indices operators return the indices of tensor elements that satisfy specific conditions. These two kinds of operators are dtype-lowering operators: their output dtype is smaller than their input dtype. Models without such operators, including most DL models, use the same dtype through the whole graph, where dtype optimization cannot be applied without approximation. CML models make heavy use of these operators, which gives wide applicability to the dtype rewriting optimization described in Section 4.3.1. Monotonic operators are those operators f that satisfy

∀x1 ≤ x2 ⇒ f(x1) ≤ f(x2)

A series of monotonic operators followed by an indices operator is mathematically equivalent to the indices operator alone.
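The equivalence of monotonic-then-indices chains can be checked numerically; a small sketch with sigmoid standing in for any monotonic operator:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def argmax(xs):
    return max(range(len(xs)), key=lambda i: xs[i])

# For a monotonically increasing f, argmax(f(x)) == argmax(x), so the
# monotonic operator in front of an indices operator can be dropped.
scores = [0.3, -1.2, 2.5, 0.9]
assert argmax([sigmoid(s) for s in scores]) == argmax(scores) == 2
```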
Those properties enable further optimizations, as described in Section 4.3.3. Reduction operators calculate aggregates over their input. Arithmetic operators cover the remaining arithmetic calculations. Operator dtype is the operator's data type, such as int8 matmul or float32 matmul; it depends on the dtypes of the weights and intermediate_results. DL_operator is the native definition of the operator in DL computational graphs, which we use to translate ECGs into DL computational graphs.
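Collecting the fields above, the three ECG record kinds might be sketched as dataclasses; the field names follow the paper, but the class layout and helper are our sketch, not CMLCompiler's API:

```python
from dataclasses import dataclass, field
from typing import Any, List

# Rank table mirroring the dtype partial order defined earlier.
DTYPE_RANK = {"bool": 0, "int4": 1, "int8": 2, "int16": 3,
              "int32": 4, "float16": 4, "float32": 5}

@dataclass
class Weight:
    sparsity: float             # ratio of non-zero entries, known at compile time
    smallest_dtype: str         # smallest dtype without accuracy loss
    actual_dtype: str           # dtype actually materialized during compilation
    tensor: Any = None

@dataclass
class IntermediateResult:
    sparsity: float
    dtype: str
    tensor: Any = None          # only available at runtime

@dataclass
class Operator:
    weights: List[Weight] = field(default_factory=list)
    intermediate_results: List[IntermediateResult] = field(default_factory=list)
    use_sparse: bool = False
    type: str = "Arithmetic"    # one of the five Table 3 categories
    dtype: str = "Unknown"
    DL_operator: str = ""       # native DL-graph operator used for translation

def fix_actual_dtype(weight: Weight, other_input_dtypes: List[str]) -> str:
    """Lift smallest_dtype to the largest dtype among the consuming operator's
    inputs, so no conversion is left for runtime (cf. the W1 example above)."""
    candidates = other_input_dtypes + [weight.smallest_dtype]
    weight.actual_dtype = max(candidates, key=DTYPE_RANK.get)
    return weight.actual_dtype
```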
3.3 Supported Algorithms and Extension for Other Algorithms

CMLCompiler currently supports 35 CML algorithms, as shown in Table 4, covering most of the popular CML algorithms [34]. Our work can also be extended to other algorithms, such as clustering and matrix decomposition. Most CML algorithms use operators from the categories of Section 3.1.1, each of which can be converted to a corresponding Operator Representation (our low-level abstraction), guaranteeing extensibility. We take Kmeans as an example.

Figure 4: The CMLCompiler architecture.
Kmeans uses basic arithmetic operators to calculate the distances between nodes, which can be converted to element-wise arithmetic operators, and uses aggregation operators to perform clustering, which can be converted to reduction operators. Once all operators of a CML algorithm are converted to Operator Representations, the algorithm can use our work for compilation and optimization.

4 DESIGN AND IMPLEMENTATION

This section illustrates the design and implementation of CMLCompiler, as shown in Fig. 4. We build the framework on the two unified abstractions, organized into four parts. The Operator Converter converts CML operators into operator representations, as shown in Section 4.1. The Model Parser organizes those operator representations in an optimization-friendly way and uses ECGs to represent CML models, as shown in Section 4.2. The Graph Optimizer performs graph-level optimizations, as described in Section 4.3. The optimized ECG is converted into a DL computational graph by the Graph Translator in Section 4.4. DL frameworks or compilers take the resulting DL computational graphs as input, apply further optimizations, and compile them into executable modules for deployment. Section 4.5 describes the mixed usage of CML and DL, and Section 4.6 gives the implementation details.
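The hand-off between the four parts can be sketched as a linear pipeline; the stage functions below are illustrative stubs (CMLCompiler's real interfaces are not shown here), each tagging the artifact it produces:

```python
def operator_converter(model):
    return {"stage": "operator_representations", "model": model}

def model_parser(artifact):
    return {**artifact, "stage": "ecg"}

def graph_optimizer(artifact):
    return {**artifact, "stage": "optimized_ecg"}

def graph_translator(artifact):
    return {**artifact, "stage": "dl_graph"}

def compile_cml_model(cml_model):
    """Operator Converter -> Model Parser -> Graph Optimizer -> Graph Translator;
    a DL framework or compiler then builds and runs the resulting graph."""
    artifact = operator_converter(cml_model)
    for stage in (model_parser, graph_optimizer, graph_translator):
        artifact = stage(artifact)
    return artifact
```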
4.1 Operator Converter

The Operator Converter traverses the operators in CML models and converts each into an operator representation. Operators based on matrices and arrays are converted into DL operators directly. Scalar-based operators are converted into DL operators according to their categories from Section 3.1. The converted DL operators are then wrapped into operator representations.

4.2 Model Parser

The Model Parser converts operator representations into an ECG, as shown in Algorithm 1. Each operator in an operator representation is initialized as a node of the ECG, whose data structure is defined in Section 3.2. Operator.weights and operator.intermediate_results are set according to data dependencies, and edges are built between nodes. Operator.use_sparse and operator.dtype are initialized to False and Unknown, respectively. Operator.type is set according to the operator type defined in Table 3. Then the weights and intermediate_results are initialized. Weight.sparsity is set to the ratio of non-zero data to all data in the weight, which is known during compilation. Weight.smallest_dtype is set to the smallest dtype without accuracy loss, and weight.actual_dtype is initialized to the same value. Intermediate_result.sparsity and intermediate_result.dtype are set according to the operator. When all operators have been visited, the ECG is established.

Algorithm 1 Model Parser
Input: Operator Representation
Output: Extended Computational Graph ECG
for operator in Operator Representation do
    Initialize operator as an ECG node
    Set operator.weights and operator.intermediate_results according to data dependencies and build edges between nodes
    operator.use_sparse ← False
    operator.type ← operator type
    operator.dtype ← Unknown
    for weight in operator.weights do
        weight.sparsity ← the ratio of non-zero data to all data
        weight.smallest_dtype ← the smallest dtype without accuracy loss
        weight.actual_dtype ← weight.smallest_dtype
    end for
    for ir in operator.intermediate_results do
        Set ir.sparsity and ir.dtype according to the operator
    end for
end for

4.3 Graph Optimizer

The Graph Optimizer performs graph-level optimizations as functionally equivalent transformations of the ECG.
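Algorithm 1 maps almost line-for-line onto plain Python; the sketch below uses dicts for ECG nodes and infers smallest_dtype with a deliberately simplified rule (0/1 weights become bool, everything else float32); the real inference is richer:

```python
def parse_model(operator_representations):
    """Build an ECG (here, a list of node dicts) from operator representations,
    mirroring Algorithm 1 (Model Parser)."""
    ecg = []
    for op in operator_representations:
        node = {
            "name": op["name"],
            # data dependencies double as the graph edges
            "weights": [dict(w) for w in op.get("weights", [])],
            "intermediate_results": [dict(ir) for ir in op.get("inputs", [])],
            "use_sparse": False,          # decided later by the optimizer
            "type": op["type"],           # a Table 3 category
            "dtype": "Unknown",           # decided later by dtype rewriting
        }
        for w in node["weights"]:
            values = w["tensor"]          # flat list standing in for a tensor
            w["sparsity"] = sum(1 for v in values if v != 0) / len(values)
            w["smallest_dtype"] = "bool" if set(values) <= {0, 1} else "float32"
            w["actual_dtype"] = w["smallest_dtype"]
        ecg.append(node)
    return ecg
```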
These optimizations are based on features of CML models and do not affect accuracy. There are three graph rewriting optimizations: dtype rewriting, sparse operator replacing, and redundant elimination.

4.3.1 Dtype rewriting. Dtype rewriting replaces high-precision computation with low-precision computation, which is faster and uses less memory. As analyzed in Section 3.1.3, many weights used in CML can be represented as bool or int8. Besides, the comparison and indices operators widely used in CML are dtype-lowering operators.
The intermediate results after those operators are bool or int8. When intermediate data and weights can both be expressed as a low-precision dtype, the corresponding operator can be converted to low-precision computation as well. As shown in Fig. 5a, the top is the ECG of decision trees before optimization; many details are hidden. Weight 𝑊3 represents the relationship between leaf nodes and internal nodes of decision trees, which is a matrix containing only 0 and 1. The smallest_dtype of 𝑊3 is thus bool. The output of the greater operator has a dtype of bool as well, so the following matrix multiplication (matmul) operator can use a dtype of bool rather than float32.
Intel processors speed up int8 computation using AVX instructions, while bool cannot benefit from that feature, so we convert the dtype of matmul to int8 according to the hardware specification. In Fig. 5a, the bottom is the ECG after graph rewriting: the white weights and operators use float32, while the gray ones use int8.
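The dtype chain described above can be checked with a toy numpy stand-in for the ECG (our own illustration, not CMLCompiler code): the output of greater is already bool, so the following matmul with the 0/1 matrix 𝑊3 can run in int8.

```python
import numpy as np

x  = np.array([[0.7, 0.2]], dtype=np.float32)  # one sample, two features
w2 = np.array([0.5, 0.5], dtype=np.float32)    # split thresholds
# greater is a dtype-lowering operator: its output dtype is bool
decisions = np.greater(x, w2)
# W3 relates internal nodes to leaves and contains only 0 and 1,
# so the matmul can use int8 instead of float32
w3 = np.array([[1, 0], [0, 1]], dtype=np.int8)
leaf_scores = decisions.astype(np.int8) @ w3   # int8 x int8 -> int8
prediction = np.argmax(leaf_scores, axis=1)    # indices operator
```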
(Figure 5 here: three ECG diagrams — (a) Dtype Rewriting, (b) Sparse Operator Replacing, (c) Redundant Elimination.)
Figure 5: Graph rewriting optimizations. Dtype rewriting converts float32 operators and weights into low-precision. Sparse operator replacing converts dense operators and weights into sparse.
Redundant elimination reduces redundant operators.

Now we introduce the dtype rewriting principle in detail. Algorithm 2 shows the procedure of dtype rewriting: (1) Visit all operators in the ECG. For each operator, the dtype is set to the largest dtype of all its inputs. After that, the operator dtype is converted to the dtype that best utilizes the hardware's SIMD instructions; we keep a list of hardware specifications to modulate the operator dtype. To guarantee accuracy, the dtype cannot get smaller. Then we modulate the operator implementation based on the operator dtype. (2) Once the operator dtype is fixed, we set the input dtypes.
The dtype of weights is set the same as the operator's, reducing dtype conversions at runtime. The dtype of intermediate results cannot be converted during compilation, so we add a dtype-converting operator, i.e., cast, before the operator.

We explain the differences between dtype rewriting for CML models and model quantization for DL models. Quantization is an approximate algorithm for DL models that causes a decrease in accuracy and brings extra computation, such as calibration. Dtype rewriting for CML models is based on the properties of CML, converting the dtype of operators and weights with no accuracy decrease and no extra computation.

Algorithm 2 Dtype Rewriting
Input: ECG 𝐺, hardware configuration 𝐻
Output: Optimized ECG 𝐺′
for operator in 𝐺 do
    operator.dtype ← largest dtype in operator.weights and operator.intermediate_results
    Modulate operator.dtype based on 𝐻
    Modulate operator.DL_operator based on operator.dtype
    for weight in operator.weights do
        weight.actual_dtype ← operator.dtype
    end for
    for data in operator.intermediate_results do
        if data.dtype < operator.dtype then
            Add cast(data, operator.dtype) before operator
        end if
    end for
end for

4.3.2 Sparse operator replacing. Replacing dense operators with sparse ones can speed up computation as well. Algorithm 3 shows the procedure of sparse operator replacing. The sparsity of input data cannot be known until runtime, while the sparsity of weights is known during compilation, so we convert the data format of weights rather than input data. Different hardware devices have different support for sparse operators; for example, CPUs can benefit from sparse computation while GPUs see little effect. So we set a threshold based on the hardware specification.
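A minimal numpy sketch of this compile-time decision (the helper names and hand-rolled CSR layout are ours, for illustration only):

```python
import numpy as np

def to_csr(w):
    """Flatten a dense matrix into CSR arrays (data, indices, indptr)."""
    data, indices, indptr = [], [], [0]
    for row in w:
        nz = np.flatnonzero(row)          # column indices of non-zeros
        indices.extend(nz.tolist())
        data.extend(row[nz].tolist())
        indptr.append(len(indices))       # running count closes each row
    return data, indices, indptr

def maybe_sparsify(w, threshold):
    """Store w as CSR if its non-zero ratio is below the hardware threshold."""
    sparsity = np.count_nonzero(w) / w.size
    if sparsity < threshold:
        return to_csr(w), True   # operator switches to a sparse implementation
    return w, False
```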
If the weight sparsity is smaller than the threshold, we store the weight in compressed sparse row (CSR) format and convert the corresponding operator into a sparse implementation. An example is shown in Fig. 5b: we convert 𝑊1 and the corresponding matmul to sparse.

Algorithm 3 Sparse Operator Replacing
Input: ECG 𝐺, Threshold 𝑇
Output: Optimized ECG 𝐺′
for operator in 𝐺 do
    for weight in operator.weights do
        if weight.sparsity < 𝑇 then
            Store weight in CSR format
            operator.use_sparse ← True
            Convert operator.DL_operator into a sparse implementation
        end if
    end for
end for

4.3.3 Redundant elimination. Redundant elimination removes operators that do not influence the final results due to their mathematical properties. For example, a series of monotonic operators followed by an indices operator is mathematically equivalent to the indices operator alone. Algorithm 4 shows the procedure of redundant elimination. For each operator in the ECG, we check its operator type. If a monotonic operator is followed by another monotonic operator, we fuse them. We eliminate a monotonic operator if it is followed by an indices operator. An example is shown in Fig. 5c: the softmax before argmax is eliminated.
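The rewrite in Fig. 5c rests on a simple identity: a monotonic operator followed by an indices operator leaves the selected index unchanged, e.g. argmax(softmax(x)) = argmax(x). A quick numpy check:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

scores = np.array([[0.2, 1.5, -0.3], [2.0, 0.1, 0.4]])
# softmax is monotonic within each row, so it never changes the argmax;
# the softmax operator can therefore be eliminated before argmax
assert (np.argmax(softmax(scores), axis=1) == np.argmax(scores, axis=1)).all()
```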
4.4 Graph Translator
Graph Translator converts the optimized ECG into a DL computational graph, choosing the proper implementation based on the ECG and hardware specification information. DL frameworks or compilers, like TVM, take DL computational graphs as input and make further optimizations, finally compiling them into executable modules.

Algorithm 4 Redundant Elimination
Input: Extended Computational Graph 𝐺
Output: Optimized ECG 𝐺′
for operator in 𝐺 do
    if operator.type == "monotonic" then
        Check the next operator operator′
        if operator′.type == "monotonic" then
            Merge operator and operator′
        else if operator′.type == "indices" then
            Eliminate operator
        end if
    end if
end for

Figure 6: CMLCompiler uses a single ECG to represent a CML and DL mixed pipeline, replacing a cross-framework implementation.

4.5 Hybrid Deployment of CML and DL with a Unified Framework
We convert CML and DL hybrid applications under a unified framework to reduce the cost of switching frameworks and provide an opportunity for end-to-end optimizations, as shown in Fig. 6. We load models from PyTorch and sklearn and convert them into ECG subgraphs. We build edges according to data dependencies and merge those subgraphs into a single ECG. Then we can apply both the optimizations in our work and those in DL compilers. Finally, we compile and deploy the pipeline on diverse hardware devices.

4.6 Implementation
Due to its benefits in portability and performance, we implement CMLCompiler on the basis of TVM. The intermediate representations and transforms are all written in Python.
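The subgraph-merging step of Section 4.5 can be illustrated with a toy graph structure (a stand-in of our own, not CMLCompiler's ECG classes): each framework's model becomes a subgraph, and an edge added along the data dependency stitches them into one ECG.

```python
# Toy stand-in for ECG merging: each subgraph lists its operator nodes
# and internal edges; a cross edge encodes the data dependency between
# the DL feature extractor and the CML classifier.
dl_subgraph  = {"nodes": ["embed", "mlp"],  "edges": [("embed", "mlp")]}
cml_subgraph = {"nodes": ["scaler", "svm"], "edges": [("scaler", "svm")]}

def merge_ecg(subgraphs, cross_edges):
    merged = {"nodes": [], "edges": []}
    for sg in subgraphs:
        merged["nodes"] += sg["nodes"]
        merged["edges"] += sg["edges"]
    merged["edges"] += list(cross_edges)  # edges built from data dependencies
    return merged

pipeline = merge_ecg([dl_subgraph, cml_subgraph], [("mlp", "scaler")])
```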
We read trained models from CML frameworks such as sklearn and convert them into operator representations, implementing them as TVM relay functions and storing their weights in TVM arrays. We wrap those relay functions into ECGs. After the optimizations in Section 4.3, we convert the ECGs into TVM IRModules. Then we utilize TVM to make further optimizations and compile them into executable modules for specific hardware targets. We use cross-compilation to support a broad spectrum of hardware devices, deploying on a lightweight runtime based on the TVM runtime and running inference on various hardware devices.

5 EVALUATION
This section summarizes the evaluation. Section 5.1 describes the experimental setup. Section 5.2 evaluates the performance of the graph rewriting optimizations based on ECGs. Section 5.3 compares our work with state-of-the-art frameworks. Section 5.4 evaluates the hybrid deployment of CML and DL.

5.1 Experimental Setup
We use a server node equipped with two Xeon E5-2620 V3 (Haswell) CPUs, an Nvidia Titan RTX GPU, and 64 GB of memory to conduct the experiments on CPU and GPU. Each CPU contains six physical cores. The GPU contains 4608 CUDA cores and 24 GB of memory.
The operating system is Ubuntu 16.04, and the other software includes TVM 0.8, PyTorch 1.8.1, hummingbird 0.3.1, scikit-learn 1.0.1, and CUDA 10.2. For the IoT experiments, we use a Raspberrypi4b running the Raspbian 10 operating system and deploy the same versions of the above software. We use YearPrediction [12] as the dataset, with 515345 samples and 90 features.
We use 80% of the data to train models and 20% to perform inference. We run all the experiments five times and report the average as the final result. We test hummingbird [30] with both of its backends (PyTorch and TVM) and select the best results.

5.2 Optimizations

This section evaluates the graph rewriting optimizations based on ECGs, as described in Section 4.3. These optimizations (dtype rewriting, sparse operator replacing, and redundant elimination) can work together and produce cumulative effects. They can also coexist with the optimizations in TVM. We choose four typical tree models: DecisionTreeClassifier, RandomForestClassifier, ExtraTreeClassifier, and ExtraTreesClassifier, as well as two typical linear models: LogisticRegression and SGDClassifier.
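Of the three, dtype rewriting is the easiest to illustrate in isolation: most CML operands do not need 64-bit floats, so narrowing the dtype halves the working set while leaving the scores numerically close. A minimal numpy sketch, with a hand-built linear classifier standing in for the evaluated sklearn models:

```python
import numpy as np

# Hypothetical linear classifier, stored in float64 as sklearn would store it.
rng = np.random.default_rng(0)
X = rng.standard_normal((256, 90))   # 90 features, as in YearPrediction
W = rng.standard_normal((90, 3))
b = rng.standard_normal(3)

def predict(X, W, b):
    return np.argmax(X @ W + b, axis=1)

# Dtype rewriting: cast every operand to float32 before compute.
pred64 = predict(X, W, b)
pred32 = predict(X.astype(np.float32), W.astype(np.float32),
                 b.astype(np.float32))

# Scores agree to well within float32 rounding, and predicted classes
# almost always match, while every tensor now occupies half the memory.
print((pred64 == pred32).mean())
```

The same cast applies to the threshold and selector tensors of tree models, where comparisons are even less sensitive to precision than argmax is.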
We evaluate dtype rewriting and sparse operator replacing for the tree models, and redundant elimination for the linear models, according to their respective patterns. Fig. 7a shows the results on CPU. For the tree models, our work without optimizations already has a 1.31x-2.54x speedup over sklearn; this is due to our abstractions, which exploit TVM's optimizations, including better utilization of SIMD instructions and multiple cores. Dtype rewriting and sparse operator replacing bring a further 1.00x-1.21x and 1.26x-1.75x speedup, respectively, for a combined 1.27x-2.11x speedup, 1.84x-4.44x faster than sklearn. For the linear models, our work without optimizations runs slower than sklearn. However, redundant elimination brings a 1.22x-1.51x speedup; with our optimizations, the result is 1.06x-1.14x faster than sklearn.
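The tree-model numbers above rest on compiling decision paths into matrix operators, and sparse operator replacing pays off because the resulting selector matrices are mostly zeros. As a toy illustration of the idea (a GEMM-style formulation in the spirit of such tree compilers; CMLCompiler's exact operator representation is not reproduced here), a depth-2 tree over two features becomes three small tensors:

```python
import numpy as np

# Toy depth-2 decision tree compiled to matrix operators.
# Internal nodes: n0: x0 < 0, n1: x1 < 0, n2: x1 < 1.
A = np.array([[1, 0, 0],           # feature -> internal-node selector
              [0, 1, 1]], dtype=np.float32)
B = np.array([0.0, 0.0, 1.0], dtype=np.float32)  # per-node thresholds
C = np.array([[ 1,  1, -1, -1],    # node/leaf path matrix: +1 true branch,
              [ 1, -1,  0,  0],    # -1 false branch, 0 off-path
              [ 0,  0,  1, -1]], dtype=np.float32)
D = np.array([2, 1, 1, 0], dtype=np.float32)     # true-branch count per leaf
leaf_class = np.array([0, 1, 2, 3])

def tree_predict(X):
    T = (X @ A < B).astype(np.float32)       # evaluate every node test at once
    leaves = np.argmax(T @ C == D, axis=1)   # leaf whose full path holds
    return leaf_class[leaves]

X = np.array([[-1.0, -1.0], [1.0, 0.5], [1.0, 2.0], [-1.0, 3.0]])
print(tree_predict(X))  # [0 2 3 1]
```

The selector A has one nonzero per column regardless of feature count, so on wide feature spaces replacing the dense matmul with a sparse one removes most of the work, which is what sparse operator replacing exploits.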
Fig. 7b shows the results on IoT devices. Note that sklearn lacks sufficient support for IoT devices. For example, 64-bit tree models trained on servers cannot be executed on a Raspberrypi4b with a 32-bit operating system. Retraining those models in 32-bit format on the Raspberrypi4b from scratch takes too much time, so we regard those models as unsupported, marked with a cross, and take our work without optimizations as the baseline. Dtype rewriting and sparse operator replacing bring a 1.01x-1.33x and 1.23x-2.3x speedup, respectively, for a combined 1.49x-2.53x speedup. For the linear models, our work without optimizations achieves a 1.71x-1.84x speedup, and redundant elimination brings a further 1.08x-1.14x speedup, 1.95x-1.98x faster than sklearn.

Figure 7: Graph Rewriting Optimizations on (a) CPU and (b) Raspberrypi4b. "base" means our work without optimizations. "DR" means only using dtype rewriting. "DR+SOR" means using both dtype rewriting and sparse operator replacing. "RE" means using redundant elimination.

The computation part on GPU is less than 20%, so these optimizations play a limited role there. In conclusion, CML models benefit from both TVM's optimizations and ours, achieving obvious speedups.

5.3 Overall Results

This section evaluates 14 typical CML algorithms, covering preprocessing algorithms, linear models, tree-based models, and SVMs, on CPU, GPU, and IoT devices, compared with state-of-the-art frameworks including sklearn, Intel extension for sklearn [20], and hummingbird. It contains two parts: batch experiments over all data and query experiments for a single record. The differences in accuracy between CMLCompiler and sklearn are all less than 1 x 10^-5, which means that our work does not affect accuracy. The outputs on different hardware are all the same, so we focus on performance hereinafter.

Table 5 shows the performance of the batch experiments. On CPU, our work achieves the best performance on 12 of the 14 algorithms, with a 1.02x-10.57x speedup over sklearn, a 1.14x-4.38x speedup over hummingbird, and a 1.44x-8.47x speedup over Intel sklearn. On GPU, our work achieves competitive performance compared with hummingbird, performing better on 11 of the 14 algorithms, with a 1.11x-3.31x speedup. On the IoT device Raspberrypi4b, our work performs better on 13 of the 14 algorithms, with a 1.28x-5.09x speedup.
Table 6 shows the performance of the query experiments for a single record. On CPU, our work achieves the best performance on 11 of the 14 algorithms, with a 1.36x-170.68x speedup over sklearn, a 1.56x-4.47x speedup over hummingbird, and a 1.31x-169.43x speedup over Intel sklearn. On GPU, our work performs better than hummingbird on 10 of the 14 algorithms, with a 1.41x-4.64x speedup.
Our latency on the Raspberrypi4b does not differ much from sklearn's. However, we perform better in model support. In conclusion, we have advantages in both the batch and the query experiments on all three hardware devices. Many models in sklearn support only a single core and cannot fully utilize SIMD instructions. We perform better than sklearn and Intel sklearn due to better utilization of multiple cores and SIMD instructions through compilation. Hummingbird uses both PyTorch and TVM as backends, and TVM performs better in most cases in our evaluation. It implements models in PyTorch and converts them into TVM using the from_pytorch API. This conversion is not direct and efficient enough, causing a performance decrease.
Besides, hardware information is lost during the conversion, which limits TVM's optimizations for hummingbird. We map ECGs onto Relay operators directly and select the most efficient implementation based on the ECGs and hardware specification information. Additionally, our abstractions enable further optimizations, as described in Section 4.3, bringing up to a 2.53x speedup; together, these effects achieve better performance.

5.4 Hybrid Deployment of CML and DL

This section presents three hybrid deployment cases of CML and DL. As the baselines, without a unified framework, a DL framework is used to implement the DL algorithms, while a CML framework is used to implement the CML algorithms.
Our work converts the CML and DL models into a single ECG, applies optimizations, and compiles to diverse hardware devices. We measure the latency of a single query, which is essential in real-world applications.

5.4.1 Sentence Sentiment Classification. The first case is sentence sentiment classification, which uses BERT to embed English sentences and logistic regression to classify them [36]. We use BERT-tiny [3] as the pre-trained BERT model and SST2 [40] as the dataset. The baseline implements BERT-tiny in pytorch-transformers [45] and logistic regression in sklearn. The result is shown in Fig. 8a.
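With both halves expressed as tensor operators, a hybrid pipeline like this collapses into a single graph. A deliberately tiny numpy stand-in (the embedding table E replaces BERT-tiny, and all names and shapes here are illustrative):

```python
import numpy as np

# Sketch of an embed-then-classify pipeline as one fused computation.
# A hypothetical mean-pooled embedding table stands in for the encoder
# so that the whole pipeline is plain tensor ops end to end.
rng = np.random.default_rng(0)
vocab, dim, classes = 100, 16, 2
E = rng.standard_normal((vocab, dim)).astype(np.float32)    # stand-in encoder
W = rng.standard_normal((dim, classes)).astype(np.float32)  # logistic regression
b = np.zeros(classes, dtype=np.float32)

def classify(token_ids):
    sentence = E[token_ids].mean(axis=0)  # "DL" half: embed + pool
    logits = sentence @ W + b             # "CML" half: linear classifier
    return int(np.argmax(logits))         # predicted class id

print(classify(np.array([3, 14, 59])))
```

Because the classifier consumes the encoder's output as an ordinary tensor, no framework switch or data copy separates the two stages, which is what the unified ECG removes in the real pipeline.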
Our work achieves a 1.67x speedup on server CPUs. Pytorch-transformers cannot be installed on IoT devices, so the baseline cannot run on the Raspberrypi4b. The latency of our work on the Raspberrypi4b is 18 milliseconds, which is acceptable in most use cases.

5.4.2 Radiographic Image Analysis. The second case uses Deep Hybrid Learning [38] to analyze radiographic images, which uses

Table 5: Execution time for batch experiments over all data on CPU (12 cores), GPU, and IoT devices (taking Raspberrypi4b as an example), in milliseconds. SK, HB, and Intel are short for scikit-learn, hummingbird, and Intel extension for sklearn, respectively.
"-" means unsupported.

Algorithm        |        CPU          |   GPU    |   IoT
                 | SK   HB  Intel Our  | HB  Our  | SK   Our
Binarizer        | 97   31   77    9   | 19   6   | 634  126
Normalizer       | 25   33   15   15   |  7   5   | 241  168
MinMaxScaler     | 19   31   13    8   | 21   6   | 199  148
RobustScaler     | 28   32   25   12   | 19   5   | 343  156
LinearRegression | 12   18    4    6   |  6
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='7 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='61 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='116 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='LogisticRegression ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='98 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='104 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='137 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='86 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='7 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='7 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='1889 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='952 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='SGDClassifier ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='94 ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='98 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='139 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='88 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='9 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='7 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='1886 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='969 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='DecisionTreeClassifier ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='33 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='48 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='23 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='16 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='7 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='5 99 ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='DecisionTreeRegressor ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='7 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='19 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='3 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='15 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='7 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='6 211 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='RandomForestClassifier ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='2130 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='885 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='2003 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='601 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='20 5820 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='ExtraTreeClassifier ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='29 26 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='16 6 206 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='ExtraTreesClassifier ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='10022 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='2522 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='9421 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='2256 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='99 47959 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='LinearSVC ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='92 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='122 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='152 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='77 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='9 ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='6 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='1896 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='930 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='LinearSVR ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='39 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='26 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='34 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='5 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='6 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='5 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='323 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='112 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='Table 6: Latency for query experiments over one single record on CPU (12 cores),' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' GPU,' 
Algorithm               CPU-SK  CPU-HB  CPU-Intel  CPU-Our  GPU-HB  GPU-Our  IOT-SK  IOT-Our
Binarizer               0.2     0.26    0.34       0.09     0.93    0.64     0.44    0.59
Normalizer              0.32    0.26    0.28       0.11     0.25    0.68     0.59    0.41
MinMaxScaler            0.15    0.31    0.14       0.09     0.91    0.63     0.33    0.37
RobustScaler            0.14    0.22    0.14       0.11     1.02    0.72     0.37    0.37
LinearRegression        0.24    0.35    0.32       0.1      0.91    0.55     0.52    0.69
LogisticRegression      0.35    0.36    0.29       0.19     3.29    0.71     0.67    2.59
SGDClassifier           0.4     0.35    0.29       0.23     2.93    0.67     0.68    0.65
DecisionTreeClassifier  0.24    1.62    0.27       0.36     3.01    0.8      -       0.9
DecisionTreeRegressor   0.22    0.22    0.25       0.38     1.03    0.72     -       0.88
RandomForestClassifier  103.96  1.6     103.2      0.61     -       2.56     -       1.05
ExtraTreeClassifier     0.23    -       0.4        0.47     -       1.81     -       -
ExtraTreesClassifier    205.27  12.74   204.25     1.73     -       2.41     -       3.11
LinearSVC               0.4     0.37    0.45       0.19     2.71    0.61     0.65    1.07
LinearSVR               0.31    0.34    0.37       0.09     0.91    0.62     0.54    0.91

Figure 8: The latency of a single query for CML and DL mixed pipelines. All three baselines cannot run on IoT devices. (a) Bert+LogisticRegression for sentence sentiment classification; (b) SimpleDNN+RandomForest for radiographic image analysis; (c) GBDT+Wide&Deep for click-through prediction.
simple DNN to perform feature engineering and CML models such as random forests to perform classification. We use CheXpert [21] as the dataset. The baseline implements the DNN in PyTorch and the random forest in sklearn. The result is shown in Fig. 8b. Our work achieves a 2.3x speedup on server CPUs. The pre-trained random forest cannot run on IoT devices, while our work solves this problem through cross-compilation.
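The hybrid pattern in this case study — a DL stage producing features that a pre-trained CML model consumes — can be sketched as below. All names, weights, and models here are toy stand-ins invented for illustration, not the pipeline evaluated in the paper; the point is the boundary between the two stages, which in a cross-framework deployment is where data must be converted between runtimes.

```python
import numpy as np

# Toy stand-ins for a mixed DL+CML pipeline: "dnn_extract" mimics a
# feature-extraction network (a fixed linear layer + ReLU), and
# "cml_predict" mimics a pre-trained classifier from a CML library.
# In a cross-framework deployment these two stages live in different
# runtimes (e.g., PyTorch and scikit-learn), so every query crosses a
# tensor -> array conversion boundary between them.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))            # made-up "network" weights

def dnn_extract(x):                    # DL stage: raw input -> features
    return np.maximum(x @ W, 0.0)      # linear layer + ReLU

def cml_predict(feats):                # CML stage: features -> class
    # a single hand-written decision stump stands in for a forest
    return (feats[:, 0] > feats[:, 1]).astype(int)

X = rng.normal(size=(5, 8))            # a batch of 5 raw records
labels = cml_predict(dnn_extract(X))
print(labels.shape)                    # (5,)
```

Compiling both stages into one computational graph, as a unified compiler does, removes this per-query conversion overhead that cross-framework baselines must pay.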
5.4.3 Click-Through Rate Prediction. The third case is click-through rate prediction used in the recommendation systems of our anonymous industry partners, which uses GBDT [15] to extract features and the Wide & Deep [9] model to make predictions. We use Avazu as the dataset. The baseline implements the GBDT in sklearn and Wide & Deep in PyTorch. The result is shown in Fig. 8c. We achieve a 3.04x speedup on server CPUs. The GBDT model in the baseline cannot be executed on IoT devices, while our latency on IoT devices is only 5.06 ms.

6 RELATED WORK

CML frameworks and libraries can be divided into three categories.
(1) General-purpose solutions use one framework to support various models. Scikit-learn [32] is the most widely used CML framework on GitHub [33]. Spark MLlib [29] is an extension to Spark [48]. H2O [17] uses MapReduce [11] to support both CML and DL. There are many other works, such as Shogun [41] and RapidMiner [19]. These frameworks only support CPUs, suffering from severe performance and portability issues. (2) Specific-purpose solutions focus on one type of model. LibLinear [14] supports logistic regression and linear SVMs. LibSVM [5] focuses on SVMs. These works are limited to CPUs.
Some other works attempt to support various hardware devices. XGBoost [6] implements the gradient boosting decision tree algorithm on CPUs and GPUs. Muhsen Owaida et al. [31] bring XGBoost to FPGAs. Toby Sharp [39] implements decision trees and forests on GPUs. These frameworks only support a narrow range of models and solve the portability problem to a certain extent. (3) Extensions based on DL attempt to utilize DL frameworks to support CML models. TF-DF [43] is a decision forest library based on TensorFlow but is limited to CPUs. It is implemented in an ad-hoc way, losing the portability of DL frameworks.
Hummingbird [30] is a general-purpose solution based on PyTorch, adding support for GPUs. It utilizes the abstractions in DL frameworks directly without digging into the features of CML, missing many optimization opportunities.

7 CONCLUSION

This paper presented the design and implementation of CMLCompiler, a unified compiler for classical machine learning (CML) inference. CMLCompiler proposed two unified abstractions: operator representations and extended computational graphs (ECGs). Operator representations convert CML operators into tensor formats, while an ECG organizes these converted operators in an optimization-friendly way. The CMLCompiler framework performs the conversion and graph optimization based on the two unified abstractions, then outputs an optimized computational graph to deep learning compilers or frameworks. CMLCompiler also enables the hybrid deployment of CML and DL with a unified framework. Our implementation of CMLCompiler on top of TVM shows its effectiveness, achieving up to 4.38x speedup on CPU, 3.31x speedup on GPU, and 5.09x speedup on IoT devices, compared to the state-of-the-art solutions: scikit-learn, intel sklearn, and hummingbird. Our support for CML and DL mixed pipelines achieves up to 3.04x speedup compared with cross-framework implementations.

¹https://www.kaggle.com/c/avazu-ctr-prediction

A PROOF

Here we prove that the argmax in Fig. 3 returns the leaf node that a sample finally reaches. N_S, N_I, and N_L refer to the number of samples, internal nodes, and leaf nodes, respectively.
I refers to internal nodes, numbered in the order of a level-order traversal. L refers to leaf nodes, numbered in the order of an in-order traversal. X ∈ {0, 1}^(N_S×N_I) is the result after the comparison with W_2. Each row X_i ∈ {0, 1}^(N_I) refers to the choices of one sample x, marked as x̃. W_3 ∈ {0, 1}^(N_I×N_L) can be regarded as a list of column vectors {L̃_1, L̃_2, ..., L̃_{N_L}}, where L̃_i ∈ {0, 1}^(N_I) represents the relationship between leaf node L_i and all internal nodes. Then we should prove that argmax(x̃ · L̃_1, x̃ · L̃_2, ..., x̃ · L̃_{N_L}) returns the leaf that x reaches, where argmax returns the index of the maximum value among the input tensor; it returns the first index if the maximum appears more than once.

We assume that L_k is the leaf node x reaches. First we prove that x̃ · L̃_k is the maximum value in {x̃ · L̃_1, x̃ · L̃_2, ..., x̃ · L̃_{N_L}}. We define the path from the root node I_0 to L_k as the decision path (DP) of x, where "right" means choosing the right branch at an internal node and "left" means choosing the left branch. Then

  L_k[i] = 0, if left is chosen at I_i and I_i ∈ DP
  L_k[i] = 1, otherwise

  x[i] = 0, if left is chosen at I_i
  x[i] = 1, if right is chosen at I_i

Because x reaches L_k, if x[i] = 1 and I_i ∈ DP, then L_k[i] = 1.
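The definitions above can be checked on a small concrete tree. The following NumPy sketch uses a hypothetical tree of our own construction (three internal nodes I_0, I_1, I_2 in level order, four leaves in in-order), not an example from the paper: W3 stacks the leaf vectors L̃_k as columns, X stacks the per-sample choice vectors x̃ as rows, and one matrix multiply plus argmax recovers the reached leaf.

```python
import numpy as np

# Hypothetical tree: I0 is the root; I1 its left child, I2 its right child.
# Leaves L1..L4 are numbered by in-order traversal. Column k of W3 is the
# leaf vector L~_k: entry i is 0 iff I_i lies on the path to leaf k with a
# left turn, and 1 otherwise.
W3 = np.array([[0, 0, 1, 1],   # I0: left for L1, L2; right for L3, L4
               [0, 1, 1, 1],   # I1: left for L1, right for L2; off-path for L3, L4
               [1, 1, 0, 1]])  # I2: off-path for L1, L2; left for L3, right for L4

# Each row of X is one sample's choice vector x~ (1 = go right at I_i),
# i.e. the 0-1 result of the threshold comparison with W2.
X = np.array([[1, 1, 0],   # right at I0, then left at I2 -> reaches L3
              [0, 1, 0]])  # left at I0, then right at I1 -> reaches L2

# argmax over the dot products x~ . L~_k picks the reached leaf;
# np.argmax returns the first index on ties, as the proof requires.
leaves = np.argmax(X @ W3, axis=1)
print(leaves)  # -> [2 1], the 0-based indices of L3 and L2
```

Note that for the first sample the scores are [0, 1, 2, 2]: both L3 and L4 attain the maximum, and the first-index tie-breaking of argmax is exactly what the second half of the proof justifies.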
  x̃ · L̃_k = Σ_i x[i] · L_k[i]
          = Σ_{right at I_i} 1 · L_k[i] + Σ_{left at I_i} 0 · L_k[i]
          = Σ_{right at I_i} 1 · L_k[i]
          = Σ_{right at I_i, I_i ∈ DP} 1 · L_k[i] + Σ_{right at I_i, I_i ∉ DP} 1 · L_k[i]
          = Σ_{right at I_i, I_i ∈ DP} 1 · 1 + Σ_{right at I_i, I_i ∉ DP} 1 · 1
          = the count of 1s in x̃

Since x̃ and {L̃_1, L̃_2, ..., L̃_{N_L}} are all 0-1 vectors, the count of 1s in x̃ is the maximum possible value of {x̃ · L̃_1, x̃ · L̃_2, ..., x̃ · L̃_{N_L}}.

Then we prove that k is the first index that attains the maximum. Assume that there exists a leaf node L_t ahead of L_k which meets the condition x̃ · L̃_t = maximum. Note that L_t is ahead of L_k and the leaf nodes are numbered by an in-order traversal.
Hence there exists an internal node I_i such that L_t is in the left subtree of I_i and L_k is in the right subtree of I_i. x passes by I_i and reaches L_k in its right subtree, so x[i] = 1. L_t is in the left subtree of I_i, so L_t[i] = 0, and x[i] is multiplied by zero. So x̃ · L̃_t < maximum, which conflicts with the assumption that x̃ · L̃_t = maximum. Therefore k is the first index that attains the maximum.

REFERENCES

[1] Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, Manjunath Kudlur, Josh Levenberg, Rajat Monga, Sherry Moore, Derek G. Murray, Benoit Steiner, Paul Tucker, Vijay Vasudevan, Pete Warden, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng.
Tensorflow: A system for large-scale machine learning. In Proceedings of the 12th USENIX Conference on Operating Systems Design and Implementation, OSDI'16, pages 265–283, USA, 2016. USENIX Association.
[2] Amazon. The total cost of ownership (TCO) of Amazon SageMaker. https://pages.awscloud.com/rs/112-TZM-766/images/Amazon_SageMaker_TCO_uf.pdf, 2020.
[3] Prajjwal Bhargava, Aleksandr Drozd, and Anna Rogers. Generalization in NLI: Ways (not) to go beyond simple heuristics, 2021.
[4] Leo Breiman. Random forests. Machine Learning, 45(1):5–32, 2001.
[5] Chih-Chung Chang and Chih-Jen Lin. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology (TIST), 2(3):1–27, 2011.
[6] Tianqi Chen and Carlos Guestrin. XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 785–794, 2016.
[7] Tianqi Chen, Thierry Moreau, Ziheng Jiang, Lianmin Zheng, Eddie Yan, Meghan Cowan, Haichen Shen, Leyuan Wang, Yuwei Hu, Luis Ceze, Carlos Guestrin, and Arvind Krishnamurthy. TVM: An automated end-to-end optimizing compiler for deep learning. In Proceedings of the 13th USENIX Conference on Operating Systems Design and Implementation, OSDI'18, pages 579–594, USA, 2018. USENIX Association.
[8] Tianqi Chen, Lianmin Zheng, Eddie Yan, Ziheng Jiang, Thierry Moreau, Luis Ceze, Carlos Guestrin, and Arvind Krishnamurthy. Learning to optimize tensor programs. Advances in Neural Information Processing Systems, 31, 2018.
[9] Heng-Tze Cheng, Levent Koc, Jeremiah Harmsen, Tal Shaked, Tushar Chandra, Hrishi Aradhye, Glen Anderson, Greg Corrado, Wei Chai, Mustafa Ispir, et al. Wide & deep learning for recommender systems. In Proceedings of the 1st Workshop on Deep Learning for Recommender Systems, pages 7–10, 2016.
[10] Scott Cyphers, Arjun K. Bansal, Anahita Bhiwandiwalla, Jayaram Bobba, Matthew Brookhart, Avijit Chakraborty, William Constable, Christian Convey, Leona Cook, Omar Kanawi, Robert Kimball, Jason Knight, Nikolay Korovaiko, Varun Kumar Vijay, Yixing Lao, Christopher R. Lishka, Jaikrishnan Menon, Jennifer Myers, Sandeep Aswath Narayana, Adam Procter, and Tristan J. Webb. Intel nGraph: An intermediate representation, compiler, and executor for deep learning. CoRR, abs/1801.08058, 2018.
[11] Jeffrey Dean and Sanjay Ghemawat. MapReduce: Simplified data processing on large clusters. Communications of the ACM, 51(1):107–113, 2008.
[12] Dheeru Dua and Casey Graff. UCI machine learning repository, 2017.
[13] EasonLiao. CudaTree. https://github.com/EasonLiao/CudaTree, 2022.
[14] Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang-Rui Wang, and Chih-Jen Lin. LIBLINEAR: A library for large linear classification. The Journal of Machine Learning Research, 9:1871–1874, 2008.
[15] Jerome H Friedman. Greedy function approximation: A gradient boosting machine. Annals of Statistics, pages 1189–1232, 2001.
[16] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. Advances in Neural Information Processing Systems, 27, 2014.
[17] H2O.ai. H2O: Scalable machine learning platform. https://github.com/h2oai/h2o-3, 2022.
[18] Kim Hazelwood, Sarah Bird, David Brooks, Soumith Chintala, Utku Diril, Dmytro Dzhulgakov, Mohamed Fawzy, Bill Jia, Yangqing Jia, Aditya Kalro, James Law, Kevin Lee, Jason Lu, Pieter Noordhuis, Misha Smelyanskiy, Liang Xiong, and Xiaodong Wang. Applied machine learning at Facebook: A datacenter infrastructure perspective. In 2018 IEEE International Symposium on High Performance Computer Architecture (HPCA), pages 620–629, 2018.
[19] Markus Hofmann and Ralf Klinkenberg. RapidMiner: Data Mining Use Cases and Business Analytics Applications. CRC Press, 2016.
[20] Intel. Intel® extension for scikit-learn*. https://intel.github.io/scikit-learn-intelex/, 2022.
[21] Jeremy Irvin, Pranav Rajpurkar, Michael Ko, Yifan Yu, Silviana Ciurea-Ilcus, Chris Chute, Henrik Marklund, Behzad Haghgoo, Robyn Ball, Katie Shpanskaya, et al. CheXpert: A large chest radiograph dataset with uncertainty labels and expert comparison. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 590–597, 2019.
[22] Zhihao Jia, Oded Padon, James Thomas, Todd Warszawski, Matei Zaharia, and Alex Aiken. TASO: Optimizing deep learning computation with automatic generation of graph substitutions. In Proceedings of the 27th ACM Symposium on Operating Systems Principles, pages 47–62, 2019.
[23] Chris Lattner, Mehdi Amini, Uday Bondhugula, Albert Cohen, Andy Davis, Jacques Pienaar, River Riddle, Tatiana Shpeisman, Nicolas Vasilache, and Oleksandr Zinenko. MLIR: A compiler infrastructure for the end of Moore's law. arXiv preprint arXiv:2002.11054, 2020.
[24] Zewen Li, Fan Liu, Wenjie Yang, Shouheng Peng, and Jun Zhou. A survey of convolutional neural networks: Analysis, applications, and prospects. IEEE Transactions on Neural Networks and Learning Systems, 2021.
[25] Xiaoliang Ling, Weiwei Deng, Chen Gu, Hucheng Zhou, Cui Li, and Feng Sun. Model ensemble for click prediction in Bing search ads. In Proceedings of the 26th International Conference on World Wide Web Companion, pages 689–698, 2017.
[26] Wei-Yin Loh. Classification and regression trees. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 1(1):14–23, 2011.
[27] Xiaofei Ma, Zhiguo Wang, Patrick Ng, Ramesh Nallapati, and Bing Xiang. Universal text representation from BERT: An empirical study. arXiv preprint arXiv:1910.07973, 2019.
[28] Larry Medsker and Lakhmi C Jain. Recurrent Neural Networks: Design and Applications. CRC Press, 1999.
[29] Xiangrui Meng, Joseph Bradley, Burak Yavuz, Evan Sparks, Shivaram Venkataraman, Davies Liu, Jeremy Freeman, DB Tsai, Manish Amde, Sean Owen, Doris Xin, Reynold Xin, Michael J. Franklin, Reza Zadeh, Matei Zaharia, and Ameet Talwalkar. MLlib: Machine learning in Apache Spark. Journal of Machine Learning Research, 17(1):1235–1241, January 2016.
[30] Supun Nakandala, Karla Saur, Gyeong-In Yu, Konstantinos Karanasos, Carlo Curino, Markus Weimer, and Matteo Interlandi. A tensor compiler for unified machine learning prediction serving. In 14th USENIX Symposium on Operating Systems Design and Implementation (OSDI 20), pages 899–917, 2020.
[31] Muhsen Owaida, Hantian Zhang, Ce Zhang, and Gustavo Alonso.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' Scalable inference of decision tree ensembles: Flexible design for cpu-fpga platforms.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' In 2017 27th International Conference on Field Programmable Logic and Applications (FPL), pages 1–8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' IEEE, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' [32] Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexandre Passos, David Courna- peau, Matthieu Brucher, Matthieu Perrot, and Édouard Duchesnay.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' Scikit-learn: Machine learning in python.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' Mach.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' Learn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' Res.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=', 12(null):2825–2830, nov 2011.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' [33] Fotis Psallidas, Yiwen Zhu, Bojan Karlas, Matteo Interlandi, Avrilia Floratou, Konstantinos Karanasos, Wentao Wu, Ce Zhang, Subru Krishnan, Carlo Curino, and Markus Weimer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' Data science through the looking glass and what we found there.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' CoRR, abs/1912.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='09536, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' [34] Susmita Ray.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' A quick review of machine learning algorithms.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' In 2019 Inter- national conference on machine learning, big data, cloud and parallel computing (COMITCon), pages 35–39.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' IEEE, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' [35] James Reed, Zachary DeVito, Horace He, Ansley Ussery, and Jason Ansel.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' torch.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' fx: Practical program capture and transformation for deep learning in python.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' Proceedings of Machine Learning and Systems, 4:638–651, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' [36] Nils Reimers and Iryna Gurevych.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' Sentence-bert: Sentence embeddings using siamese bert-networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' arXiv preprint arXiv:1908.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='10084, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' [37] Shayle R Searle and Marvin HJ Gruber.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' Linear models.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' John Wiley & Sons, 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' [38] Duhita Sengupta, Sk Nishan Ali, Aditya Bhattacharya, Joy Mustafi, Asima Mukhopadhyay, and Kaushik Sengupta.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' Nuclear morphology optimized deep hybrid learning (numodril): A novel architecture for accurate diagnosis/prognosis of ovarian cancer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' bioRxiv, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' [39] Toby Sharp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' Implementing decision trees and forests on a gpu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' In European conference on computer vision, pages 595–608.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' Springer, 2008.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' [40] Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher Manning, Andrew Ng, and Christopher Potts.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' Parsing With Compositional Vector Gram- mars.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' In EMNLP.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' 2013.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' [41] Sören Sonnenburg, Gunnar Rätsch, Sebastian Henschel, Christian Widmer, Jonas Behr, Alexander Zien, Fabio de Bona, Alexander Binder, Christian Gehl, and Vojtěch Franc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' The shogun machine learning toolbox.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' The Journal of Machine Learning Research, 11:1799–1802, 2010.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' [42] Shan Suthaharan.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' Support vector machine.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' In Machine learning models and algorithms for big data classification, pages 207–235.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' Springer, 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' [43] TensorFlow.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' Tensorflow decision forests.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' https://www.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='tensorflow.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='org/decision_forests, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' [44] Jake VanderPlas.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' Python data science handbook: Essential tools for working with data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' " O’Reilly Media, Inc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content='", 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' [45] Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement De- langue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' Rush.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' Transformers: State-of-the-art natural language pro- cessing.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online, October 2020.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' Association for Computational Linguistics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' [46] Carole-Jean Wu, David Brooks, Kevin Chen, Douglas Chen, Sy Choudhury, Marat Dukhan, Kim Hazelwood, Eldad Isaac, Yangqing Jia, Bill Jia, Tommer Leyvand, 12 CMLCompiler: A Unified Compiler for Classical Machine Learning Hao Lu, Yang Lu, Lin Qiao, Brandon Reagen, Joe Spisak, Fei Sun, Andrew Tulloch, Peter Vajda, Xiaodong Wang, Yanghan Wang, Bram Wasti, Yiming Wu, Ran Xian, Sungjoo Yoo, and Peizhao Zhang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' Machine learning at facebook: Understanding inference at the edge.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' In 2019 IEEE International Symposium on High Performance Computer Architecture (HPCA), pages 331–344, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' [47] Doris Xin, Hui Miao, Aditya Parameswaran, and Neoklis Polyzotis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' Production machine learning pipelines: Empirical analysis and optimization opportunities.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' In Proceedings of the 2021 International Conference on Management of Data, pages 2639–2652, 2021.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' [48] Matei Zaharia, Mosharaf Chowdhury, Michael J Franklin, Scott Shenker, and Ion Stoica.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' Spark: cluster computing with working sets.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' In Proceedings of the 2nd USENIX conference on Hot topics in cloud computing, 2010.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' [49] Lianmin Zheng, Chengfan Jia, Minmin Sun, Zhao Wu, Cody Hao Yu, Ameer Haj-Ali, Yida Wang, Jun Yang, Danyang Zhuo, Koushik Sen, Joseph E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' Gonzalez, and Ion Stoica.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' Ansor: Generating High-Performance Tensor Programs for Deep Learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tFQT4oBgHgl3EQf7DaV/content/2301.13441v1.pdf'} +page_content=' USENIX Association, USA, 2020.' 
diff --git a/.gitattributes b/.gitattributes
index 043cbfc67d050d9794c5aec7d2d0a6c0b049ebda..0244ca40207898914b6fc5da2acf3fe0481ed0df 100644
--- a/.gitattributes
+++ b/.gitattributes
@@ -4560,3 +4560,57 @@ X9FPT4oBgHgl3EQfszXP/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
 ZNE3T4oBgHgl3EQfcgp3/content/2301.04526v1.pdf filter=lfs diff=lfs merge=lfs -text
 ytFKT4oBgHgl3EQfMC3E/content/2301.11749v1.pdf filter=lfs diff=lfs merge=lfs -text
 JdE4T4oBgHgl3EQfhg2P/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+T9E5T4oBgHgl3EQfAg7p/content/2301.05380v1.pdf filter=lfs diff=lfs merge=lfs -text
+v9AyT4oBgHgl3EQfaffQ/content/2301.00245v1.pdf filter=lfs diff=lfs merge=lfs -text
+atE1T4oBgHgl3EQfxAUb/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+w9FRT4oBgHgl3EQfhDe7/content/2301.13582v1.pdf filter=lfs diff=lfs merge=lfs -text
+9NFLT4oBgHgl3EQfty_-/content/2301.12153v1.pdf filter=lfs diff=lfs merge=lfs -text
+JdA0T4oBgHgl3EQfCf9p/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+L9E1T4oBgHgl3EQfHAM0/content/2301.02920v1.pdf filter=lfs diff=lfs merge=lfs -text
+itFKT4oBgHgl3EQfvy5I/content/2301.11896v1.pdf filter=lfs diff=lfs merge=lfs -text
+m9E1T4oBgHgl3EQf1QUs/content/2301.03465v1.pdf filter=lfs diff=lfs merge=lfs -text
+ZNE3T4oBgHgl3EQfcgp3/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+n9E3T4oBgHgl3EQfLAl3/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+6NE1T4oBgHgl3EQfTQM9/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+NNAzT4oBgHgl3EQfzP4S/content/2301.01764v1.pdf filter=lfs diff=lfs merge=lfs -text
+ptFPT4oBgHgl3EQf7zXe/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+ytFKT4oBgHgl3EQfMC3E/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+v9AyT4oBgHgl3EQfaffQ/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+PNAzT4oBgHgl3EQfIfvC/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+8NAzT4oBgHgl3EQf-v4l/content/2301.01937v1.pdf filter=lfs diff=lfs merge=lfs -text
+2dE4T4oBgHgl3EQfagyV/content/2301.05065v1.pdf filter=lfs diff=lfs merge=lfs -text
+EtE1T4oBgHgl3EQfqgU3/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+8tE3T4oBgHgl3EQfSAk3/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+8tE3T4oBgHgl3EQfSAk3/content/2301.04427v1.pdf filter=lfs diff=lfs merge=lfs -text
+j9AyT4oBgHgl3EQfyPkN/content/2301.00679v1.pdf filter=lfs diff=lfs merge=lfs -text
+j9AyT4oBgHgl3EQfyPkN/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+T9E5T4oBgHgl3EQfAg7p/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+itFKT4oBgHgl3EQfvy5I/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+kdFQT4oBgHgl3EQfmjaf/content/2301.13366v1.pdf filter=lfs diff=lfs merge=lfs -text
+XNE3T4oBgHgl3EQfFwkF/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+qtFKT4oBgHgl3EQfIC2S/content/2301.11732v1.pdf filter=lfs diff=lfs merge=lfs -text
+lNFPT4oBgHgl3EQf2zXD/content/2301.13188v1.pdf filter=lfs diff=lfs merge=lfs -text
+oNFLT4oBgHgl3EQfgi-Y/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+NNAzT4oBgHgl3EQfzP4S/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+i9FKT4oBgHgl3EQfwC46/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+sNFJT4oBgHgl3EQfcCyS/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+v9E2T4oBgHgl3EQf2wjd/content/2301.04165v1.pdf filter=lfs diff=lfs merge=lfs -text
+L9E1T4oBgHgl3EQfHAM0/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+btE3T4oBgHgl3EQfdgrZ/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+ctE0T4oBgHgl3EQfWgAs/content/2301.02278v1.pdf filter=lfs diff=lfs merge=lfs -text
+ZdFJT4oBgHgl3EQf7i0F/content/2301.11678v1.pdf filter=lfs diff=lfs merge=lfs -text
+HNFAT4oBgHgl3EQfth7Z/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+v9E2T4oBgHgl3EQf2wjd/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+j9FQT4oBgHgl3EQfmDaC/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+j9FQT4oBgHgl3EQfmDaC/content/2301.13364v1.pdf filter=lfs diff=lfs merge=lfs -text
+u9FAT4oBgHgl3EQfiB2Y/content/2301.08597v1.pdf filter=lfs diff=lfs merge=lfs -text
+oNFLT4oBgHgl3EQfgi-Y/content/2301.12099v1.pdf filter=lfs diff=lfs merge=lfs -text
+kdFQT4oBgHgl3EQfmjaf/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+ZdFJT4oBgHgl3EQf7i0F/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+pNE4T4oBgHgl3EQfvQ0x/content/2301.05239v1.pdf filter=lfs diff=lfs merge=lfs -text
+btE3T4oBgHgl3EQfdgrZ/content/2301.04536v1.pdf filter=lfs diff=lfs merge=lfs -text
+Z9FRT4oBgHgl3EQfQDdI/content/2301.13520v1.pdf filter=lfs diff=lfs merge=lfs -text
+KNA0T4oBgHgl3EQfCv9N/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+ctE0T4oBgHgl3EQfWgAs/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+GtAzT4oBgHgl3EQfHftK/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+JtFJT4oBgHgl3EQfwi0E/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
diff --git a/0dE0T4oBgHgl3EQfdQAz/content/tmp_files/2301.02373v1.pdf.txt b/0dE0T4oBgHgl3EQfdQAz/content/tmp_files/2301.02373v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..0182e5572c43eabf33a26327278b4a06504bbbbc
--- /dev/null
+++ b/0dE0T4oBgHgl3EQfdQAz/content/tmp_files/2301.02373v1.pdf.txt
@@ -0,0 +1,1482 @@
Astronomy & Astrophysics manuscript no. 44705corr
©ESO 2023
January 9, 2023

A framework for the architecture of exoplanetary systems
II.
Nature versus nurture: Emergent formation pathways of architecture classes

Lokesh Mishra1, 2, Yann Alibert1, Stéphane Udry2, and Christoph Mordasini1

1 Institute of Physics, University of Bern, Gesellschaftsstrasse 6, 3012 Bern, Switzerland
e-mail: exomishra@gmail.com
2 Geneva Observatory, University of Geneva, Chemin Pegasi 51b, 1290 Versoix, Switzerland

Received DD MMM YYYY; accepted DD MMM YYYY

ABSTRACT

In the first paper of this series, we proposed a model-independent framework for characterising the architecture of planetary systems at the system level. There are four classes of planetary system architecture: similar, mixed, anti-ordered, and ordered. In this paper, we investigate the formation pathways leading to these four architecture classes. To understand the role of nature versus nurture in sculpting the final (mass) architecture of a system, we apply our architecture framework to synthetic planetary systems, formed via core accretion, using the Bern model. General patterns emerge in the formation pathways of the four architecture classes. Almost all planetary systems emerging from protoplanetary disks whose initial solid mass was less than one Jupiter mass are similar. Systems emerging from heavier disks may become mixed, anti-ordered, or ordered. Increasing dynamical interactions (planet-planet, planet-disk) tend to shift a system's architecture from mixed to anti-ordered to ordered. Our model predicts the existence of a new metallicity-architecture correlation. Similar systems have a very high occurrence around low-metallicity stars. The occurrence of the anti-ordered and ordered classes increases with increasing metallicity. The occurrence of the mixed architecture first increases and then decreases with increasing metallicity. In our synthetic planetary systems, the role of nature is disentangled from the role of nurture.
Nature (or initial conditions) pre-determines whether the architecture of a system becomes similar; otherwise, nurture influences whether a system becomes mixed, anti-ordered, or ordered. We propose the 'Aryabhata formation scenario' to explain some planetary systems which host only water-rich worlds. We finish this paper with a discussion of future observational and theoretical works that may support or refute the results of this paper.

Key words. Planetary systems - Planets and satellites: detection - Planets and satellites: formation - Planets and satellites: physical evolution

1. Introduction

Studying planetary systems as single units of a physical system makes them amenable to system-level examinations. Investigating the ensemble of bound objects (host star(s), planets, minor bodies) coherently can allow a deeper and more comprehensive understanding of exoplanetary astrophysics to emerge. The purview of this multi-body physics covers a breadth of topics, including the stability of planetary systems (Gladman 1993; Laskar 1997, 2000; Chambers 1999; Fang & Margot 2013; Pu & Wu 2015; Laskar & Petit 2017; Obertas et al. 2017; Petit et al. 2018; Wang et al. 2019; Yeh et al. 2020; Tamayo et al. 2020; Turrini et al. 2020), stellar host and protoplanetary disk properties (Petigura et al. 2018; Manara et al. 2019; Mulders et al. 2021), novel approaches to system-level characterisation (Tremaine 2015; Kipping 2018; Alibert 2019; Mishra et al. 2019; Gilbert & Fabrycky 2020; Bashi & Zucker 2021; Sandford et al. 2021), and the architecture of planetary systems (Lissauer et al. 2011; Ciardi et al. 2013; Fabrycky et al. 2014; Weiss et al. 2018; Millholland et al. 2017; Adams 2019; Adams et al. 2020; Mulders et al. 2020; He et al. 2019; He et al. 2021; Mishra et al. 2021; Adibekyan et al. 2021; Millholland & Winn 2021; Winter et al. 2020).
Analysing multi-body, system-level physics may allow us to understand whether planetary systems are self-organising emergent structures, i.e. whether global-level patterns emerge from local-level interactions.

Inspired by the peas-in-a-pod architecture (Weiss et al. 2018; Millholland et al. 2017; Mishra et al. 2021), we introduced a new framework for studying the architecture of planetary systems (Mishra et al. 2023; hereafter Paper I). Studying the architecture as a global, system-level phenomenon, this framework allows us to characterise, quantify, and compare the architecture of individual planetary systems. Four classes of planetary system architecture emerged from this framework. These classes are labelled similar, mixed, anti-ordered, and ordered, depending on the arrangement and distribution of planets around the host star. The key idea behind this framework is that the arrangement and distribution of planets contain additional information that cannot be extracted by studying single planets individually. Hints of the presence of this additional information were revealed in some works (Tremaine 2015; Laskar & Petit 2017; Kipping 2018; Mishra et al. 2019; Gilbert & Fabrycky 2020; Sandford et al. 2021).

Explaining the formation, evolution, and final assembly of planetary systems remains an outstanding theoretical problem. Planet-formation physics spans astronomical orders of magnitude in mass, size, and time (Udry & Santos 2007; Armitage 2010). The processes occurring during planet formation convert gases and micron-sized dust particles from the protoplanetary disk into different kinds of planets arranged in different architectures over timescales of millions and billions of years.
However, it remains unclear how initial conditions derived from the host star or protoplanetary disk combine with the formation and evolution processes to give rise to the observed exoplanetary systems.

arXiv:2301.02373v1 [astro-ph.EP] 6 Jan 2023

We are interested in understanding the role of nature versus nurture in sculpting the final planetary system, and the extent to which the character of the mature planetary system is influenced by its initial conditions. Kipping (2018) suggested, using an entropy-like formulation for planetary systems, that the initial conditions of planet formation could be inferred from their present-day architecture. However, the presence of stochastic processes makes it difficult to connect the initial conditions with the final system. It is also unclear whether stochastic physical processes can erase all memory of initial conditions, or indeed leave their own impressions on the final architecture. Using ideas from the field of machine-learning-based natural language processing, Sandford et al. (2021) showed that planetary systems are not randomly assembled. While it is clear that planetary systems are not identical copies of one another, the quest to quantify the similarity between planetary systems is a tantalising one.

In this paper, we investigate the formation pathways that lead to the four architecture classes. Due to the stochastic nature of this problem, understanding the formation of a single planetary system can be very complicated. For example, two systems with almost identical initial conditions may evolve into two completely different planetary systems. Chaos arising from multi-body gravitational interactions may cause differing formation pathways for these two systems. However, some patterns are found to emerge when studying planetary systems as part of an ensemble.
These trends, as we show in this paper, help us +to understand the role played by initial conditions and physical +processes in shaping the architecture. +Figure 1 (bottom) summarises the main findings of this pa- +per. We show that the effects of planet formation and evolution +processes are imprinted in the system-level architecture. Fig- +ure 1 shows the formation pathways of the architecture classes +that emerge due to the system-level approach of our architecture +framework (Fig. 1 (top)). This sankey diagram has nodes for pro- +toplanetary disk gas mass, protoplanetary disk solid mass, metal- +licity, and planetary architecture. We find that the formation of +similar planetary systems is dominated by initial conditions. If +the initial conditions disfavour the formation of similar archi- +tecture, the other three architectures may emerge. Whether the +final architecture is mixed, ordered, or anti-ordered seems to de- +pend on the stochastic formation processes. Increasing dynam- +ical interactions (disk–planet, planet–planet) generally tends to +produce mixed, anti-ordered, and then ordered architectures, re- +spectively. +We first summarise the architecture framework and some re- +sults from Paper I in Sect. 2. We study the role of nature (initial +conditions) and nurture (dynamical processes) in Sects. 3 and +4, respectively. In these sections, we study the influence of pro- +toplanetary disk mass, metallicity, protoplanetary disk lifetime, +planet–disk interactions, planet–planet interactions, and N-body +interactions on the final architecture of simulated planetary sys- +tems. We summarise our results, suggest possible future studies, +and conclude this paper in Sect. 6. +2. Summary of Paper I and the Bern model +2.1. Architecture framework +The arrangement of multiple planets and the collective distri- +bution of their physical properties around the host star(s) char- +acterises the architecture of a planetary system (Mishra et al. 
Fig. 1. The four classes of planetary system architecture and their emergent formation pathways.
Top: Reproduced from Paper I – Schematic diagram depicting the four classes of planetary system architecture: similar, anti-ordered, mixed, and ordered. Depending on how a quantity (such as mass or size) varies from one planet to another, the architecture of a system can be identified. The framework is model independent.
Bottom: Emergence of formation pathways: Sankey diagram depicting the emergence of formation pathways of architecture classes. The thickness of the links and nodes is proportional to the relative number of synthetic systems in our simulation. This result is derived from synthetic planetary systems around a solar-mass star via the Bern model. Disk gas mass and metallicity are binned at their median values.
2021). To quantify the architecture of a planetary system, we developed a novel model-independent framework in Paper I. Some key aspects of this framework are briefly summarised here, and we refer the reader to Sect. 3 of Paper I for details.

Conceptually, the framework defines four classes of planetary system architecture: similar, mixed, anti-ordered, and ordered. Consider a planetary quantity (such as mass, radius, etc.) as a function of the distance of the planet to the host star (see Fig. 1). When all planets in a system have similar values of a planetary quantity, the architecture of such systems is similar.
When the planetary quantity increases with increasing distance, the system is said to exhibit an ordered architecture. Alternatively, if the quantity shows an overall decreasing trend with increasing distance, the architecture is considered to be anti-ordered. Finally, the planetary quantities could also show variations that are not captured in the three classes above. A mixed architecture may depict large, increasing, and decreasing variations with distance. By studying the variation of a planetary quantity with distance for all planets in the system, our framework captures the arrangement and distribution of planets in the system.

The architecture of a system is quantified via two coefficients: the coefficient of similarity, C_S(q_i), and the coefficient of variation, C_V(q_i). Here, q_i represents a planetary quantity (e.g. mass, radius, eccentricity, density) for the i-th planet. When the coefficients are calculated using planetary masses, they inform us about the mass architecture of a system, that is, the arrangement and distribution of mass in a given system. Likewise, we can study the radius architecture, density architecture, water-mass-fraction architecture, eccentricity architecture, and so on. The versatility of our architecture framework lies in its ability to allow us to study the multifaceted architectures of a planetary system. In Paper I, we explored the relationship between these different kinds of architectures. As in Paper I, we identify the architecture of a system by its bulk mass architecture.

Calibrated on planetary masses, a classification scheme to identify the architecture class was proposed in Paper I (eq. 8). The C_S versus C_V plane represents the architecture space for planetary systems (Fig. 3 in Paper I).
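As a minimal numerical sketch (not the exact Paper I definitions — the coefficient of similarity C_S is given by eq. 8 of Paper I and is not reproduced here), the standard coefficient of variation of a planetary quantity can be computed as follows; the function name and the example masses are illustrative only:

```python
from statistics import fmean, pstdev

def coefficient_of_variation(q):
    """C_V = population standard deviation / mean of a planetary
    quantity q (e.g. masses, ordered by distance from the star)."""
    return pstdev(q) / fmean(q)

# A 'similar'-looking system: nearly equal masses -> small C_V.
print(round(coefficient_of_variation([1.0, 1.1, 0.9, 1.05]), 3))   # 0.073
# A strongly varying system (e.g. ordered masses) -> large C_V.
print(round(coefficient_of_variation([0.5, 2.0, 10.0, 80.0]), 3))  # 1.429
```

A small C_V thus flags a similar-type mass arrangement, while a large C_V indicates one of the other three classes; telling ordered from anti-ordered additionally requires the trend with distance.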
This new parameter space was found to be endowed with a curious mathematical property, namely that planetary systems cannot occupy all parts of the architecture plane, as some regions of this parameter space are mathematically forbidden.

To understand the implications of this architecture framework, we applied it to several catalogues in Paper I. These included 41 observed multi-planetary systems and numerically simulated systems via population synthesis using the Generation III Bern model (Emsenhuber et al. 2021a,b).

2.2. Bern model

For the synthetic planetary systems, as the initial conditions and the physical processes are known, it is possible (and desirable) to understand how the different architecture classes are formed. As this paper is dedicated to planet formation and its imprints on architecture, we briefly review the ingredients of the Bern model here. Readers interested in further details of this model are referred to the recent NGPPS series of papers (Emsenhuber et al. 2021a,b; Schlecker et al. 2021a; Burn et al. 2021; Schlecker et al. 2021b; Mishra et al. 2021). The historic development of the Bern model may be traced through the works of Alibert et al. (2004, 2005); Mordasini et al. (2009); Alibert et al. (2011); Mordasini et al. (2012a,b); Alibert et al. (2013); Fortier et al. (2013); Marboeuf et al. (2014b); Thiabaud et al. (2014); Dittkrist et al. (2014); Jin et al. (2014), and is reviewed in Benz et al. (2014); Mordasini (2018).

Based on the core-accretion paradigm (Pollack et al. 1996), the Bern model is a global model of planet formation and evolution. The model studies the growth of several lunar-mass protoplanetary embryos embedded in protoplanetary disks (consisting of a gaseous and a solid phase) around a solar-type star. The disk model is based on viscous angular momentum transport (Lynden-Bell & Pringle 1974; Veras & Armitage 2004; Hueso & Guillot 2005).
Turbulence is characterised using the Shakura & Sunyaev (1973) approach. The initial mass of the solid disk depends on the metallicity of the star and also on the condensation state of the molecules in the disk (Thiabaud et al. 2014). The solids in the disk are composed of a swarm of rocky and icy planetesimals. The solids in the disk evolve via (a) accretion by growing planets, (b) interaction with the gaseous disk, (c) dynamical stirring from planets and other planetesimals, and so on (Fortier et al. 2013). The 1D geometrically thin disk evolution is studied out to 1000 au.

This star–disk–embryo numerical system is endowed with several physical processes, which occur simultaneously and are self-consistently coupled. Some of these physical processes are: stellar evolution (Baraffe et al. 2015), interactions between the viscous protoplanetary disk and the star (Lynden-Bell & Pringle 1974; Shakura & Sunyaev 1973; Clarke et al. 2001; Matsuyama et al. 2003; Veras & Armitage 2004; Nakamoto & Nakagawa 1994; Hueso & Guillot 2005), condensation of volatile and/or refractory species (Marboeuf et al. 2014b,a; Thiabaud et al. 2014), planet formation physics (Alibert et al. 2013; Fortier et al. 2013; Mordasini et al. 2012b), orbital and tidal migration (Coleman & Nelson 2014; Paardekooper et al. 2011; Dittkrist et al. 2014), gravitational N-body interactions (Chambers 1999; Alibert et al. 2013; Emsenhuber et al. 2021a,b), atmospheric escape (Jin et al. 2014), bloating (Sarkis et al. 2021), and so on (see Fig. 1 in Mishra et al. (2019) for a schematic diagram). In addition, the model also calculates the internal structure of all planets, assuming them all to be spherically symmetric.
In the synthetic planetary population used in the present work, some initial conditions are fixed: a 1 M⊙ star, a disk viscosity α = 2 × 10−3, gas and planetesimal disks whose initial shapes follow power laws (Veras & Armitage 2004), and planetesimals of 300 m size with fixed density (rocky 3.2 g cm−3, icy 1 g cm−3). We add 100 protoplanetary embryos to the protoplanetary disk and ensure that no two embryos start within 10 Hill radii of each other (Kokubo & Ida 1998, 2002). This model is then run 1000 times while varying the other initial conditions: the initial gas mass in the protoplanetary disk, disk lifetime, stellar metallicity, disk inner edge, and the initial location of the protoplanetary embryos (for details see Emsenhuber et al. 2021b).

The Bern model includes a significant variety of physics and uses plausible choices of initial conditions, which are motivated by observations. However, it is only a simplified low-dimensional approximation of our current understanding of planet formation. For example, we model planet formation via core accretion only and ignore other mechanisms, such as disk instability (Schib et al. 2021). Among others, we also assume that the dust-to-gas ratio is the same for both the host star and the disk, and that all dust in the disk is aggregated into planetesimals. The N-body interactions are tracked for only 20 Myr, which may be inadequate to capture dynamical effects occurring in the outer parts of the system. The assumptions, choices, and simplifications made in this model may have a strong impact on the outcome of this paper. Nevertheless, exploring the implications of our architecture framework using synthetic populations via the Bern model is a necessary first step.
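The 10-Hill-radii spacing criterion can be sketched as follows, using the standard mutual Hill radius R_H = 0.5 (a1 + a2) [(m1 + m2)/(3 M_star)]^(1/3) of Kokubo & Ida; the function names, units, and the approximate Earth-masses-per-solar-mass constant are illustrative assumptions, not the model's actual implementation:

```python
M_EARTH_PER_SUN = 332946.0  # approximate Earth masses per solar mass

def mutual_hill_radius(a1, a2, m1, m2, m_star=M_EARTH_PER_SUN):
    """Mutual Hill radius of two bodies: semi-major axes a1, a2 in au,
    masses m1, m2 and m_star in Earth masses."""
    return 0.5 * (a1 + a2) * ((m1 + m2) / (3.0 * m_star)) ** (1.0 / 3.0)

def well_separated(a, m, n_hill=10.0, m_star=M_EARTH_PER_SUN):
    """True if every adjacent pair of embryos is spaced by at least
    n_hill mutual Hill radii (the 10 R_H criterion quoted above)."""
    pairs = sorted(zip(a, m))
    return all(a2 - a1 >= n_hill * mutual_hill_radius(a1, a2, m1, m2, m_star)
               for (a1, m1), (a2, m2) in zip(pairs, pairs[1:]))

# Three lunar-mass embryos (~0.0123 Earth masses) around a 1 M_sun star.
print(well_separated([0.5, 1.0, 2.0], [0.0123, 0.0123, 0.0123]))  # True
```

For lunar-mass embryos the mutual Hill radius is only a few 10^-3 of the orbital distance, so the 10 R_H floor is easily satisfied by percent-level spacings.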
The main result of this paper is not an understanding of the formation of any single planetary system; rather, it is to show that, for the different architecture classes, discernible patterns of formation pathways emerge. Future studies could apply our architecture framework (from Paper I) with other planet formation models. If the formation pathways for the different architecture classes were found to remain the same after using different formation models, then our results would be strengthened and become more robust.

3. Nature: Role of star and disk initial conditions

In this section, we study the connection between the initial conditions and the final architecture of a system. We begin by counting the number of different architecture classes that emerge from our population synthesis as a function of the various initial conditions that are varied. The role of varying disk masses and stellar metallicities is presented in Sect. 3.1, and that of varying disk lifetimes in Sect. 3.2. For completeness, we measure the relative count for an architecture class within a bin by dividing the number of systems of a particular architecture class in a bin by the total number of systems in that bin. We emphasise that, as in Paper I, the architecture of a system is identified with its bulk mass architecture. Thus, when we refer to a similar or ordered system, we are referring to a system whose bulk mass architecture is similar or ordered, respectively.

3.1. Protoplanetary disk: Mass and stellar metallicity

Figure 2 (upper left) shows the dependence of the architecture class relative counts on the initial mass of gas in the protoplanetary disk. Over 96% of all disks that started with gas masses ≲ 0.04 M⊙ give rise to planetary systems of similar architecture. About 1% of these low-mass disks lead to each of the other three architecture classes.
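The binned relative count, together with the 100/√N error-bar length used in the figure captions of this paper, can be sketched as follows; the function name, binning helper, and toy data are illustrative assumptions:

```python
from bisect import bisect_right

def relative_counts(values, labels, bin_edges):
    """Per-bin relative count (%) of each architecture class, together
    with the 100/sqrt(bin count) error-bar length used in the figures."""
    classes = sorted(set(labels))
    n_bins = len(bin_edges) - 1
    binned = [[] for _ in range(n_bins)]
    for v, lab in zip(values, labels):
        i = bisect_right(bin_edges, v) - 1   # index of the bin holding v
        if 0 <= i < n_bins:
            binned[i].append(lab)
    result = {}
    for i, labs in enumerate(binned):
        if labs:
            err = 100.0 / len(labs) ** 0.5
            result[i] = {c: (100.0 * labs.count(c) / len(labs), err)
                         for c in classes}
    return result

# Toy population: disk gas masses (M_sun) and final architecture classes.
gas_mass = [0.01, 0.02, 0.03, 0.10, 0.14, 0.15]
arch = ["similar", "similar", "similar", "mixed", "anti-ordered", "ordered"]
print(relative_counts(gas_mass, arch, bin_edges=[0.0, 0.04, 0.16]))
```

Each bin returns, per class, the pair (relative count in %, error-bar length); sparsely populated bins automatically get larger error bars.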
The relative count of systems with similar architecture shows a clear decreasing trend with increasing disk gas mass.

The production of the remaining three architecture classes tends to increase with increasing disk gas mass, but with distinct trends. As the mass in the gas disk increases, the relative count of mixed architectures increases first, and then decreases for gas mass ≳ 0.12 M⊙. The relative counts for both anti-ordered and ordered architectures continue to increase with increasing disk mass. Anti-ordered architectures become the most common outcome from large disks with gas mass ≳ 0.12 M⊙.

In Fig. 2 (upper right), we see the binned relative count of the different architecture classes as a function of the mass of the solids in the protoplanetary disk. This plot shows some of the same features that we saw in Fig. 2 (upper left). About 99% of all disks that have solid masses ≲ 200 M⊕ give rise to similar planetary systems. The production of similar architecture decreases as the mass of solids in a disk is increased.

Before continuing, we note that this is already a result of considerable importance. The physical processes encoded in the Bern model are the same for all 1000 planetary systems. The only difference between these synthetic systems arises from the variations in their initial conditions. We are seeing that almost all low-mass disks give rise to only one architecture, the similar class. This occurs despite all the physical processes that can act upon the system and induce some architectural variation. As we show below, the low mass of the disk limits some of the physical processes that sculpt a system's architecture. We conclude that the production of systems of the similar architecture class is dominated by initial conditions.

Close to 60% of all observed systems in our multi-planetary systems catalogue (from Paper I) are similar in their mass architecture.
For some of these similar-class systems (such as Trappist-1 and TOI-178), if their formation is via core accretion, our work may suggest strong limits on the initial mass of their protoplanetary disks.

The relative count of the other three architecture classes increases as the solid mass in the disk increases. The production of mixed architectures peaks for disks with ≈ 1 MJ of solids and then decreases. The prevalence of anti-ordered and ordered architectures continues to increase with increasing disk mass. For very massive disks, anti-ordered architecture is the most common outcome.

Figure 2 (middle left) shows the relative count of each architecture class in the synthetic population as a function of stellar metallicity. Figure 2 (middle right) shows the same for the 41 observed multi-planetary systems. The selection criterion for our observed catalogue is detailed in Paper I. We find an interesting correlation between the metallicity and the architecture of a system, hereafter referred to as the metallicity–architecture correlation, and note the following trends. Over 98% of all systems with [Fe/H] < −0.2 are of similar type. The relative count of similar architecture decreases as the metallicity is increased. The relative counts of the other three architecture classes are below 5% for metallicities ≤ −0.2. At different rates, the relative counts of the mixed, ordered, and anti-ordered classes increase with increasing metallicity. Our catalogue of observed planetary systems shows an encouragingly similar trend.

Our catalogue of observed systems suffers from detection biases and incompleteness. One way in which these limitations manifest is that we do not find any observed example of anti-ordered architecture. The qualitative trend for the relative count of observed system architectures as a function of their stellar metallicity agrees with our synthetic systems.
For example, the relative count of similar observed systems decreases with increasing metallicity. The relative count of ordered architectures continues to increase with increasing metallicity.

To understand the origin of these correlations, we study the relation between initial disk mass (both in solids and gases), stellar metallicity, and the final architecture of the systems in our model. In the Bern model, the initial solid mass of the disk is a fraction of the initial gas mass of the disk. This fraction is correlated with the dust-to-gas ratio, which also depends on the gas mass itself because the locations of the different icelines depend on it. By simulating systems with varying dust-to-gas ratio (f_D/G), we simulate systems around stars with different metallicities. This is due to the following relation:

10^[Fe/H] = f_D/G / f_D/G,⊙,   with f_D/G,⊙ = 0.0149 (Lodders 2003).   (1)

The metallicities in our simulations vary from −0.6 to 0.5, following Santos et al. (2005).

Figure 2 shows the solid disk mass as a function of the gas disk mass (bottom left) and the total mass in the planets as a function of the solid disk mass (bottom right). Each point represents one planetary system, and the shape and colour of the marker show its final architecture. These two plots help us understand the correlations discussed above.

The bottom left panel of Fig. 2 shows the relationship between gas disk mass, solid disk mass, metallicity, and the final architecture of the system. Generally, when the mass of the solids in a disk is ≳ 1 MJ (≈ 318 M⊕), the production of architectures other than similar is triggered. We note that up to a certain gas disk mass (≲ 0.02 M⊙), irrespective of the metallicity, all disks lead to similar architecture. For heavier gas disks (≳ 0.02 M⊙), metallicities begin to play a role.
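Relation (1) can be evaluated directly in both directions; this is a minimal sketch with illustrative function names:

```python
from math import log10

F_DG_SUN = 0.0149  # solar dust-to-gas ratio (Lodders 2003)

def dust_to_gas_from_metallicity(fe_h):
    """Eq. (1): f_D/G = f_D/G,sun * 10**[Fe/H]."""
    return F_DG_SUN * 10.0 ** fe_h

def metallicity_from_dust_to_gas(f_dg):
    """Inverse of eq. (1): [Fe/H] = log10(f_D/G / f_D/G,sun)."""
    return log10(f_dg / F_DG_SUN)

print(metallicity_from_dust_to_gas(0.0149))          # 0.0 (solar)
print(round(dust_to_gas_from_metallicity(-0.6), 5))  # metal-poor disk
```

The metallicity range −0.6 to 0.5 quoted above thus corresponds to dust-to-gas ratios of roughly 0.0037 to 0.047.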
If the gas disk mass is high enough, even low metallicities (≈ −0.2) can trigger the production of architectures other than the similar class. However, for lower gas disk masses, higher metallicities are required to produce about 1 MJ of mass in the solid disk.

Fig. 2. Role of disk mass and the metallicity–architecture correlation. The top two rows show the binned relative count of each architecture class as a function of initial disk gas mass (upper left), disk solid mass (upper right), stellar metallicity in the synthetic population (middle left), and stellar metallicity in observed systems (middle right).
The length of the error bars reflects the total number of systems in each bin, as 100/√(bin counts). In the bottom panels, each point corresponds to a single planetary system. The system architecture is indicated by the colour and shape of the marker. The bottom left panel shows the solid mass in the disk as a function of the disk gas mass. The two diagonal lines convey the role of stellar metallicity. The dashed horizontal line indicates the mass of Jupiter. The bottom right panel shows the total mass in planets as a function of the solid mass in the protoplanetary disk. The two diagonal lines indicate the efficiency of converting solids from the disk into planets. If the planets in a hypothetical system could accrete all the solid mass of its disk, and these planets had no gaseous atmosphere, then such a system would lie on the diagonal line corresponding to 100% accretion efficiency. The dashed vertical line indicates the mass of Jupiter.

It is clear that the mass in the solids of the protoplanetary disk plays an essential role here. The bottom right panel of Fig. 2 explains the above statement. The total mass in the planets increases as the mass of solids in the disk increases. When the mass of solids in the disk is ∼ 1 MJ, the distribution of total mass in planets shows a jump. This is because massive planets can begin to accrete significant amounts of gas. For the core-accretion scenario, this plot suggests that similar architectures occur for low-mass disks because they cannot produce massive giant planets. Gas giants are very effective in inducing dynamical stirring, which is in turn responsible for shaping the system architecture. This signifies the role played by physical processes in producing the mixed, anti-ordered, and ordered architectures¹.

3.2. Lifetime of the protoplanetary disk

In this section, we explore the role of disk lifetime (i.e.
the age of a protoplanetary disk) in defining the final architecture class of a system. The lifetime of a disk, in the Bern model, is influenced by the external disk photo-evaporation rate (see Emsenhuber et al. (2021a) for details) and the mass of the disk.

Figure 3 (left) shows the binned relative count of system architecture as a function of disk lifetime. About 80% of all disks with lifetimes ranging from 1 to 5 Myr produce systems of the similar architecture class. The relative count of similar systems decreases as disk lifetime increases. The relative count of mixed architecture does not show any significant variation with disk lifetime. The relative counts of anti-ordered and ordered architectures vary as the disk lifetime increases. This suggests that the physical mechanisms by which disks shape the final architectures of systems play a role in shaping similar, anti-ordered, and ordered architectures.

The trends of the relative counts of architecture classes with disk lifetime are similar to the distribution of relative counts as functions of disk mass. We would like to understand whether system architecture is influenced by disk lifetime directly or via an inherent dependence of disk lifetime on disk mass. The right panel of Fig. 3 shows the gas disk mass as a function of disk lifetime. The scatter plot depicting each individual disk shows that, generally, low-mass disks have short lifetimes. The solid lines depict the average gas mass for each architecture class for each disk lifetime bin.

The gas mass of the disks that go on to form systems of mixed, anti-ordered, or ordered architecture shows a weak dependence on disk lifetime. On average, the more massive disks seem to last longer. For disks that give rise to the similar architecture class, this trend is clearly visible. If more massive disks also live longer, this partly explains the relative count distribution seen in Fig. 3 (left).
However, disks also affect the planetary architecture in other interesting ways, namely orbital migration and the damping of eccentricity and inclination. We study the effect of these planet–disk interactions in shaping system architecture in Sect. 4.1.

¹ The architecture framework is not sensitive to the absolute value of a planetary quantity, such as mass, but only to the ratio of the quantities for adjacent planets. Independent of the architecture framework, we will present another system-level framework analysing the state of a planetary system. This other classification framework is sensitive to the absolute mass of a planet and will address the role of giant planets on system-level properties. The state classification framework reveals a drastic difference between systems with and without giant planets (Mishra et al. in prep.).

4. Nurture: Role of dynamical stirring

Whether or not the final architecture of a planetary system is pre-determined by its initial conditions from the host star and the protoplanetary disk remains unclear. If not, the mechanism by which dynamical processes shape the architecture of a planetary system remains to be determined. It also remains unclear as to whether or not dynamical processes remove all traces of initial conditions from the final system, or whether these stochastic processes leave their impressions on the final architecture. In this section, we try to answer these questions. We focus our attention on dynamical interactions between planets and the protoplanetary disk, and the gravitational multi-body interactions amongst planets themselves.

While there exist several dynamical mechanisms that shape the final architecture, we simplify the task before us by concentrating on violent dynamical instabilities that change a planetary system in a non-trivial manner.
For each synthetic planetary system, we count the number of planet–planet mergers, planetary ejections, and planets falling into their host star. We use these counts as a proxy to assess the strength of dynamical interactions that occur in a system. In the subsequent subsections, we study planet–disk interactions and planet–planet interactions (mergers, ejections, stellar accretion). These dynamical effects give rise to stochasticity and are therefore inherently unpredictable. However, we hope that the underlying dynamical processes that are sculpting the system architecture emerge as patterns in the counts of these violent events.

4.1. Planet–disk interactions

Protoplanetary disks interact with planets via several mechanisms. Planets may experience orbital migration via gravitational interactions with the disk. Low-mass planets undergo type I migration, which in the Bern model is implemented following the approaches of Coleman & Nelson (2014); Paardekooper et al. (2011). Massive planets may open a gap in the disk and undergo type II migration (Dittkrist et al. 2014). The disk also dampens the eccentricity and inclination of planets, which is coherently applied within the N-body integrator. Readers interested in the details of the implementation are referred to Emsenhuber et al. (2021a,b).

Figure 4 (left) shows the count of mergers and ejections for each planetary system in our synthetic population as a function of the lifetime of its protoplanetary disk. For an easier visualisation of any underlying trend, we also show the average merger and ejection counts for each disk lifetime bin. The number of planet–planet mergers shows a clear correlation with disk lifetime. Disks that live longer usually give rise to planetary systems that undergo more mergers than those emerging from short-lived disks. We refer to this correlation as 'migration assisted mergers'.
One possible explanation for this correlation could be that disks allow planets to migrate depending on their mass². Two adjacent planets that are not migrating at the same rate, perhaps owing to their different masses, can come close enough for a merger to occur. The number of ejections does not show any clear trend with disk lifetime. Disks dampen a planet's eccentricity and inclination. As ejection requires extremely violent interactions (marked by

² There could be other scenarios which contribute to the 'migration assisted mergers' correlation. For example, migration may allow planets to become more massive by accreting more material due to increased access to planetesimals (Alibert et al. 2005). Massive planets may interact more amongst themselves, leading to more mergers.

Fig. 3. Role of disk lifetime on system architecture. Left: Binned relative counts of architecture classes as a function of disk lifetime. The length of the error bars reflects the total number of systems in each bin, as 100/√(bin counts). Right: Scatter plot showing the disk gas mass as a function of disk lifetime. The solid lines show the binned average gas disk mass for each architecture class.
high eccentricities and inclinations), disks may essentially inhibit planetary ejections.

Fig. 4. Effect of planet–disk interactions on architecture. Left: Scatter plot shows the number of planet–planet mergers and planetary ejections that occurred in systems as a function of disk lifetime. The solid lines show the average counts for each disk lifetime bin. Right: Distribution of the total number of mergers (dashed) and ejections (solid) for the entire synthetic population. The black line depicts the nominal synthetic population, and the red line depicts a different synthetic population in which the disk–planet interactions were artificially switched off.

To test these ideas, we simulated another population of 1000 planetary systems. In this population (NG140), planet–disk interactions (gas-driven migration, and eccentricity and inclination damping) are artificially switched off. For all such systems, we count the number of mergers and ejections and compare them with our nominal population. Figure 4 (right) shows the distribution of the number of planet–planet mergers and planetary ejections in the two populations.

As expected, the number of planet–planet mergers decreases (the distribution shifts to the left) when planet–disk interactions are switched off. This confirms the migration-assisted mergers correlation presented above. The number of ejections, on the other hand, increases significantly when planet–disk interactions are switched off.
When the damping of the planetary eccentricity and inclination by the disk is switched off, the gravitational interactions between planets increase, such that many planets are ejected.

We make two observations from the results presented so far. First, counts of mergers and ejections seem to be a good proxy for the prevalence of dynamical interactions, as they capture some of the well-established dynamical effects concerning planet–disk interactions. Second, we observe that disks affect system architecture in a multitude of ways. While disk mass shows a direct relation to final architecture, disks also affect system architecture indirectly by influencing the dynamical interactions that occur therein. Long-lived disks give rise to more mergers and inhibit planetary ejections. Conversely, systems emerging from short-lived disks experience fewer mergers.

4.2. Planet–planet interactions

Above, we show that planet–disk interactions in the Bern model may influence the dynamical interactions occurring in a system. Now, in this section, we are interested in understanding how these violent events shape the final architecture of a system. Planets interact with each other gravitationally. These multi-body interactions are tracked via an N-body integrator in the Bern model. The end result of some of the more violent interactions is that planets are lost via one of several channels: planet–planet mergers³, planetary ejections, accretion by the host star, and so on. These channels allow a planetary system to fundamentally alter itself and its architecture.

Figure 5 shows, for each architecture class, the distribution of planet–planet mergers and the number of planets lost via ejections and stellar accretion. At first glance, losing planets to the host star may not seem appropriate for planet–planet interactions.
However, many of these planets meet their fate, in the Bern model, when they are pushed inwards after being captured in mean-motion resonances with other planets (see footnote 4). Therefore, this channel of losing planets is included here. We caution the reader that the absolute number of planets lost via any channel is model-dependent. The quantity of interest here is the relative difference between the different architecture classes.

Figure 5 suggests that the similar architecture class is almost completely shaped by planet–planet mergers. Most similar systems in our simulations have between 40 and 80 mergers taking place within them, and the median number of mergers is 63. Violent dynamical interactions that lead to the ejection of planets seem to be very rare in this architecture type, as 100% of all similar systems lose fewer than five planets via planetary ejection (the median number of ejections is 0). Likewise, similar systems seem not to rely on the stellar accretion channel for losing planets (the median number of stellar accretions is 0).

Systems with mixed architecture also undergo many planet–planet mergers. The number of mergers in mixed systems ranges from 50 to 85, and the median number of mergers is 70. In clear contrast with similar architectures, the ejection and stellar accretion channels play an important role for mixed systems. The median number of planets lost via ejections is 7, and via stellar accretion it is 2.

Anti-ordered systems utilise all three dynamical channels. The distribution of mergers in anti-ordered systems is roughly similar to that of mixed systems. The range is between 50 and 85, and the median number of mergers is 67. However, anti-ordered systems tend to lose more planets via the ejection channel. The number of planets lost via dynamical ejection ranges from 0 to 35, with a median value of 14.5.
Compared to mixed systems, anti-ordered systems also tend to lose more planets via stellar accretion (the median is 6).

Footnote 3: In our model, when the distance between two planets becomes smaller than the sum of their radii, a planet–planet collision is said to occur. We treat such merger events in a simplified manner: the cores of the target–impactor pair are merged, the less massive body loses its envelope, and the impact energy is added to the merged new body following Broeg & Benz (2012), which determines what part of the gaseous envelope is ejected.

Footnote 4: The model also includes the inward migration of planets as a result of stellar tides.

Amongst the four architecture classes, ordered systems seem to undergo the greatest number of dynamical interactions. The distribution of planet–planet mergers in ordered systems shows a tail-like feature. The number of mergers ranges from 55 to 85, with 62 being the median. All ordered systems eject at least five planets. The number of ejections ranges from 5 to 35, and the median is 23. The distribution of planets lost via the stellar accretion channel is shifted to the right: the number of planets accreted by the star ranges from 0 to 20, with 8 being the median.

A comprehensive picture of the role of dynamical history in shaping the final architecture emerges from the four panels in Fig. 5. Similar systems tend to rely only on the merger channel for shaping their system architecture. As planetary systems in all four architecture classes undergo a considerable number of mergers, this channel alone may not suffice to explain or distinguish the emergence of the four architecture classes. This is in line with what was found before, namely that the emergence of the similar class is mostly governed by the initial conditions.
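The merger criterion described in footnote 3 can be sketched as follows. This is a simplified illustration, not the Bern model's actual implementation: the function names are ours, and the impact-energy-driven envelope loss (Broeg & Benz 2012) is deliberately omitted.

```python
import numpy as np

def check_merger(r1, r2, radius1, radius2):
    """Collision criterion: the separation of the two bodies drops below
    the sum of their radii."""
    separation = np.linalg.norm(np.asarray(r1, dtype=float) - np.asarray(r2, dtype=float))
    return separation < (radius1 + radius2)

def merge_planets(m_core1, m_env1, m_core2, m_env2):
    """Simplified merger bookkeeping: the cores of the target-impactor pair
    are merged, and the less massive body loses its gaseous envelope
    (impact-energy-driven partial envelope ejection is neglected here)."""
    merged_core = m_core1 + m_core2
    # keep only the envelope of the more massive body
    kept_env = m_env1 if (m_core1 + m_env1) >= (m_core2 + m_env2) else m_env2
    return merged_core, kept_env
```

In an N-body step, one would evaluate `check_merger` for every close pair and, on a hit, replace the pair by the body returned by `merge_planets`.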
While initial conditions seem to decide whether a system becomes similar or one of the other three architectures, there appears to be a trend in the role of dynamical interactions in shaping the mixed, anti-ordered, and ordered architectures. The distributions of the ejection and accretion channels distinguish these three architectures. These distributions show a shift to the right, indicating that more planets are lost via these two channels as we move from mixed to anti-ordered and to ordered architectures. Thus, we conclude that if the initial conditions do not allow a system to become similar, its fate is decided by its dynamical history, among other effects. If the strength of the dynamical interactions in a system increases, the architecture of the system changes from mixed to anti-ordered or to ordered.

All systems in the Bern model start with 100 protoplanetary embryos. Above, we show that systems of different architectures show a varying propensity to lose planets via the different dynamical channels. This suggests that we should also see an effect of the dynamical history of the four architecture classes in their multiplicity distribution. We observed this effect in Fig. 6 of Paper I. We do not have a way to determine the initial number of embryos of the planetary systems we observe today. Our approach may therefore not be directly applicable to observed planetary systems. We remind the reader that while the quantitative aspects we present in this section are probably model-dependent, the qualitative nature of these results is of paramount importance.

5. The Aryabhata formation scenario

In this section, we propose a planet-formation scenario to explain a feature observed in Paper I (Sect. 5.4). We found that many synthetic planetary systems have a peculiar water-mass-fraction architecture, namely that all planets hosted in these systems are water-rich worlds.
We explain this peculiar feature with the 'Aryabhata formation scenario'.

The first exoplanets to be discovered were hot Jupiters: giant planets orbiting their host stars at very short periods (Mayor & Queloz 1995). Orbital migration was suggested as a possible mechanism to explain these short periods (Lin et al. 1996; Lin & Ida 1997). Theoretical studies indicate that orbital migration and planet–star tidal interactions should make many close-in planets unstable. In the 1990s, Doug Lin described 'the last of the Mohicans' scenario (Garaud 2011). In this scenario, the protoplanetary disk gives rise to planets, many of which are doomed to fall onto the star. The surviving observable planets are those that were able to escape annihilation.

L. Mishra et al.: Architecture Framework II – Nature versus nurture: Emergent formation pathways of architecture classes

Fig. 5. Effect of planet–planet interactions on system architecture. For each architecture class (similar, mixed, anti-ordered, and ordered systems), the panels show a histogram of the counts of planet–planet mergers, ejections, and stellar accretion occurring in the synthetic population (x-axis: planet counts; y-axis: distribution of systems [%]). The y-axis in all panels is scaled to reflect the percentage of systems in each of the four architecture classes. For example, 100% of all similar systems lost fewer than five planets via planetary ejection.
For some simulated systems, we noticed a modified version of this scenario. Protoplanetary disks seem to give rise to planets at different epochs. In the first epoch, several intermediate-mass planets (1–100 M⊕) are formed within the first 1 Myr. Most of these 'first generation' planets are subsequently lost, mainly via giant impacts (and a few are lost via orbital or tidal migration leading to stellar accretion). This purging phase is catastrophic for all planets that started within the ice line. Over the next few million years, a second epoch sees the advent of a 'second generation' of planets. Most of these second-generation planets are born outside the ice line, and are able to migrate inwards during the disk lifetime. After disk dissipation, migration comes to a halt, and many of these planets survive the long-term N-body evolution in our simulations. We call this the Aryabhata formation scenario. The key difference between the two scenarios is that in the Aryabhata formation scenario (a) planets (surviving and lost) are born in different epochs, and (b) most first-generation planets are lost via giant impacts.

We quantify this scenario with the Aryabhata's number, µ, which is the ratio of the number of surviving planets that started inside the ice line to the total number of surviving planets:

Aryabhata's number: µ = n(a_embryo^start ≤ a_ice) / n.    (2)

At the start of our calculations, all systems have an Aryabhata's number of ≈ 0.5 ± 0.1. Figure 12 of Paper I (middle) shows the ice-mass-fraction architecture of the simulated planetary systems. The colour of each point shows the Aryabhata's number.

Most planetary systems with CS(f_ice) ≈ CV(f_ice) ≈ 0 have µ close to zero. This suggests that most (or all) of the surviving planets in such systems started outside the ice line. The formation path of these systems falls into the Aryabhata formation scenario.
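Equation (2) translates directly into code: given the starting semi-major axes of a system's surviving planets, µ is the fraction that started inside the ice line. A minimal sketch (the argument names are ours):

```python
import numpy as np

def aryabhata_number(a_start_surviving, a_ice):
    """Aryabhata's number (eq. 2): fraction of surviving planets whose
    embryos started at or inside the ice line a_ice.

    a_start_surviving : starting semi-major axes of the surviving planets
    a_ice             : semi-major axis of the ice line (same units)
    """
    a = np.asarray(a_start_surviving, dtype=float)
    if a.size == 0:
        return np.nan  # no surviving planets: mu is undefined
    return np.count_nonzero(a <= a_ice) / a.size
```

For example, a system whose four survivors started at 0.5, 1.0, 4.0, and 7.0 au, with an ice line at 2.7 au, has µ = 0.5; a system whose survivors all started outside the ice line has µ = 0, the signature of the Aryabhata formation scenario.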
These classes of systems can be identified by two characteristics: (i) the core water-mass fraction of the different planets in these systems is similar, and (ii) the core water-mass fraction of most planets is high (owing to their origin outside the ice line), making them water-rich planets. Approximately one-fifth of the simulated systems fall into this scenario. Among these, about half are of the similar class, one-third are anti-ordered, and the remaining systems have either a mixed or an ordered mass architecture.

There exists an almost linear relationship between CV(f_ice) and µ. Using scipy's linear regression module, we obtain a slope of 1.8 and an intercept of 0.18 between these two quantities. The correlation coefficient is R = 0.95, indicating a strong correlation between the Aryabhata's number and the coefficient of variation of the core water-mass fraction. This suggests a possibility to identify observed exoplanetary systems that may have originated via the Aryabhata formation scenario. By determining the CV(f_ice) of a system, the Aryabhata's number can be estimated. Systems with low µ values probably arose from this scenario.

For systems that fall into the default scenario (positive CS(f_ice), implying a core water-mass fraction that increases inside-out), the Aryabhata's number is µ > 0. We note that most systems with µ ⪆ 0.6 show similarity in their mass architecture.

Overall, the intra-system core water-mass-fraction architecture of most planetary systems seems to take one of two forms. (i) Those characterised by CS(f_ice) ≈ CV(f_ice) ≈ 0 and µ = 0. These systems are composed of water-rich planets wherein the core water-mass fraction is similar across the different planets. All surviving planets in these systems started outside the ice line. The Aryabhata formation scenario explains these systems. (ii) Those with CS(f_ice) > 0 and µ > 0. These systems represent the 'default' or common outcome of our simulations.
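As an aside, the CV(f_ice)–µ fit reported above can be reproduced with `scipy.stats.linregress`. The sketch below uses synthetic stand-in data built to mimic the reported relation (slope 1.8, intercept 0.18); the arrays are illustrative, not the actual population values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# synthetic stand-in for CV(f_ice) and mu of a population (illustration only)
cv_fice = rng.uniform(0.0, 0.4, size=500)
mu = 1.8 * cv_fice + 0.18 + rng.normal(0.0, 0.03, size=500)

# fit mu as a linear function of CV(f_ice)
res = stats.linregress(cv_fice, mu)
print(f"slope={res.slope:.2f}, intercept={res.intercept:.2f}, R={res.rvalue:.2f}")

# estimate mu for a system with a measured CV(f_ice)
mu_estimate = res.slope * 0.05 + res.intercept
```

A low `mu_estimate` for an observed system would flag it as a candidate for the Aryabhata formation scenario.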
The planetary core water-mass fraction in these systems increases from one planet to another with increasing distance from the host star. Some of the surviving planets started from inside the ice line. At the extreme end, systems in which 60% or more of the surviving planets started inside the ice line tend to have a similar mass architecture.

6. Summary, conclusions, and future work

Paper I of this series introduced a novel, model-independent framework for characterising the architecture of planetary systems at the system level. Planetary-system architectures can be separated into four classes: similar, mixed, anti-ordered, and ordered. This classification is achieved via two quantities: the coefficient of similarity and the coefficient of variation. The mathematical CS-versus-CV architecture space was found to have forbidden regions, that is, regions in which no planetary system can exist. In Paper I, the mass architecture classes of observed and synthetic systems were characterised. The mass architecture of the synthetic systems was compared with their radius architecture, bulk-density architecture, core-mass architecture, spacing architecture, and water-mass-fraction architecture. As in Paper I, we identify a system's architecture with its mass architecture.

In this paper, we explore the core-accretion-based formation pathways, around a solar-like star, of the four classes of planetary system architecture. We tried to disentangle the role of nature (the initial conditions of planet formation) from that of nurture (the physical processes occurring during planet formation). Our findings can be summarised as follows:

1. System-level analysis: Our findings show that a system-level analysis of planetary system architecture via our architecture framework (Paper I) provides an abundance of information.
We show that planet formation and evolution processes leave their imprint on the entire system architecture.

2. Solid disk mass: The initial amount of solids in the protoplanetary disk in our models plays an important role in deciding the architectural fate of a planetary system. Disks with a solid mass (initial content of planetesimals) of ≲ 1 M_J almost always give rise to systems with a similar architecture. Mixed architectures arise most often from disks with solid masses ≈ 1 M_J. Disks with a solid mass ≳ 1 M_J favour the production of anti-ordered and ordered architectures.

3. Gas disk mass and metallicity: The initial gas disk mass and the stellar metallicity influence the final architecture of a planetary system by controlling the initial mass of solids in the disk. Metallicity, in our models, is simply related to the dust-to-gas ratio, which allows us to convert a fraction of the initial gas disk mass into initial dust mass (eq. 1). Applying the architecture framework to the synthetic systems from the Bern model allows us to predict the existence of a metallicity–architecture correlation. The observed correlation between metallicity and final architecture is in qualitative agreement with the Bern model.

4. Metallicity–architecture correlation: The architecture of a planetary system correlates with the metallicity of the host star. Most systems hosted by a low-metallicity star ([Fe/H] < −0.2) have a similar architecture. As the metallicity of the star increases, mixed, ordered, and anti-ordered architectures become increasingly common.

5. Disk lifetime: The occurrence of systems with a similar architecture around short-lived disks is high, and their frequency is reduced around long-lived disks. The frequency of the anti-ordered architecture increases as the disk lifetime increases. These correlations are mediated in at least two ways.
First, disks interact with planets, driving orbital migration and the damping of eccentricities and inclinations. Due to the 'migration-assisted merger' correlation, long-lasting disks allow planetary systems to have, in general, more planet–planet mergers, and they inhibit planetary ejections. These dynamical events shape a system's final architecture. In addition, in our model, disk lifetimes are correlated with disk masses, which also strongly influence the system architecture.

6. Dynamical interactions: Planetary systems can significantly alter their architecture via (at least) three dynamical channels: planet–planet mergers, planetary ejections, and accretion by the host star. All architecture classes in our formation model were found to undergo numerous merger events. Similar systems rely entirely on mergers to shape their final architecture. As the strength of the dynamical interactions experienced by a system (quantified by the number of ejections and/or accretions) increases, the architecture of a system shifts from mixed to anti-ordered to ordered.

7. The Aryabhata formation scenario: Systems following this formation scenario have the following formation pathway. First-generation planets (formed within 1 Myr) are lost, mostly via giant impacts. Second-generation planets started outside the ice line and migrated inwards. The surviving planets are from the second generation and shape the architecture of the system. This scenario explains the roughly 20% of simulated systems whose core water-mass-fraction architecture differs from that of the default scenario. Systems following this formation scenario (i) host only planets that have a high core water-mass fraction and (ii) host only planets that started outside the ice line.
We introduce the Aryabhata's number to identify systems that follow this formation scenario, and we find that 80% of all anti-ordered simulated systems are formed via the Aryabhata formation scenario.

8. Nature versus nurture: Overall, our model suggests that the initial conditions, or 'nature', dictate (via the initial disk mass) whether a system will have a similar architecture or one of the other three architecture classes, namely mixed, anti-ordered, or ordered. If nature does not allow a system to have a similar mass architecture, then the final architecture is controlled by 'nurture', that is, by dynamical interactions, among other possible effects. As the dynamical interactions increase, the final architecture tends to become mixed, then anti-ordered, and then ordered.

We would like to offer readers a word of caution when interpreting our results. Although the architecture framework (from Paper I) is model-independent, the present results hinge critically on the underlying planet formation model, the Bern model. There are several assumptions, simplifications, and choices to be made when simulating synthetic planetary systems using the Bern model. For example, the treatment of planet–planet merging collisions is relatively simple (Ali-Dib et al. 2022). We also assume simplified planet-formation conditions; that is, our star–disk–planet system is isolated enough that we may ignore the influence of the stellar neighbourhood, stellar flybys, and so on (Bate 2012, 2018). The main strength of this study does not lie in providing an explanation of the formation pathway of any particular system. Instead, our main result is the observation that when groups of planetary systems are identified (architecture classes), general trends in their formation pathways emerge. This allowed us to map the roles of nature and nurture in shaping the final architecture of a planetary system.
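The metallicity-to-solids conversion invoked in point 3 of the summary is commonly written as a dust-to-gas ratio that scales as 10^[Fe/H]. The sketch below is a hedged illustration of such a conversion; the solar dust-to-gas value (0.0149, the Lodders 2003 value) and the exact functional form of the Bern model's eq. (1) are assumptions on our part.

```python
def initial_solid_mass(m_gas_disk, fe_h, dust_to_gas_sun=0.0149):
    """Convert an initial gas-disk mass into an initial dust (solid) mass
    via a metallicity-scaled dust-to-gas ratio:
        f_dg = f_dg_sun * 10**[Fe/H],  M_solid = f_dg * M_gas.
    Units of the returned mass follow those of m_gas_disk."""
    dust_to_gas = dust_to_gas_sun * 10.0 ** fe_h
    return dust_to_gas * m_gas_disk

# e.g. a 0.05 solar-mass gas disk around a solar-metallicity star ([Fe/H] = 0)
m_solids = initial_solid_mass(0.05, 0.0)
```

Under this form, raising [Fe/H] by 0.3 dex doubles the solid reservoir of a disk of fixed gas mass, which is the mechanism behind the predicted metallicity–architecture correlation.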
The results of this study can be strengthened or challenged in several observational and theoretical ways. We list some possibilities for future studies emerging from this work:

1. Linking the disk-mass distribution and architecture occurrence rates: Our model suggests that there should be a direct relationship between the mass of the solid disk and the final architecture of a system. While the initial disk mass and the final architecture of the same system can never be observed together, this relation can be tested statistically. The distribution of initial disk masses and the distribution of final system architectures can be linked by formation models. We speculate that in the future, when these two distributions become available, formation models can be used to predict one from the other. In fact, this problem can also be turned around: we can identify the right family of models as those that correctly link the observed distributions of protoplanetary disk masses and architecture occurrence rates. We believe such tests are crucial for the development and eventual emergence of a standard model for exoplanetary astrophysics.

2. Metallicity–architecture correlation: Our work suggests that the current architecture of a planetary system should be related to the metallicity of its host star. As both of these are observable, testing this metallicity–architecture correlation should be feasible. Here, we used a catalogue of 41 observed multi-planet systems (from Paper I) to test this correlation. We find a qualitative agreement between theory and observations. However, our observational catalogue suffers from incompleteness and low-number statistics, which prevents us from making any further assertions.
More observational data are required to confirm or reject the proposed metallicity–architecture correlation. It would also be interesting to estimate the current architecture occurrence rates based on the known metallicity distributions.

3. Confirming formation pathways: Confirming the formation pathways discovered in the present study with observations is challenging. However, the strength of our results will increase if different planet-formation models are studied through the architecture framework. Hence, one possible line of future work involves repeating the present study using different planet-formation models.

4. Extending the architecture framework: So far, we have calibrated our classification scheme for the mass architectures only. Calibrating the architecture classification framework on other quantities may be useful, especially for planetary radii, which are observable via transit surveys; here, the use of machine learning methods may be necessary.

5. Temporal evolution of system architecture: In the nominal Bern model population studied in this paper, 100 protoplanetary embryos of lunar mass are initialised in the protoplanetary disk at the start. This necessarily implies that all planetary systems start as similar-type systems. It would be interesting to inquire whether this is generally true in nature as well. If so, this implies that the 'default' architecture of all planetary systems is similar, and that the physical processes playing out in the system evolve this architecture into the other possibilities. Investigating this may lead to deep insights into the structure of planetary system architecture. In addition, such studies would be necessary to interpret the observed architecture occurrences, as observed planetary systems are seldom of the same age.

6.
External perturbations: Stellar flybys or multi-planetary systems around binaries provide excellent theoretical and observational laboratories with which to study the influence of external perturbations on the architecture of planetary systems. This problem, when turned around, is also useful for deducing the perturbation history (or lack thereof) of observed planetary systems.

This paper presents new insights obtained by analysing planetary systems at the system level. We showed that several patterns emerged in the formation pathways of the four architecture classes. These patterns linked the initial conditions of planet formation with the final architecture of a system, bridging the vast temporal gap of several billion years between the birth of planets and their final assembly.

Acknowledgements. This work has been carried out within the frame of the National Centre for Competence in Research PlanetS supported by the Swiss National Science Foundation. We acknowledge the support of the Swiss National Fund under grants 200020_172746 and 200021_204847 "PlanetsInTime". LM acknowledges the generous hospitality of the "Planet Formation" workshop by the Munich Institute for Astro-, Particle and BioPhysics (MIAPbP), which is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy – EXC-2094 – 390783311.

Data: The synthetic planetary populations (NGPPS) used in this work are available online at http://dace.unige.ch. Software: Python (Van Rossum & Drake 2009), NumPy (Oliphant 2006), Seaborn (Waskom & the seaborn development team 2020), Pandas (pandas development team 2020), Matplotlib (Hunter 2007).

References
Adams, F. C. 2019, MNRAS, 488, 1446
Adams, F. C., Batygin, K., Bloch, A. M., & Laughlin, G. 2020, MNRAS, 493, 5520
Adibekyan, V., Santos, N. C., Demangeon, O. D. S., et al.
2021, A&A, 649, A111
Ali-Dib, M., Cumming, A., & Lin, D. N. C. 2022, MNRAS, 509, 1413
Alibert, Y. 2019, A&A, 624, A45
Alibert, Y., Carron, F., Fortier, A., et al. 2013, A&A, 558, A109
Alibert, Y., Mordasini, C., & Benz, W. 2004, A&A, 417, L25
Alibert, Y., Mordasini, C., & Benz, W. 2011, A&A, 526, A63
Alibert, Y., Mordasini, C., Benz, W., & Winisdoerffer, C. 2005, A&A, 434, 343
Armitage, P. J. 2010, Astrophysics of Planet Formation
Baraffe, I., Homeier, D., Allard, F., & Chabrier, G. 2015, A&A, 577, A42
Bashi, D. & Zucker, S. 2021, A&A, 651, A61
Bate, M. R. 2012, MNRAS, 419, 3115
Bate, M. R. 2018, MNRAS, 475, 5618
Benz, W., Ida, S., Alibert, Y., Lin, D., & Mordasini, C. 2014, in Protostars and Planets VI, ed. H. Beuther, R. Klessen, C. Dullemond, & T. Henning (University of Arizona, Tucson), 691–713
Broeg, C. H. & Benz, W. 2012, A&A, 538, A90
Burn, R., Schlecker, M., Mordasini, C., et al. 2021, A&A, 656, A72
Chambers, J. E. 1999, MNRAS, 304, 793
Ciardi, D. R., Fabrycky, D. C., Ford, E. B., et al. 2013, ApJ, 763, 41
Clarke, C. J., Gendrin, A., & Sotomayor, M. 2001, MNRAS, 328, 485
Coleman, G. A. & Nelson, R. P. 2014, MNRAS, 445, 479
Dittkrist, K.-M., Mordasini, C., Klahr, H., Alibert, Y., & Henning, T. 2014, A&A, 567 [arXiv:1402.5969]
Emsenhuber, A., Mordasini, C., Burn, R., et al. 2021a, A&A, 656, A69
Emsenhuber, A., Mordasini, C., Burn, R., et al. 2021b, A&A, 656, A70
Fabrycky, D. C., Lissauer, J. J., Ragozzine, D., et al. 2014, ApJ, 790, 146
Fang, J. & Margot, J.-L.
2013, ApJ, 767, 115
Fortier, A., Alibert, Y., Carron, F., Benz, W., & Dittkrist, K.-M. 2013, A&A, 549, A44
Garaud, P. 2011, ApJL, 728, L30
Gilbert, G. J. & Fabrycky, D. C. 2020, AJ, 159, 281
Gladman, B. 1993, Icarus, 106, 247
He, M. Y., Ford, E. B., & Ragozzine, D. 2019, MNRAS, 490, 4575
He, M. Y., Ford, E. B., & Ragozzine, D. 2021, AJ, 161, 16
Hueso, R. & Guillot, T. 2005, A&A, 442, 703
Hunter, J. D. 2007, Computing in Science & Engineering, 9, 90
Jin, S., Mordasini, C., Parmentier, V., et al. 2014, ApJ, 795, 65
Kipping, D. 2018, MNRAS, 473, 784
Kokubo, E. & Ida, S. 1998, Icarus, 131, 171
Kokubo, E. & Ida, S. 2002, ApJ, 581, 666
Laskar, J. 1997, Large scale chaos and the spacing of the inner planets, Tech. rep.
Laskar, J. 2000, Physical Review Letters, 84, 3240
Laskar, J. & Petit, A. C. 2017, A&A, 605, 1
Lin, D. N., Bodenheimer, P., & Richardson, D. C. 1996, Nature, 380, 606
Lin, D. N. C. & Ida, S. 1997, ApJ, 477, 781
Lissauer, J. J., Ragozzine, D., Fabrycky, D. C., et al. 2011, ApJS, 197, 8
Lodders, K. 2003, ApJ, 591, 1220
Lynden-Bell, D. & Pringle, J. E. 1974, MNRAS, 168, 603
Manara, C. F., Mordasini, C., Testi, L., et al. 2019, A&A, 631, L2
Marboeuf, U., Thiabaud, A., Alibert, Y., Cabral, N., & Benz, W. 2014a, A&A, 570 [arXiv:1407.7282]
Marboeuf, U., Thiabaud, A., Alibert, Y., Cabral, N., & Benz, W.
2014b, A&A, 570 [arXiv:1407.7271]
Matsuyama, I., Johnstone, D., & Murray, N. 2003, ApJ, 585, L143
Mayor, M. & Queloz, D. 1995, Nature, 378, 355
Millholland, S., Wang, S., & Laughlin, G. 2017, ApJ, 849, L33
Millholland, S. C. & Winn, J. N. 2021, ApJ, 920, L34
Mishra, L., Alibert, Y., Leleu, A., et al. 2021, A&A, 656, A74
Mishra, L., Alibert, Y., & Udry, S. 2019, in EPSC-DPS Joint Meeting 2019, held 15–20 September 2019 in Geneva, Switzerland, id. EPSC-DPS2019-1616
Mishra, L., Alibert, Y., Udry, S., & Mordasini, C. 2023, A&A
Mordasini, C. 2018, in Handbook of Exoplanets, ed. H. J. Deeg & J. A. Belmonte, 143
Mordasini, C., Alibert, Y., & Benz, W. 2009, A&A, 501, 1139
Mordasini, C., Alibert, Y., Georgy, C., et al. 2012a, A&A, 547, A112
Mordasini, C., Alibert, Y., Klahr, H., & Henning, T. 2012b, A&A, 547, A111
Mulders, G. D., O'Brien, D. P., Ciesla, F. J., Apai, D., & Pascucci, I. 2020
Mulders, G. D., Pascucci, I., Ciesla, F. J., & Fernandes, R. B. 2021 [arXiv:2107.12520]
Nakamoto, T. & Nakagawa, Y. 1994, ApJ, 421, 640
Obertas, A., Van Laerhoven, C., & Tamayo, D. 2017, Icarus [arXiv:1703.08426]
Oliphant, T. E. 2006, A guide to NumPy, Vol. 1 (Trelgol Publishing USA)
Paardekooper, S. J., Baruteau, C., & Kley, W. 2011, MNRAS, 410, 293
pandas development team, T. 2020, pandas-dev/pandas: Pandas
Petigura, E. A., Marcy, G. W., Winn, J. N., et al. 2018, AJ, 155, 89
Petit, A. C., Laskar, J., & Boué, G. 2018, A&A, 617, A93
Pollack, J. B., Hubickyj, O., Bodenheimer, P., et al. 1996, Icarus, 124, 62
Pu, B. & Wu, Y.
2015, ApJ, 807, 44
Sandford, E., Kipping, D., & Collins, M. 2021, MNRAS, 505, 2224
Santos, N. C., Israelian, G., Mayor, M., et al. 2005, A&A, 437, 1127
Sarkis, P., Mordasini, C., Henning, T., Marleau, G. D., & Mollière, P. 2021, A&A, 645, A79
Schib, O., Mordasini, C., Wenger, N., Marleau, G. D., & Helled, R. 2021, A&A, 645, A43
Schlecker, M., Mordasini, C., Emsenhuber, A., et al. 2021a, A&A, 656, A71
Schlecker, M., Pham, D., Burn, R., et al. 2021b, A&A, 656, A73
Shakura, N. I. & Sunyaev, R. A. 1973, A&A, 24, 337
Tamayo, D., Gilbertson, C., & Foreman-Mackey, D. 2020, Stability constrained characterization of multiplanet systems
Thiabaud, A., Marboeuf, U., Alibert, Y., et al. 2014, A&A, 562 [arXiv:1312.3085]
Tremaine, S. 2015, ApJ, 807, 157
Turrini, D., Zinzi, A., & Belinchon, J. A. 2020, A&A, 636 [arXiv:2003.05366]
Udry, S. & Santos, N. C. 2007, ARA&A, 45, 397
Van Rossum, G. & Drake, F. L. 2009, Python 3 Reference Manual (Scotts Valley, CA: CreateSpace)
Veras, D. & Armitage, P. J. 2004, MNRAS, 347, 613
Wang, Y., Zhou, J.-l., Liu, F.-y., et al. 2019, MNRAS, 490, 359
Waskom, M. & the seaborn development team. 2020, mwaskom/seaborn
Weiss, L. M., Marcy, G. W., Petigura, E. A., et al. 2018, AJ, 155, 48
Winter, A. J., Kruijssen, J. M., Longmore, S. N., & Chevance, M. 2020, Nature, 586, 528
Yeh, L.-C., Jiang, I.-G., & Gajendran, S.
2020, Astrophysics and Space Science, 365 [arXiv:2012.09431]
diff --git a/0dE0T4oBgHgl3EQfdQAz/content/tmp_files/load_file.txt b/0dE0T4oBgHgl3EQfdQAz/content/tmp_files/load_file.txt
new file mode 100644
index 0000000000000000000000000000000000000000..f326b98aad767ab482b575f909fa61a3257e95af
--- /dev/null
+++ b/0dE0T4oBgHgl3EQfdQAz/content/tmp_files/load_file.txt
@@ -0,0 +1,1225 @@
+filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf,len=1224

Astronomy & Astrophysics manuscript no. 44705corr — ©ESO 2023, January 9, 2023

A framework for the architecture of exoplanetary systems
II. Nature versus nurture: Emergent formation pathways of architecture classes

Lokesh Mishra1, 2, Yann Alibert1, Stéphane Udry2, and Christoph Mordasini1

1 Institute of Physics, University of Bern, Gesellschaftsstrasse 6, 3012 Bern, Switzerland
e-mail: exomishra@gmail.com
2 Geneva Observatory, University of Geneva, Chemin Pegasi 51b, 1290 Versoix, Switzerland

Received DD MMM YYYY; accepted DD MMM YYYY

ABSTRACT

In the first paper of this series, we proposed a model-independent framework for characterising the architecture of planetary systems at the system level.
There are four classes of planetary system architecture: similar, mixed, anti-ordered, and ordered. In this paper, we investigate the formation pathways leading to these four architecture classes. To understand the role of nature versus nurture in sculpting the final (mass) architecture of a system, we apply our architecture framework to synthetic planetary systems, formed via core accretion, using the Bern model. General patterns emerge in the formation pathways of the four architecture classes. Almost all planetary systems emerging from protoplanetary disks whose initial solid mass was less than one Jupiter mass are similar. Systems emerging from heavier disks may become mixed, anti-ordered, or ordered. Increasing dynamical interactions (planet–planet, planet–disk) tends to shift a system's architecture from mixed to anti-ordered to ordered.
Our model predicts the existence of a new metallicity–architecture correlation. Similar systems have very high occurrence around low-metallicity stars. The occurrence of the anti-ordered and ordered classes increases with increasing metallicity. The occurrence of the mixed architecture first increases and then decreases with increasing metallicity. In our synthetic planetary systems, the role of nature is disentangled from the role of nurture. Nature (or initial conditions) pre-determines whether the architecture of a system becomes similar; otherwise, nurture influences whether a system becomes mixed, anti-ordered, or ordered. We propose the 'Aryabhata formation scenario' to explain some planetary systems which host only water-rich worlds.
We finish this paper with a discussion of future observational and theoretical works that may support or refute the results of this paper.

Key words. Planetary systems – Planets and satellites: detection – Planets and satellites: formation – Planets and satellites: physical evolution

1. Introduction

Studying planetary systems as single units of a physical system makes them amenable to system-level examinations. Investigating the ensemble of bound objects (host star(s), planets, minor bodies) coherently can allow a deeper and more comprehensive understanding of exoplanetary astrophysics to emerge.
The purview of this multi-body physics covers a breadth of topics, including the stability of planetary systems (Gladman 1993; Laskar 1997, 2000; Chambers 1999; Fang & Margot 2013; Pu & Wu 2015; Laskar & Petit 2017; Obertas et al. 2017; Petit et al. 2018; Wang et al. 2019; Yeh et al. 2020; Tamayo et al. 2020; Turrini et al. 2020), stellar host and protoplanetary disk properties (Petigura et al. 2018; Manara et al. 2019; Mulders et al. 2021), novel approaches to system-level characterisation (Tremaine 2015; Kipping 2018; Alibert 2019; Mishra et al. 2019; Gilbert & Fabrycky 2020; Bashi & Zucker 2021; Sandford et al. 2021), and the architecture of planetary systems (Lissauer et al. 2011; Ciardi et al. 2013; Fabrycky et al. 2014; Weiss et al. 2018; Millholland et al. 2017; Adams 2019; Adams et al. 2020; Mulders et al. 2020; He et al. 2019; He et al. 2021; Mishra et al. 2021; Adibekyan et al. 2021; Millholland & Winn 2021; Winter et al. 2020). Analysing multi-body, system-level physics may allow us to understand whether planetary systems are self-organising emergent structures, i.e. whether global-level patterns emerge from local-level interactions. Inspired by the peas-in-a-pod architecture (Weiss et al. 2018; Millholland et al. 2017; Mishra et al. 2021), we introduced a new framework for studying the architecture of planetary systems (Mishra et al. 2023; hereafter Paper I). Studying the architecture as a global, system-level phenomenon, this framework allows us to characterise, quantify, and compare the architecture of individual planetary systems.
Four classes of planetary system architecture emerged from this framework. These classes are labelled similar, mixed, anti-ordered, and ordered, depending on the arrangement and distribution of planets around the host star. The key idea behind this framework is that the arrangement and distribution of planets contain additional information that cannot be extracted by studying single planets individually. Hints of the presence of this additional information were revealed in some works (Tremaine 2015; Laskar & Petit 2017; Kipping 2018; Mishra et al. 2019; Gilbert & Fabrycky 2020; Sandford et al. 2021).

Explaining the formation, evolution, and final assembly of planetary systems remains an outstanding theoretical problem. Planet-formation physics spans astronomical orders of magnitude in mass, size, and time (Udry & Santos 2007; Armitage 2010). The processes occurring during planet formation convert gases and micron-sized dust particles from the protoplanetary disk into different kinds of planets, arranged in different architectures, over timescales of millions and billions of years. However, it remains unclear how initial conditions derived from the host star or protoplanetary disk combine with the formation and evolution processes to give rise to the observed exoplanetary systems. We are interested in understanding the role of nature versus nurture in sculpting the final planetary system, and the extent to which the character of the mature planetary system is influenced by its initial conditions. Kipping (2018) suggested, using an entropy-like formulation for planetary systems, that the initial conditions of planet formation could be inferred from their present-day architecture. However, the presence of stochastic processes makes it difficult to connect the initial conditions with the final system. It is also unclear whether stochastic physical processes can erase all memory of initial conditions, or indeed leave their own impressions on the final architecture. Using ideas from machine-learning-based natural language processing, Sandford et al. (2021) showed that planetary systems are not randomly assembled.

arXiv:2301.02373v1 [astro-ph.EP] 6 Jan 2023
While it is clear that planetary systems are not identical copies of one another, the quest to quantify the similarity between planetary systems is a tantalising one.

In this paper, we investigate the formation pathways that lead to the four architecture classes. Due to the stochastic nature of this problem, understanding the formation of a single planetary system can be very complicated. For example, two systems with almost identical initial conditions may evolve into two completely different planetary systems: chaos arising from multi-body gravitational interactions may cause differing formation pathways for these two systems. However, some patterns emerge when studying planetary systems as part of an ensemble. These trends, as we show in this paper, help us to understand the role played by initial conditions and physical processes in shaping the architecture.
Figure 1 (bottom) summarises the main findings of this paper. We show that the effects of planet formation and evolution processes are imprinted in the system-level architecture. Figure 1 shows the formation pathways of the architecture classes that emerge due to the system-level approach of our architecture framework (Fig. 1, top). This Sankey diagram has nodes for protoplanetary disk gas mass, protoplanetary disk solid mass, metallicity, and planetary architecture. We find that the formation of similar planetary systems is dominated by initial conditions. If the initial conditions disfavour the formation of the similar architecture, the other three architectures may emerge. Whether the final architecture is mixed, ordered, or anti-ordered seems to depend on the stochastic formation processes.
Increasing dynamical interactions (disk–planet, planet–planet) generally tends to produce mixed, anti-ordered, and then ordered architectures, respectively.

We first summarise the architecture framework and some results from Paper I in Sect. 2. We study the role of nature (initial conditions) and nurture (dynamical processes) in Sects. 3 and 4, respectively. In these sections, we study the influence of protoplanetary disk mass, metallicity, protoplanetary disk lifetime, planet–disk interactions, planet–planet interactions, and N-body interactions on the final architecture of simulated planetary systems. We summarise our results, suggest possible future studies, and conclude this paper in Sect. 6.

2. Summary of Paper I and the Bern model

2.1. Architecture framework

The arrangement of multiple planets and the collective distribution of their physical properties around the host star(s) characterise the architecture of a planetary system (Mishra et al. 2021).

Fig. 1. The four classes of planetary system architecture and their emergent formation pathways. Top: Reproduced from Paper I. Schematic diagram depicting the four classes of planetary system architecture: similar, anti-ordered, mixed, and ordered. Depending on how a quantity (such as mass or size) varies from one planet to another, the architecture of a system can be identified. The framework is model independent. Bottom: Emergence of formation pathways. Sankey diagram depicting the emergence of the formation pathways of the architecture classes. The thickness of the links and nodes is proportional to the relative number of synthetic systems in our simulation. This result is derived from synthetic planetary systems formed around a solar-mass star via the Bern model. Disk gas mass and metallicity are binned at their median values.

To quantify the architecture of a planetary system, we developed a novel model-independent framework in Paper I.
Some key aspects of this framework are briefly summarised here, and we refer the reader to Sect. 3 of Paper I for details. Conceptually, the framework defines four classes of planetary system architecture: similar, mixed, anti-ordered, and ordered.

L. Mishra et al.: Architecture Framework II – Nature versus nurture: Emergent formation pathways of architecture classes

Consider a planetary quantity (such as mass, radius, etc.) as a function of the distance of the planet to the host star (see Fig. 1).
When all planets in a system have similar values of a planetary quantity, the architecture of such systems is similar. When the planetary quantity increases with increasing distance, the system is said to exhibit an ordered architecture. Alternatively, if the quantity shows an overall decreasing trend with increasing distance, the architecture is considered to be anti-ordered. Finally, the planetary quantities could also show variations that are not captured in the three classes above: a mixed architecture may depict large, increasing, and decreasing variations with distance. By studying the variation of a planetary quantity with distance for all planets in the system, our framework captures the arrangement and distribution of planets in the system. The architecture of a system is quantified via two coefficients: the coefficient of similarity, CS(qi), and the coefficient of variation, CV(qi).
Here, qi represents a planetary quantity (e.g. mass, radius, eccentricity, density) for the ith planet. When the coefficients are calculated using planetary masses, they inform us about the mass architecture of a system, that is, the arrangement and distribution of mass in a given system. Likewise, we can study the radius architecture, density architecture, water-mass-fraction architecture, eccentricity architecture, and so on. The versatility of our architecture framework lies in its ability to allow us to study the multifaceted architectures of a planetary system. In Paper I, we explored the relationship between these different kinds of architectures. As in Paper I, we identify the architecture of a system by its bulk mass architecture.
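The coefficient of variation follows its standard statistical definition (population standard deviation divided by the mean), whereas the coefficient of similarity has a paper-specific definition (eq. 8 of Paper I) that is not reproduced here. As a rough illustration only, with hypothetical toy systems, a minimal sketch of CV:

```python
from statistics import mean, pstdev

def coefficient_of_variation(q):
    """Standard coefficient of variation, CV = (population std) / mean.

    `q` lists one planetary quantity (e.g. masses) for every planet
    in a system, ordered by distance from the host star.
    """
    return pstdev(q) / mean(q)

# Hypothetical toy systems (planet masses in Earth masses; illustrative only).
nearly_equal_masses = [1.0, 1.1, 0.9]      # small spread, similar-like
steeply_growing_masses = [0.5, 5.0, 50.0]  # large spread

# A small CV indicates planets with similar values of the quantity.
print(coefficient_of_variation(nearly_equal_masses))     # ≈ 0.082
print(coefficient_of_variation(steeply_growing_masses))  # ≈ 1.21
```

A low CV alone flags a similar-like mass distribution; distinguishing ordered, anti-ordered, and mixed classes additionally requires the similarity coefficient and the classification scheme of Paper I.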
Calibrated on planetary masses, a classification scheme to identify the architecture class was proposed in Paper I (eq. 8). The CS versus CV plane represents the architecture space for planetary systems (Fig. 3 in Paper I). This new parameter space was found to be endowed with a curious mathematical property, namely that planetary systems cannot occupy all parts of the architecture plane, as some regions of this parameter space are mathematically forbidden. To understand the implications of this architecture framework, we applied it to several catalogues in Paper I. These included 41 observed multi-planetary systems and numerically simulated systems via population synthesis using the Generation III Bern model (Emsenhuber et al. 2021a,b).

2.2. Bern model

For the synthetic planetary systems, as the initial conditions and the physical processes are known, it is possible (and desirable) to understand how different architecture classes are formed. As this paper is dedicated to planet formation and its imprints on architecture, we briefly review the ingredients of the Bern model here. Readers interested in further details of this model are referred to the recent NGPPS series of papers (Emsenhuber et al. 2021a,b; Schlecker et al. 2021a; Burn et al. 2021; Schlecker et al. 2021b; Mishra et al. 2021). The historic development of the Bern model may be traced through the works of Alibert et al. (2004, 2005); Mordasini et al. (2009); Alibert et al. (2011); Mordasini et al. (2012a,b); Alibert et al. (2013); Fortier et al. (2013); Marboeuf et al. (2014b); Thiabaud et al. (2014); Dittkrist et al. (2014); Jin et al. (2014), and is reviewed in Benz et al. (2014); Mordasini (2018).
Based on the core-accretion paradigm (Pollack et al. 1996), the Bern model is a global model of planet formation and evolution. The model studies the growth of several lunar-mass protoplanetary embryos embedded in protoplanetary disks (consisting of a gaseous and a solid phase) around a solar-type star. The disk model is based on viscous angular momentum transport (Lynden-Bell & Pringle 1974; Veras & Armitage 2004; Hueso & Guillot 2005). Turbulence is characterised by the Shakura & Sunyaev (1973) approach. The initial mass of the solid disk depends on the metallicity of the star and also on the condensation state of the molecules in the disk (Thiabaud et al. 2014).
The solids in the disk are composed of a swarm of rocky and icy planetesimals. The solids in the disk evolve via (a) accretion by growing planets, (b) interaction with the gaseous disk, (c) dynamical stirring from planets and other planetesimals, and so on (Fortier et al. 2013). The 1D geometrically thin disk evolution is studied out to 1000 au. This star–disk–embryo numerical system is endowed with several physical processes, which occur simultaneously and in a self-consistently coupled way. Some of these physical processes are: stellar evolution (Baraffe et al. 2015), interactions between the viscous protoplanetary disk and the star (Lynden-Bell & Pringle 1974; Shakura & Sunyaev 1973; Clarke et al. 2001; Matsuyama et al. 2003; Veras & Armitage 2004; Nakamoto & Nakagawa 1994; Hueso & Guillot 2005), condensation of volatile and/or refractory species (Marboeuf et al. 2014b,a; Thiabaud et al. 2014), planet formation physics (Alibert et al. 2013; Fortier et al. 2013; Mordasini et al. 2012b), orbital and tidal migration (Coleman & Nelson 2014; Paardekooper et al. 2011; Dittkrist et al. 2014), gravitational N-body interactions (Chambers 1999; Alibert et al. 2013; Emsenhuber et al. 2021a,b), atmospheric escape (Jin et al. 2014), bloating (Sarkis et al. 2021), and so on (see Fig. 1 in Mishra et al. 2019 for a schematic diagram). In addition, the model also calculates the internal structure of all planets, assuming them all to be spherically symmetric.

In the synthetic planetary population we use in the present work, some initial conditions are fixed: we use a 1 M⊙ star and a disk viscosity α = 2 × 10−3, describe the initial shape of the gas and planetesimal disks via power laws (Veras & Armitage 2004), and adopt a planetesimal size of 300 m with a fixed density (rocky 3.2 g cm−3, icy 1 g cm−3). We add 100 protoplanetary embryos to the protoplanetary disk and ensure that no two embryos start within 10 Hill radii of each other (Kokubo & Ida 1998, 2002).
This model is then run 1000 times while varying other initial conditions. We varied the initial gas mass in the protoplanetary disk, the disk lifetime, the stellar metallicity, the disk inner edge, and the initial locations of the protoplanetary embryos (for details see Emsenhuber et al. 2021b). The Bern model includes a significant variety of physics and uses plausible choices of initial conditions, which are motivated by observations. However, it is only a simplified, low-dimensional approximation of our current understanding of planet formation. For example, we model planet formation via core accretion only and ignore other mechanisms, such as disk instability (Schib et al. 2021). Among others, we also assume that the dust-to-gas ratio is the same for both the host star and the disk, and that all dust in the disk is aggregated into planetesimals.
The N-body interactions are tracked for only 20 Myr, which may be inadequate to capture dynamical effects occurring in the outer parts of the system. The assumptions, choices, and simplifications made in this model may have a strong impact on the outcome of this paper. Nevertheless, exploring the implications of our architecture framework using synthetic populations via the Bern model is a necessary first step. The main result of this paper is not in understanding the formation of any single planetary system but in showing that, for different architecture classes, discernible patterns of formation pathways emerge. Future studies could apply our architecture framework (from Paper I) with other planet formation models.

A&A proofs: manuscript no. 44705corr

If the formation pathways for the different architecture classes were found to remain the same after using different formation models, then our results would be strengthened and become more robust.
3. Nature: Role of star and disk initial conditions

In this section, we study the connection between the initial conditions and the final architecture of a system. We begin by counting the number of different architecture classes that emerge from our population synthesis as a function of the various initial conditions that are varied. The role of varying disk masses and stellar metallicities is presented in Sect. 3.1, and that of varying disk lifetimes in Sect. 3.2. For completeness, we measure the relative count for an architecture class within a bin by dividing the number of systems of a particular architecture class in the bin by the total number of systems in that bin.
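The per-bin bookkeeping described above can be sketched as follows; the function name and the toy data are hypothetical, and the sketch assumes only the stated definition (class count in a bin divided by the bin total):

```python
from bisect import bisect_right
from collections import Counter, defaultdict

def relative_counts(values, classes, bin_edges):
    """Relative count of each architecture class per bin:
    (# systems of that class in the bin) / (total # systems in the bin).

    values  - the binned initial condition (e.g. disk gas mass), one per system
    classes - the architecture class label of each system
    """
    per_bin = defaultdict(Counter)
    for v, c in zip(values, classes):
        per_bin[bisect_right(bin_edges, v)][c] += 1  # bin index of this system
    return {b: {c: n / sum(cnt.values()) for c, n in cnt.items()}
            for b, cnt in per_bin.items()}

# Hypothetical toy population: disk gas masses and resulting architecture classes.
masses = [0.01, 0.02, 0.03, 0.10, 0.15, 0.20]
labels = ["similar", "similar", "similar", "mixed", "anti-ordered", "anti-ordered"]
print(relative_counts(masses, labels, bin_edges=[0.05]))
# → {0: {'similar': 1.0}, 1: {'mixed': 0.33..., 'anti-ordered': 0.66...}}
```

By construction the relative counts within each bin sum to one, which is what lets the four classes be compared across bins of very different population.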
We emphasise that, as in Paper I, the architecture of a system is identified with its bulk mass architecture. Thus, when we refer to a similar or ordered system, we are referring to a system whose bulk mass architecture is similar or ordered, respectively.

3.1. Protoplanetary disk: Mass and stellar metallicity

Figure 2 (upper left) shows the dependence of the architecture class relative counts on the initial mass of gas in the protoplanetary disk. Over 96% of all disks that started with gas masses ≲ 0.04M⊙ give rise to planetary systems of similar architecture. About 1% of these low-mass disks lead to each of the other three architecture classes. The relative count of systems with similar architecture shows a clear decreasing trend with increasing mass in the disk gas.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' The production of the remaining three architecture classes tends to increase with increasing disk gas mass, but with dis- tinct trends.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' As the mass in the gas disk increases, the relative count of mixed architectures increases first, and then decreases for gas mass ≳ 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content='12M⊙.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' The relative count for both anti-ordered and ordered architectures continues to increase with increasing disk mass.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' Anti-ordered architectures become the most common outcome from large disks with gas mass ≳ 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content='12M⊙.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' In Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' 2 (upper right), we see the binned relative count of different architecture classes as a function of the mass of the solids in the protoplanetary disk.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' This plot shows some of the same features that we saw in Fig.' 
About 99% of all disks with solid masses ≲ 200 M⊕ give rise to similar planetary systems. The production of the similar architecture decreases as the mass of solids in a disk is increased. Before continuing, we note that this is already a result of considerable importance. The physical processes encoded in the Bern model are the same for all 1000 planetary systems. The only difference between these synthetic systems arises from the variations in their initial conditions. We are seeing that almost all low-mass disks give rise to only one architecture, the similar class. This occurs despite all the physical processes that can act upon a system and induce some architectural variation.
As we show below, the low mass of the disk limits some of the physical processes that sculpt a system's architecture. We conclude that the production of systems of the similar architecture class is dominated by initial conditions. Close to 60% of all observed systems in our multi-planetary systems catalogue are similar in their mass architecture (Paper I). For some of these similar-class systems (like Trappist-1, TOI-178, etc.), if their formation proceeded via core accretion, our work may place strong limits on the initial mass of their protoplanetary disks. The relative count of the other three architecture classes increases as the solid mass in the disk increases. The production of mixed architectures peaks around disks of ≈ 1 MJ and then decreases. The prevalence of anti-ordered and ordered architectures continues to increase with increasing disk mass.
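The binned relative counts discussed above can be reproduced schematically. The sketch below assumes a hypothetical list of (disk property, architecture class) pairs for the synthetic population; the per-bin uncertainty 100/√(bin counts) follows the definition given in the caption of Fig. 2.

```python
import math
from collections import Counter

CLASSES = ("Similar", "Anti-Ordered", "Mixed", "Ordered")

def relative_counts(systems, edges):
    """Bin (property_value, class) pairs by property value and return, per bin,
    the percentage of each architecture class and the 100/sqrt(N) error bar."""
    results = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = [cls for value, cls in systems if lo <= value < hi]
        n = len(in_bin)
        counts = Counter(in_bin)
        percents = {c: 100.0 * counts[c] / n if n else 0.0 for c in CLASSES}
        error = 100.0 / math.sqrt(n) if n else float("nan")
        results.append({"bin": (lo, hi), "n": n, "percent": percents, "error": error})
    return results

# Toy population (hypothetical numbers): low-mass disks mostly yield the Similar class.
toy = [(0.01, "Similar")] * 96 + [(0.01, "Mixed")] * 4 + [(0.14, "Anti-Ordered")] * 3
bins = relative_counts(toy, [0.0, 0.08, 0.16])
print(bins[0]["percent"]["Similar"])  # 96.0
```

This is only an illustration of the bookkeeping; the actual population synthesis and binning are performed on the Bern model output.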
For the most massive disks, the anti-ordered architecture is the most common outcome. Figure 2 (middle left) shows the relative count of each architecture class in the synthetic population as a function of stellar metallicity. Figure 2 (middle right) shows the same for the 41 observed multi-planetary systems. The selection criteria for our observed catalogue are detailed in Paper I. We find an interesting correlation between the metallicity and the architecture of a system, hereafter referred to as the metallicity–architecture correlation, and note the following trends. Over 98% of all systems with [Fe/H] < −0.2 are of the similar type. The relative count of the similar architecture decreases as the metallicity is increased.
The relative counts of the other three architecture classes are below 5% for metallicities ≤ −0.2. At different rates, the relative counts of the mixed, ordered, and anti-ordered classes increase with increasing metallicity. Our catalogue of observed planetary systems shows an encouragingly similar trend, although it suffers from detection biases and incompleteness. One way in which these limitations manifest is that we do not find any observed example of the anti-ordered architecture. The qualitative trend for the relative count of observed system architectures as a function of their stellar metallicity agrees with our synthetic systems. For example, the relative count of similar observed systems decreases with increasing metallicity.
The relative count of ordered architectures continues to increase with increasing metallicity. To understand the origin of these correlations, we study the relation between the initial disk mass (both in solids and gas), the stellar metallicity, and the final architecture of the systems in our model. In the Bern model, the initial solid mass of the disk is a fraction of the initial gas mass of the disk. This fraction is correlated with the dust-to-gas ratio, which also depends on the gas mass itself because the locations of the different icelines depend on it. By simulating systems with varying dust-to-gas ratios (fD/G), we simulate systems around stars with different metallicities. This is due to the following relation:

10^[Fe/H] = fD/G / fD/G,⊙ , with fD/G,⊙ = 0.0149 (Lodders 2003). (1)

The metallicities in our simulations vary from −0.6 to 0.5, following Santos et al. (2005). Figure 2 shows the solid disk mass as a function of the gas disk mass (bottom left) and the total mass in the planets as a function of the solid disk mass (bottom right). Each point represents one planetary system, and the shape and colour of the marker show its final architecture. These two plots help us understand the correlations discussed above. The bottom left panel of Fig. 2 shows the relationship between gas disk mass, solid disk mass, metallicity, and the final architecture of the system. Generally, when the mass of solids in a disk is ≳ 1 MJ (≈ 318 M⊕), the production of architectures other than similar is triggered.
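The relation between stellar metallicity and dust-to-gas ratio in Eq. (1) can be expressed directly in code; the snippet below is a minimal transcription of that relation, with the solar value fD/G,⊙ = 0.0149 taken from Lodders (2003).

```python
import math

F_DG_SUN = 0.0149  # solar dust-to-gas ratio, f_D/G,sun (Lodders 2003)

def dust_to_gas(feh):
    """Dust-to-gas ratio f_D/G for a star of metallicity [Fe/H], from Eq. (1)."""
    return F_DG_SUN * 10.0 ** feh

def metallicity(f_dg):
    """Invert Eq. (1): the [Fe/H] implied by a dust-to-gas ratio f_D/G."""
    return math.log10(f_dg / F_DG_SUN)

print(round(dust_to_gas(0.0), 4))  # 0.0149 at solar metallicity
print(dust_to_gas(-0.6) < dust_to_gas(0.5))  # metal-poor stars get less dust
```

The sampled range −0.6 ≤ [Fe/H] ≤ 0.5 thus corresponds to roughly a factor of 12 spread in dust-to-gas ratio.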
We note that up to a certain gas disk mass (≲ 0.02 M⊙), all disks lead to the similar architecture irrespective of the metallicity. For heavier gas disks (≳ 0.02 M⊙), metallicities begin to play a role. If the gas disk mass is high enough, even low metallicities (≈ −0.2) can trigger the production of architectures other than the similar class. However, for lower gas disk masses, higher metallicities are required to produce about 1 MJ of mass in the solid disk.

Article number, page 4 of 12. L. Mishra et al.: Architecture Framework II – Nature versus nurture: Emergent formation pathways of architecture classes
[Figure 2 panels: binned relative counts of the Similar, Anti-Ordered, Mixed, and Ordered classes versus protoplanetary disk gas mass (M⊙), disk solid mass (M⊕), and stellar metallicity [Fe/H], for the Bern model and for observations; scatter plots of disk solid mass versus gas mass (with [Fe/H] = 0.5 and −0.6 lines and the 1 MJ ≈ 318 M⊕ line) and of total mass in planets versus disk solid mass (with 10% and 100% solid-accretion efficiency lines).]
Fig. 2. Role of disk mass and the metallicity–architecture correlation. The top two rows show the binned relative count of each architecture class as a function of initial disk gas mass (upper left), disk solid mass (upper right), stellar metallicity in the synthetic population (middle left), and stellar metallicity in observed systems (middle right). The length of the error bars corresponds to the total number of systems in each bin as 100/√(bin counts). In the bottom panels, each point corresponds to a single planetary system. The system architecture is indicated by the colour and shape of the marker. The bottom left panel shows the solid mass in the disk as a function of the disk gas mass. The two diagonal lines convey the role of stellar metallicity.
The dashed horizontal line indicates the mass of Jupiter. The bottom right panel shows the total mass in planets as a function of the solid mass in the protoplanetary disk. The two diagonal lines indicate the efficiency of converting solids from the disk into planets. If the planets in a hypothetical system could accrete all the solid mass of their disk, and these planets had no gaseous atmospheres, then such a system would lie on the diagonal line corresponding to 100% accretion efficiency. The dashed vertical line indicates the mass of Jupiter.

A&A proofs: manuscript no. 44705corr

It is clear that the mass in the solids of the protoplanetary disk plays an essential role here. The bottom right panel of Fig. 2 explains this statement.
The total mass in the planets increases as the mass of solids in the disk increases. When the mass of solids in the disk is ∼ 1 MJ, the distribution of the total mass in planets shows a jump. This is because massive planets can begin to accrete significant amounts of gas. For the core-accretion scenario, this plot suggests that similar architectures occur for low-mass disks because they cannot produce massive giant planets. Gas giants are very effective in inducing dynamical stirring, which is in turn responsible for shaping the system architecture. This signifies the role played by physical processes in producing the mixed, anti-ordered, and ordered architectures¹.

3.2. Lifetime of the protoplanetary disk

In this section, we explore the role of disk lifetime (i.e. the age of a protoplanetary disk) in defining the final architecture class of a system. The lifetime of a disk, in the Bern model, is influenced by the external disk photo-evaporation rate (see Emsenhuber et al. (2021a) for details) and the mass of the disk. Figure 3 (left) shows the binned relative count of system architectures as a function of disk lifetime. About 80% of all disks with lifetimes ranging from 1 to 5 Myr produce systems of the similar architecture class. The relative count of similar systems decreases as the disk lifetime increases. The relative count of the mixed architecture does not show any significant variation with disk lifetime. The relative counts of the anti-ordered and ordered architectures vary as the disk lifetime increases.
This suggests that the physical mechanisms by which disks shape the final architectures of systems play a role in shaping the similar, anti-ordered, and ordered architectures. The trends of the relative counts of the architecture classes with disk lifetime are similar to the distributions of relative counts as functions of disk mass. We would like to understand whether system architecture is influenced by disk lifetime directly or via an inherent dependence of disk lifetime on disk mass. The right panel of Fig. 3 shows the gas disk mass as a function of disk lifetime. The scatter plot depicting each individual disk shows that, generally, low-mass disks have short lifetimes. The solid lines depict the average gas mass for each architecture class in each disk-lifetime bin. The gas mass of the disks that go on to form systems of mixed, anti-ordered, or ordered architecture shows a weak dependence on disk lifetime.
On average, the more massive disks seem to last longer. For disks that give rise to the similar architecture class, this trend is clearly visible. If more massive disks also live longer, this partly explains the relative count distribution seen in Fig. 3 (left). However, disks also affect the planetary architecture in other interesting ways, namely through orbital migration and eccentricity and inclination damping. We study the effect of these planet–disk interactions in shaping system architecture in Sect. 4.1.

¹ The architecture framework is not sensitive to the absolute value of a planetary quantity, such as mass, but only to the ratio of the quantities for adjacent planets.
Independent of the architecture framework, we will present another system-level framework analysing the state of a planetary system. This other classification framework is sensitive to the absolute mass of a planet and will address the role of giant planets in system-level properties. The state classification framework reveals a drastic difference between systems with and without giant planets (Mishra et al., in prep.).

4. Nurture: Role of dynamical stirring

Whether or not the final architecture of a planetary system is pre-determined by its initial conditions from the host star and the protoplanetary disk remains unclear. If not, the mechanism by which dynamical processes shape the architecture of a planetary system remains to be determined.
It also remains unclear whether dynamical processes remove all traces of the initial conditions from the final system, or whether these stochastic processes leave their impressions on the final architecture. In this section, we try to answer these questions. We focus our attention on the dynamical interactions between planets and the protoplanetary disk, and on the gravitational multi-body interactions amongst the planets themselves. While there exist several dynamical mechanisms that shape the final architecture, we simplify the task before us by concentrating on violent dynamical instabilities that change a planetary system in a non-trivial manner. For each synthetic planetary system, we count the number of planet–planet mergers, planetary ejections, and planets falling into their host star. We use these counts as a proxy to assess the strength of the dynamical interactions that occur in a system. In the subsequent subsections, we study planet–disk interactions and planet–planet interactions (mergers, ejections, stellar accretion).
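The event-counting proxy described above amounts to simple per-system bookkeeping. The sketch below assumes a hypothetical per-system record of logged violent events; the field names are illustrative, not the Bern model's actual output format.

```python
from dataclasses import dataclass

@dataclass
class SystemHistory:
    """Hypothetical record of the violent events logged for one synthetic system."""
    mergers: int             # planet–planet collisions
    ejections: int           # planets ejected from the system
    stellar_accretions: int  # planets falling into the host star

def violence_proxy(history):
    """Total count of violent dynamical events, used as a proxy for the
    strength of the dynamical interactions in a system."""
    return history.mergers + history.ejections + history.stellar_accretions

quiet = SystemHistory(mergers=0, ejections=0, stellar_accretions=0)
stirred = SystemHistory(mergers=5, ejections=2, stellar_accretions=1)
print(violence_proxy(quiet), violence_proxy(stirred))  # 0 8
```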
These dynamical effects give rise to stochasticity and are thereby inherently unpredictable. However, we hope that the underlying dynamical processes that are sculpting the system architecture emerge as patterns in the counts of these violent events.

4.1. Planet–disk interactions

Protoplanetary disks interact with planets via several mechanisms. Planets may experience orbital migration via gravitational interactions with the disk. Low-mass planets undergo type I migration, which in the Bern model is implemented following the approaches of Coleman & Nelson (2014); Paardekooper et al. (2011).
Massive planets may open a gap in the disk and undergo type II migration (Dittkrist et al. 2014). The disk also dampens the eccentricity and inclination of planets, which is coherently applied within the N-body integrator. Readers interested in the details of the implementation are referred to Emsenhuber et al. (2021a,b). Figure 4 (left) shows the count of mergers and ejections for each planetary system in our synthetic population as a function of the lifetime of its protoplanetary disk. For an easier visualisation of any underlying trend, we also show the average merger and ejection counts for each disk lifetime bin. The number of planet–planet mergers shows a clear correlation with disk lifetime.
Disks that live longer usually give rise to planetary systems that undergo more mergers than short-lived disks. We refer to this correlation as 'migration-assisted mergers'. One possible explanation for this correlation could be that disks allow planets to migrate depending on their mass². Two adjacent planets that are not migrating at the same rate, perhaps owing to their different masses, can come close enough for a merger to occur. The number of ejections does not show any clear trend with disk lifetime. Disks dampen a planet's eccentricity and inclination. As ejection requires extremely violent interactions (marked by high eccentricities and inclinations), disks may essentially inhibit planetary ejections.

² There could be other scenarios which contribute to the 'migration-assisted mergers' correlation. For example, migration may allow planets to become more massive by accreting more material due to increased access to planetesimals (Alibert et al. 2005). Massive planets may interact more amongst themselves, leading to more mergers.

Article number, page 6 of 12. L. Mishra et al.: Architecture Framework II – Nature versus nurture: Emergent formation pathways of architecture classes

Fig. 3. Role of disk lifetime on system architecture. Left: Binned relative counts of architecture classes as a function of disk lifetime. The length of error bars corresponds to the total number of systems in each bin, as: 100/√(bin counts). Right: Scatter plot shows the disk gas mass as a function of disk lifetime. The solid lines show the binned average gas disk mass for each architecture class.

Fig. 4. Effect of planet–disk interactions on architecture. Left: Scatter plot shows the number of planet–planet mergers and planetary ejections that occurred in systems as a function of disk lifetime. The solid lines show the average counts for each disk lifetime bin. Right: Distribution of the total number of mergers (dashed) and ejections (solid) for the entire synthetic population. The black line depicts the nominal synthetic population, and the red line depicts a different synthetic population in which the disk–planet interactions were artificially switched off.

To test these ideas, we simulated another population of 1000 planetary systems. In this population (NG140), planet–disk interactions (gas-driven migration, and eccentricity and inclination damping) are artificially switched off. For all such systems, we count the number of mergers and ejections and compare them with our nominal population. Figure 4 (right) shows the distribution of the number of planet–planet mergers and planetary ejections in the two populations. As expected, the number of planet–planet mergers decreases (the distribution shifts to the left) when planet–disk interactions are switched off. This confirms the migration-assisted mergers correlation presented above. The distribution of ejections, on the other hand, shifts significantly towards more ejections when planet–disk interactions are switched off.
When the damping of the planetary eccentricity and inclination by the disk is switched off, the gravitational interactions between planets increase, such that many planets are ejected.

We make two observations from the results presented so far. First, counts of mergers and ejections seem to be a good proxy for the prevalence of dynamical interactions, as they capture some of the well-established dynamical effects concerning planet–disk interactions. Second, we observe that disks affect system architecture in a multitude of ways. While disk mass shows a direct relation to final architecture, disks also affect system architecture indirectly by influencing the dynamical interactions that occur therein. Long-living disks give rise to more mergers and inhibit planetary ejections.

A&A proofs: manuscript no. 44705corr
Conversely, systems emerging from short-lived disks experience fewer mergers.

4.2. Planet–planet interactions

Above, we show that planet–disk interactions in the Bern model may influence the dynamical interactions occurring in a system. Now, in this section, we are interested in understanding how these violent events shape the final architecture of a system. Planets interact with each other gravitationally. These multi-body interactions are tracked via an N-body integrator in the Bern model. The end result of some of the more violent interactions is that planets are lost via one of several channels: planet–planet mergers³, planetary ejections, accretion by the host star, and so on. These channels allow a planetary system to fundamentally alter itself and its architecture.
Figure 5 shows, for each architecture class, the distribution of planet–planet mergers and the number of planets lost via ejections and stellar accretion. At first glance, losing planets to the host star may not seem appropriate for planet–planet interactions. However, many of these planets meet their fate, in the Bern model, when they are pushed inwards after being captured in mean-motion resonances with other planets⁴. Therefore, this channel of losing planets is included here. We caution the reader that the absolute number of planets lost via any channel is model-dependent. The quantity of interest here is the relative difference between the different architecture classes. Figure 5 suggests that the similar architecture class is almost completely shaped by planet–planet mergers.
Most similar systems in our simulations have between 40 and 80 mergers taking place within them, and the median number of mergers is 63. Violent dynamical interactions that lead to the ejection of planets seem to be very rare in this architecture type, as 100% of all similar systems lose fewer than five planets via planetary ejection (the median number of ejections is 0). Likewise, similar systems seem to not rely on the stellar accretion channel for losing planets (the median number of stellar accretions is 0). Systems with mixed architecture also undergo many planet–planet mergers. The number of mergers in mixed systems ranges from 50 to 85, and the median number of mergers is 70. In clear contrast to similar architectures, the ejection and stellar accretion channels play an important role for mixed systems. The median number of planets lost via ejections is 7, and via stellar accretions is 2.
Anti-ordered systems utilise all three dynamical channels. The distribution of mergers in anti-ordered systems is roughly similar to that of mixed systems. The range is between 50 and 85, and the median number of mergers is 67. However, anti-ordered systems tend to lose more planets via the ejection channel. The number of planets lost via dynamical ejection ranges from 0 to 35, with a median value of 14.5. Compared to mixed systems, anti-ordered systems also tend to lose more planets via stellar accretion (the median is 6).

Amongst the four architecture classes, ordered systems seem to undergo the greatest number of dynamical interactions. The distribution of planet–planet mergers in ordered systems shows a tail-like feature. The number of mergers ranges from 55 to 85, with 62 being the median. All ordered systems eject at least five planets. The number of ejections ranges from 5 to 35, and the median is 23. The distribution of planets lost via the stellar accretion channel shows a shift to the right. The number of planets accreted by the star ranges from 0 to 20, with 8 being the median.

³ In our model, when the distance between two planets becomes smaller than the sum of their radii, a planet–planet collision is said to occur. We treat such merger events in a simplified manner: the cores of the target–impactor pair are merged, the less massive body loses its envelope, and the impact energy is added to the merged new body following Broeg & Benz (2012), which determines what part of the gaseous envelope is ejected.

⁴ The model also includes inward migration of planets as a result of stellar tides.
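The collision criterion of footnote 3 is purely geometric: a merger is flagged the moment the planet–planet separation drops below the sum of the two physical radii. A minimal sketch of that test (hypothetical names and units; the envelope-loss energetics of Broeg & Benz 2012 are not reproduced here):

```python
import math

def is_collision(pos1, pos2, radius1, radius2):
    """Planet-planet collision criterion from footnote 3: a collision is
    said to occur when the distance between the two planets is smaller
    than the sum of their radii. Positions are 3D coordinates in the
    same length units as the radii."""
    separation = math.dist(pos1, pos2)  # Euclidean distance between centres
    return separation < radius1 + radius2

# Two planets of radius 0.5 each, with centres 0.8 apart, overlap:
print(is_collision((0.0, 0.0, 0.0), (0.8, 0.0, 0.0), 0.5, 0.5))  # True
```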
A comprehensive picture of the role of dynamical history in shaping the final architecture emerges from the four panels in Fig. 5. Similar systems tend to rely only on the merger channel for shaping their system architecture. As planetary systems in all four architecture classes undergo a considerable number of mergers, this channel may not suffice to explain or distinguish the emergence of the four architecture classes. This is in line with what was found before, namely that the emergence of the similar class is mostly governed by the initial conditions. While initial conditions seem to decide whether a system becomes similar or one of the other three architectures, there appears to be a trend in the role of dynamical interactions in shaping mixed, anti-ordered, and ordered architectures. The distributions of the ejection and accretion channels distinguish these three architectures.
These distributions show a shift to the right, indicating that more planets are being lost via these two channels as we move from mixed to anti-ordered and to ordered architectures. Thus, we conclude that if initial conditions do not allow a system to become similar, its fate is decided by its dynamical history, among other effects. If the strength of the dynamical interactions increases in a system, the architecture of the system changes from mixed to anti-ordered or to ordered. All systems in the Bern model start with 100 protoplanetary embryos. Above, we show that systems of different architecture show varying propensity to lose planets via the different dynamical channels. This suggests that we should also see an effect of the dynamical history of the four architecture classes in their multiplicity distribution. We observed this effect in Fig. 6 of Paper I.
We do not have a way to determine the initial number of embryos of the planetary systems we observe today. Our approach may therefore not be directly applicable to observed planetary systems. We remind the reader that while the quantitative aspects we present in this section are probably model-dependent, the qualitative nature of these results is of paramount importance.

5. The Aryabhata formation scenario

In this section, we propose a planet-formation scenario to explain a feature observed by Paper I (Sect. 5.4). We found that many synthetic planetary systems have a peculiar water-mass-fraction architecture, namely that all planets hosted in these systems are water-rich worlds. We explain this peculiar feature with the 'Aryabhata formation scenario'.
The first exoplanets to be discovered were hot Jupiters — giant planets orbiting their host stars at very short periods (Mayor & Queloz 1995). Orbital migration was suggested as a possible mechanism to explain these short periods (Lin et al. 1996; Lin & Ida 1997). Theoretical studies indicate that orbital migration and planet–star tidal interactions should make many close-in planets unstable. In the 1990s, Doug Lin described ‘the last of the Mohicans’ scenario (Garaud 2011). In this scenario,

Article number, page 8 of 12
L. Mishra et al.: Architecture Framework II – Nature versus nurture: Emergent formation pathways of architecture classes
Fig. 5. [Histogram panels; x-axes: Planet Counts; y-axes: Distribution of systems [%]; panels: Similar, Mixed, Anti-Ordered, and Ordered Systems; channels: Merged, Ejected, Star Accreted.] Effect of planet–planet interactions on system architecture. For each architecture class, the panels show a histogram of the counts of planet–planet mergers, ejections, and stellar accretion occurring in the synthetic population. The y-axis in all panels is scaled to reflect the percentage of systems in each of the four architecture classes. For example, 100% of all similar systems lost less than five planets via planetary ejection.
the protoplanetary disk gives rise to planets, many of which are doomed to fall onto the star. The surviving observable planets are those that were able to escape annihilation. For some simulated systems, we noticed a modified version of this scenario. Protoplanetary disks seem to give rise to planets at different epochs. In the first epoch, several intermediate-mass planets (1 − 100 M⊕) are formed within the first 1 Myr. Most of these ‘first generation’ planets are subsequently lost mainly via giant impacts (and a few are lost via orbital or tidal migration leading to stellar accretion). This purging phase is catastrophic to all planets that started within the ice line. Over the next few million years, a second epoch sees the advent of a ‘second generation’ of planets.
Most of these second-generation planets are born outside the ice line, and are able to migrate inwards during the disk lifetime. After disk dissipation, migration comes to a halt and many of these planets survive long-term N-body evolution in our simulations. We call this the Aryabhata formation scenario. The key difference between the two scenarios is that in the Aryabhata formation scenario (a) planets (surviving and lost) are born in different epochs, and (b) most first-generation planets are lost via giant impacts. We quantify this scenario with the Aryabhata’s number, µ, which is the ratio of the surviving planets that started inside the ice line to the total number of surviving planets:

Aryabhata’s number: µ = n(a_embryo^start ≤ a_ice) / n.    (2)

At the start of our calculations, all systems have an Aryabhata’s number ≈ 0.5 ± 0.1.
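As a worked illustration of eq. (2), the Aryabhata’s number can be computed from the starting semi-major axes of the embryos of a system’s surviving planets. The sketch below uses our own hypothetical function and variable names and example values; it is not code from the paper:

```python
def aryabhata_number(start_axes, a_ice):
    """Eq. (2): fraction of surviving planets whose embryos
    started inside the ice line (a_start <= a_ice)."""
    n = len(start_axes)
    if n == 0:
        raise ValueError("system has no surviving planets")
    return sum(a <= a_ice for a in start_axes) / n

# Hypothetical system: four surviving planets, ice line at 2.7 au;
# two embryos started inside the ice line, so mu = 2/4 = 0.5.
mu = aryabhata_number([0.5, 1.2, 4.0, 9.0], a_ice=2.7)
```

A system produced by the Aryabhata formation scenario would instead have all starting axes beyond a_ice, and hence µ = 0.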
Figure 12 of Paper I (middle) shows the ice-mass-fraction architecture of simulated planetary systems. The colour of each point shows the Aryabhata’s number. Most planetary systems with CS(fice) ≈ CV(fice) ≈ 0 have µ close to zero. This suggests that most (or all) of the surviving planets in such systems started outside the ice line. The formation path of these systems falls into the Aryabhata formation scenario. These classes of systems can be identified by two characteristics: (i) the core water-mass fraction for different planets in these systems is similar, and (ii) the core water-mass fraction for most planets is high (owing to their origin outside the ice line), making them water-rich planets. Approximately one-fifth of the simulated systems fall into this scenario.
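The coefficient of variation used in this classification is the standard dispersion-to-mean ratio; CS, the coefficient of similarity, has its own definition in Paper I and is not reproduced here. A minimal sketch, with hypothetical core water-mass fractions of our own choosing (whether a population or sample standard deviation is used is our assumption):

```python
from statistics import mean, pstdev

def coefficient_of_variation(values):
    """CV = (population) standard deviation / mean; CV near 0 means
    the quantity is nearly identical across the system's planets."""
    return pstdev(values) / mean(values)

# Hypothetical water-rich system: every planet has a high and nearly
# identical core water-mass fraction, so CV(f_ice) is close to zero.
cv_fice = coefficient_of_variation([0.48, 0.50, 0.52, 0.50])
```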
Among these, about half are of similar class, one-third are anti-ordered, and the remaining systems have either a mixed or ordered mass architecture. There exists an almost linear relationship between CV(fice) and µ. Using scipy’s linear regression module, we obtain a slope of 1.8 and an intercept of 0.18 between these two quantities. The correlation coefficient is R = 0.95, indicating a strong correlation between the Aryabhata’s number and the coefficient of variation of core water-mass fraction. This suggests a possibility to identify observed exoplanetary systems that may have originated via the Aryabhata formation scenario. By determining the CV(fice) of a system, the Aryabhata’s number can be estimated.
Systems with low µ values probably arose from this scenario. For systems that fall into the default scenario (positive CS(fice), implying an increasing core water mass fraction inside-out), the Aryabhata’s number is µ > 0. We note that most systems with µ ⪆ 0.6 show similarity in their mass architecture.
Overall, the intra-system core water-mass-fraction architecture of most planetary systems seems to take one of two forms. (i) Those characterised by CS(fice) ≈ CV(fice) ≈ 0 and µ = 0. These systems are composed of water-rich planets wherein the core water mass fraction is similar across the different planets. All surviving planets in these systems started outside the ice line. The Aryabhata formation scenario explains these systems.
(ii) Those with CS(fice) > 0 and µ > 0. These systems represent the ‘default’ or common outcome of our simulations. The planetary core water-mass fraction in these systems increases from one planet to another with increasing distance from the host star. Some of the surviving planets started from inside the ice line. At the extreme end, systems in which 60% or more surviving planets started inside the ice line tend to have a similar mass architecture.

6. Summary, conclusions, and future work

Paper I of this series introduced a novel, model-independent framework for characterising the architecture of planetary systems at the system level. Planetary-system architectures can be separated into four classes: similar, mixed, anti-ordered, and ordered.
This classification is achieved via two quantities: the coefficient of similarity and the coefficient of variation. The mathematical CS versus CV architecture space was found to have forbidden regions – regions in which no planetary system can exist. In Paper I, the mass architecture classes of observed and synthetic systems were characterised. The mass architecture of synthetic systems was compared with their radii architecture, bulk-density architecture, core-mass architecture, spacing architecture, and water-mass-fraction architecture. As in Paper I, we identify a system’s architecture with its mass architecture.
In this paper, we explore the core-accretion-based formation pathways —around a solar-like star— of the four classes of planetary system architecture. We tried to disentangle the role of nature (initial conditions of planet formation) from that of nurture (physical processes occurring during planet formation).

A&A proofs: manuscript no. 44705corr

Our findings can be summarised as follows:
1. System-level analysis: Our findings show that a system-level analysis of planetary system architecture via our architecture framework (Paper I) provides an abundance of information. We show that planetary formation and evolution processes leave their imprint on the entire system architecture.
2. Solid disk mass: The initial amount of solids in the protoplanetary disk in our models plays an important role in deciding the architectural fate of a planetary system. Disks with a solid mass (initial content of planetesimals) of ≲ 1 MJ almost always give rise to systems with similar architecture. Mixed architectures arise most often from disks with solid masses ≈ 1 MJ.
Disks with solid mass ≳ 1 MJ favour the production of anti-ordered and ordered architectures.
3. Gas disk mass and metallicity: Initial gas disk mass and stellar metallicity influence the final architecture of a planetary system by controlling the initial mass of solids in the disk. Metallicity, in our models, is simply related to the dust-to-gas ratio, which allows us to convert a fraction of the initial gas disk mass into initial dust mass (eq. 1). Applying the architecture framework on the synthetic systems from the Bern model allows us to predict the existence of a metallicity–architecture correlation. The observed correlation between metallicity and final architecture is in qualitative agreement with the Bern model.
4. Metallicity–architecture correlation: The architecture of a planetary system correlates with the metallicity of the host star. Most systems hosted by a low-metallicity star (Fe/H < −0.2) are of similar architecture. As the metallicity of the star increases, mixed, ordered, and anti-ordered architectures become increasingly common.
5. Disk lifetime: The occurrence of systems of a similar architecture around short-lived disks is high, and their frequency reduces around long-lived disks. The frequency of anti-ordered architecture increases as disk lifetime increases. These correlations are mediated in at least two ways. First, disks interact with planets, where orbital migration and eccentricity and inclination damping occur.
Due to the ‘migration assisted merger’ correlation, long-lasting disks allow planetary systems to have, in general, more planet–planet mergers and inhibit planetary ejections. These dynamical events shape a system’s final architecture. In addition, in our model, disk lifetimes are correlated with disk masses, which also strongly influences the system architecture.
6. Dynamical interactions: Planetary systems can significantly alter their architecture via (at least) three dynamical channels: planet–planet mergers, planetary ejections, and accretion by the host star. All architecture classes in our formation model were found to undergo numerous merger events. Similar systems rely entirely on mergers to shape their final architecture.
As the strength of the dynamical interactions experienced by a system (quantified by the number of ejections and/or accretions) increases, the architecture of a system shifts from mixed to anti-ordered to ordered.

7. The Aryabhata formation scenario: Systems following this formation scenario share a common formation pathway. First-generation planets (formed within 1 Myr) are lost, mostly via giant impacts. Second-generation planets start outside the ice line and migrate inwards. The surviving planets are from the second generation and shape the architecture of the system. This scenario explains the roughly 20% of simulated systems in which the core water-mass-fraction architecture differs from the default scenario. Systems following this formation scenario (i) host only planets with a high core water-mass fraction and (ii) host only planets that started outside the ice line.
We introduce Aryabhata's number to identify systems that follow this formation scenario and find that 80% of all anti-ordered simulated systems form via the Aryabhata formation scenario.

8. Nature versus nurture: Overall, our model suggests that initial conditions, or 'nature', dictate whether a system will have a similar architecture or one of the other three architecture classes, namely mixed, anti-ordered, or ordered (via the initial disk mass). If nature does not allow a system to have a similar mass architecture, then the final architecture is controlled by 'nurture', i.e. dynamical interactions, among other possible effects. As the dynamical interactions increase, the final architecture tends to become mixed, anti-ordered, and then ordered.

We would like to offer readers a word of caution when interpreting our results. Although the architecture framework (from Paper I) is model-independent, the present results hinge critically on the underlying planet-formation model, the Bern model.
Several assumptions, simplifications, and choices must be made when simulating synthetic planetary systems with the Bern model. For example, the treatment of planet–planet merging collisions is relatively simple (Ali-Dib et al. 2022). We also assume simplified planet-formation conditions; that is, our star–disk–planet system is isolated enough that we may ignore the influence of the stellar neighbourhood, stellar flybys, and so on (Bate 2012, 2018). The main strength of this study does not lie in explaining the formation pathway of any particular system. Instead, our main result is the observation that, when groups of planetary systems are identified (architecture classes), general trends in formation pathways emerge. This allowed us to map the roles of nature and nurture in shaping the final architecture of a planetary system.
The results of this study can be strengthened or challenged in several observational and theoretical ways. We list some possibilities for future studies emerging from this work:

1. Linking disk mass distribution and architecture occurrence rates: Our model suggests a direct relationship between the mass of the solid disk and the final architecture of a system. While the initial disk mass and the final architecture of the same system can never be observed together, this relation can be tested statistically. The distribution of initial disk masses and the distribution of final system architectures can be linked by formation models. We speculate that, in the future, when these two distributions become available, formation models can be used to predict one from the other.
In fact, this problem can also be turned around: we can identify the right family of models as those that correctly link the observed distributions of protoplanetary disk masses and architecture occurrence rates. We believe such tests are crucial for the development and eventual emergence of a standard model for exoplanetary astrophysics.

2. Metallicity–architecture correlation: Our work suggests that the current architecture of a planetary system should be related to the metallicity of its host star. As both of these are observable, testing this metallicity–architecture correlation should be feasible. Here, we used a catalogue of 41 observed multi-planet systems (from Paper I) to test this correlation. We find a qualitative agreement between theory and observations. However, our observational catalogue suffers from incompleteness and low-number statistics, which prevents us from making any further assertions. More observational data are required to confirm or reject the proposed metallicity–architecture correlation. It would also be interesting to estimate the current architecture occurrence rates based on the known metallicity distributions.

L. Mishra et al.: Architecture Framework II – Nature versus nurture: Emergent formation pathways of architecture classes

3. Confirming formation pathways: Confirming the formation pathways discovered in the present study with observations is challenging. However, the strength of our results will increase if different planet-formation models are studied through the architecture framework.
Hence, one possible line of future work involves repeating the present study using different planet-formation models.

4. Extending the architecture framework: So far, we have calibrated our classification scheme for the mass architectures only. Calibrating the architecture classification framework on other quantities may be useful. Especially for planetary radii, which are observable via transit surveys, the use of machine-learning methods may be necessary.

5. Temporal evolution of system architecture: In the nominal Bern model population studied in this paper, 100 protoplanetary embryos of lunar mass are initialised in the protoplanetary disk at the start. This necessarily implies that all planetary systems start as similar-class systems.
It would be interesting to enquire whether this is generally true in nature as well. If so, the 'default' architecture of all planetary systems is similar, and the physical processes playing out in a system evolve this architecture into the other possibilities. Investigating this may lead to deep insights into the structure of planetary system architectures. In addition, such studies would be necessary to interpret the observed architecture occurrences, as observed planetary systems are seldom of the same age.

6. External perturbations: Stellar flybys or multi-planetary systems around binaries provide excellent theoretical and observational laboratories with which to study the influence of external perturbations on the architecture of planetary systems. This problem, when turned around, is also useful for deducing the perturbation history (or lack thereof) of observed planetary systems.
This paper presents new insights obtained by analysing planetary systems at the system level. We showed that several patterns emerge in the formation pathways of the four architecture classes. These patterns link the initial conditions of planet formation with the final architecture of a system, bridging the vast temporal gap of several billion years between the birth of planets and their final assembly.

Acknowledgements. This work has been carried out within the frame of the National Centre of Competence in Research PlanetS supported by the Swiss National Science Foundation. We acknowledge the support of the Swiss National Fund under grants 200020_172746 and 200021_204847 "PlanetsInTime". LM acknowledges the generous hospitality of the "Planet Formation" workshop by the Munich Institute for Astro-, Particle and BioPhysics (MIAPbP), which is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy – EXC-2094 – 390783311.
Data: The synthetic planetary populations (NGPPS) used in this work are available online at http://dace.unige.ch. Software: Python (Van Rossum & Drake 2009), NumPy (Oliphant 2006), Seaborn (Waskom & the seaborn development team 2020), Pandas (pandas development team 2020), Matplotlib (Hunter 2007).

References
Adams, F. C. 2019, MNRAS, 488, 1446
Adams, F. C., Batygin, K., Bloch, A. M., & Laughlin, G. 2020, MNRAS, 493, 5520
Adibekyan, V., Santos, N. C., Demangeon, O. D. S., et al. 2021, A&A, 649, A111
Ali-Dib, M., Cumming, A., & Lin, D. N. C. 2022, MNRAS, 509, 1413
Alibert, Y. 2019, A&A, 624, A45
Alibert, Y., Carron, F., Fortier, A., et al. 2013, A&A, 558, A109
Alibert, Y., Mordasini, C., & Benz, W. 2004, A&A, 417, L25
Alibert, Y., Mordasini, C., & Benz, W. 2011, A&A, 526, A63
Alibert, Y., Mordasini, C., Benz, W., & Winisdoerffer, C. 2005, A&A, 434, 343
Armitage, P. J. 2010, Astrophysics of Planet Formation
Baraffe, I., Homeier, D., Allard, F., & Chabrier, G. 2015, A&A, 577, A42
Bashi, D. & Zucker, S. 2021, A&A, 651, A61
Bate, M. R. 2012, MNRAS, 419, 3115
Bate, M. R. 2018, MNRAS, 475, 5618
Benz, W., Ida, S., Alibert, Y., Lin, D., & Mordasini, C. 2014, in Protostars and Planets VI, ed. H. Beuther, R. Klessen, C. Dullemond, & T. Henning (University of Arizona, Tucson), 691–713
Broeg, C. H. & Benz, W. 2012, A&A, 538, A90
Burn, R., Schlecker, M., Mordasini, C., et al. 2021, A&A, 656, A72
Chambers, J. E. 1999, MNRAS, 304, 793
Ciardi, D. R., Fabrycky, D. C., Ford, E. B., et al. 2013, ApJ, 763, 41
Clarke, C. J., Gendrin, A., & Sotomayor, M. 2001, MNRAS, 328, 485
Coleman, G. A. & Nelson, R. P. 2014, MNRAS, 445, 479
Dittkrist, K.-M., Mordasini, C., Klahr, H., Alibert, Y., & Henning, T. 2014, A&A, 567 [arXiv:1402.5969]
Emsenhuber, A., Mordasini, C., Burn, R., et al. 2021a, A&A, 656, A69
Emsenhuber, A., Mordasini, C., Burn, R., et al. 2021b, A&A, 656, A70
Fabrycky, D. C., Lissauer, J. J., Ragozzine, D., et al. 2014, ApJ, 790, 146
Fang, J. & Margot, J.-L. 2013, ApJ, 767, 115
Fortier, A., Alibert, Y., Carron, F., Benz, W., & Dittkrist, K.-M. 2013, A&A, 549, A44
Garaud, P. 2011, ApJL, 728, L30
Gilbert, G. J. & Fabrycky, D. C.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' 2020, The Astronomical Journal, 159, 281 Gladman, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' 1993, Icarus, 106, 247 He, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', Ford, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', & Ragozzine, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' 2019, Monthly Notices of the Royal Astronomical Society, 490, 4575 He, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', Ford, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', & Ragozzine, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' 2021, AJ, 161, 16 Hueso, R.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' & Guillot, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' 2005, Astronomy & Astrophysics, 442, 703 Hunter, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' 2007, Computing in science & engineering, 9, 90 Jin, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', Mordasini, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', Parmentier, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' 2014, ApJ, 795, 65 Kipping, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' 2018, Monthly Notices of the Royal Astronomical Society, 473, 784 Kokubo, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' & Ida, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' 1998, Icarus, 131, 171 Kokubo, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' & Ida, S.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' 2002, The Astrophysical Journal, 581, 666 Laskar, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' 1997, Large scale chaos and the spacing of the inner planets.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', Tech.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' rep.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' Laskar, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' 2000, Physical Review Letters, 84, 3240 Laskar, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' & Petit, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' 2017, Astronomy & Astrophysics, 605, 1 Lin, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', Bodenheimer, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', & Richardson, D.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' 1996, Nature, Volume 380, Issue 6575, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' 606-607 (1996).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', 380, 606 Lin, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' & Ida, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' 1997, The Astrophysical Journal, Volume 477, Issue 2, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' 781-791.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', 477, 781 Lissauer, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', Ragozzine, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', Fabrycky, D.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' 2011, The Astrophysical Journal Supplement Series, 197, 8 Lodders, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' 2003, The Astrophysical Journal, 591, 1220 Lynden-Bell, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' & Pringle, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' 1974, Monthly Notices of the Royal Astronom- ical Society, 168, 603 Manara, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', Mordasini, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', Testi, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' 2019, Astronomy & Astrophysics, 631, L2 Marboeuf, U.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', Thiabaud, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', Alibert, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', Cabral, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', & Benz, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' 2014a, Astron- omy and Astrophysics, 570 [arXiv:1407.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content='7282] Marboeuf, U.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', Thiabaud, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', Alibert, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', Cabral, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', & Benz, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' 2014b, Astron- omy and Astrophysics, 570 [arXiv:1407.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content='7271] Matsuyama, I.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', Johnstone, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', & Murray, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' 2003, The Astrophysical Journal, 585, L143 Mayor, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' & Queloz, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' 1995, Nature, 378, 355 Article number, page 11 of 12 A&A proofs: manuscript no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' 44705corr Millholland, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', Wang, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', & Laughlin, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' 2017, The Astrophysical Journal, 849, L33 Millholland, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' & Winn, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' N.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' 2021, ApJ, 920, L34 Mishra, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', Alibert, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', Leleu, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' 2021, Astronomy & Astrophysics, 656, A74 Mishra, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', Alibert, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', & Udry, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' 2019, in EPSC-DPS Joint Meeting 2019, held 15-20 September 2019 in Geneva, Switzerland, id.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' EPSC-DPS2019-1616, Vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' 2019, EPSC–DPS2019–1616 Mishra, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', Alibert, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', Udry, S.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', & Mordasini, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' 2023, Astronomy & Astro- physics Mordasini, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' 2018, in Handbook of Exoplanets, ed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' Deeg & J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' Bel- monte, 143 Mordasini, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', Alibert, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', & Benz, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' 2009, Astronomy & Astrophysics, 501, 1139 Mordasini, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', Alibert, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', Georgy, C.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' 2012a, Astronomy & Astrophysics, 547, A112 Mordasini, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', Alibert, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', Klahr, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', & Henning, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' 2012b, Astronomy & Astro- physics, 547, A111 Mulders, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', O’brien, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', Ciesla, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', Apai, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', & Pascucci, I.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' 2020 Mulders, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', Pascucci, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', Ciesla, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', & Fernandes, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' 2021 [arXiv:2107.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content='12520] Nakamoto, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' & Nakagawa, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' 1994, The Astrophysical Journal, 421, 640 Obertas, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', Van Laerhoven, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', & Tamayo, D.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' 2017, Icarus [arXiv:1703.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content='08426] Oliphant, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' 2006, A guide to NumPy, Vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' 1 (Trelgol Publishing USA) Paardekooper, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', Baruteau, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', & Kley, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' 2011, Monthly Notices of the Royal Astronomical Society, 410, 293 pandas development team, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' 2020, pandas-dev/pandas: Pandas Petigura, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', Marcy, G.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', Winn, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' 2018, The Astronomical Journal, 155, 89 Petit, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', Laskar, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', & Boué, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' 2018, Astronomy & Astrophysics, 617, A93 Pollack, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=' B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', Hubickyj, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', Bodenheimer, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dE0T4oBgHgl3EQfdQAz/content/2301.02373v1.pdf'} +page_content=', et al.' 
diff --git a/2dAyT4oBgHgl3EQfPvZl/content/tmp_files/2301.00030v1.pdf.txt b/2dAyT4oBgHgl3EQfPvZl/content/tmp_files/2301.00030v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..3a22078ade2e5ac2976a6d16f4910ac6d00852ea
--- /dev/null
+++ b/2dAyT4oBgHgl3EQfPvZl/content/tmp_files/2301.00030v1.pdf.txt
@@ -0,0 +1,642 @@
+arXiv:2301.00030v1 [math-ph] 28 Dec 2022
+Duality family of KdV equation
+Xin Gu,a Yuan-Yuan Liu,b Wen-Du Li,c,1 and Wu-Sheng Dai a,2
+aDepartment of Physics, Tianjin University, Tianjin 300350, P.R. China
+bTheoretical Physics Division, Chern Institute of Mathematics, Nankai University, P.R. China
+cCollege of Physics and Materials Science, Tianjin Normal University, Tianjin 300387, P.R. China
+Abstract: It is revealed that there exist duality families of the KdV-type equation. A duality family consists of an infinite number of generalized KdV (GKdV) equations, and a duality transformation relates the GKdV equations within a family. Once one family member is solved, the duality transformation yields the solutions of all other family members. We show some dualities as examples, such as the soliton solution-soliton solution duality and the periodic solution-soliton solution duality.
+1 liwendu@tjnu.edu.cn
+2 daiwusheng@tju.edu.cn
+Contents
+1 Introduction
+2 Duality family of GKdV equation
+3 Duality family of KdV equation: Example
+4 Conclusion
+1 Introduction
+After Russell observed the solitary wave phenomenon, the study of nonlinear evolution equations began in physics and mathematics [1].
+When Korteweg and de Vries studied water waves in the long-wave, small-but-finite-amplitude approximation, they derived the Korteweg-de Vries (KdV) equation [1-3],
+    ∂u/∂t − 6u ∂u/∂x + ∂³u/∂x³ = 0.   (1.1)
+The KdV equation is a basic model among nonlinear evolution equations [4, 5]. It describes many physical phenomena, such as waves in anharmonic crystals [6], waves in bubble-liquid mixtures [7], ion acoustic waves [8-10], and waves in warm plasma [8-10].
+Soliton solution. The solitary wave solutions of the KdV equation are known as solitons. The velocity of a solitary wave is related to its magnitude [11], and after a collision a soliton retains its original magnitude, shape, and velocity [12, 13]. The theory of solitons appears in biochemistry, nonlinear optics, mathematical biosciences, fluid dynamics, plasma physics, nuclear physics, and geophysics [14]. Many approaches to calculating soliton solutions have been developed [15, 16], such as the Painlevé analysis method, the Bäcklund transformation method, the Hirota bilinear method, the inverse scattering method, and the Darboux transformation method [1]. These methods apply not only to the soliton solution of the KdV equation but also to other partial differential equations [17]. Each method, however, has its own limits of applicability, and there is no universal method for solving nonlinear partial differential equations in general [18].
+Modified KdV (mKdV) equation and generalized KdV (GKdV) equation. The KdV equation is a special case of the GKdV equation, which in general reads [19]
+    ∂u/∂t − f(u) ∂u/∂x + ∂³u/∂x³ = 0.   (1.2)
+The GKdV equation recovers the KdV equation (1.1) when f(u) = 6u.
+A special GKdV equation with f(u) = −αu^k is the KdV-type equation with a power-law nonlinearity [20],
+    ∂u/∂t + αu^k ∂u/∂x + ∂³u/∂x³ = 0,   (1.3)
+and the mKdV equation is Eq. (1.3) with k = 2 and α = 6 [21].
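The statements above can be made concrete with a quick numerical sanity check: the well-known 1-soliton u(x, t) = −(c/2) sech²(√c (x − ct)/2) should make the left-hand side of (1.1) vanish. The sketch below uses only the Python standard library and central finite differences; the step size h, the velocity c = 1, and the sample grid are arbitrary choices:

```python
import math

def soliton(x, t, c=1.0):
    # 1-soliton of the KdV equation u_t - 6*u*u_x + u_xxx = 0
    return -0.5 * c / math.cosh(0.5 * math.sqrt(c) * (x - c * t)) ** 2

def kdv_residual(x, t, h=1e-2):
    # central finite differences for u_t, u_x and u_xxx
    u = soliton
    u_t = (u(x, t + h) - u(x, t - h)) / (2 * h)
    u_x = (u(x + h, t) - u(x - h, t)) / (2 * h)
    u_xxx = (u(x + 2 * h, t) - 2 * u(x + h, t)
             + 2 * u(x - h, t) - u(x - 2 * h, t)) / (2 * h ** 3)
    return u_t - 6 * u(x, t) * u_x + u_xxx

# residual on a grid around the soliton: O(h^2), far below the field values
print(max(abs(kdv_residual(x / 10, 0.3)) for x in range(-30, 31)))
```

The printed residual is at the level of the discretization error rather than of the solution itself, which is the expected signature of an exact solution.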
+The Miura transformation establishes a one-to-one correspondence between the solutions of the KdV equation and the solutions of the mKdV equation [22]. The mKdV equation has a rich physical background [23, 24]: it can describe a bounded particle propagating in a one-dimensional nonlinear lattice with a harmonic force [25], small-amplitude ion acoustic waves propagating in plasma physics [8], and thermal pulses propagating through a single crystal of sodium fluoride [26, 27].
+Duality and duality family. Newton in the Principia revealed a duality between gravitation and elasticity in classical mechanics, now called the Newton-Hooke duality [28]. E. Kasner and V. I. Arnol'd independently found a generalized duality between power potentials: two power potentials U(r) = ξr^a and V(r) = ηr^A are dual if (a+2)/2 = 2/(A+2), known as the Kasner-Arnol'd theorem [29-31].
+Recently, we found that such a duality exists generally in classical mechanics, quantum mechanics, and scalar fields, and presented the duality among arbitrary potentials [32]. The duality is not merely a duality between two potentials; there exist duality families [32]. Each duality family consists of an infinite number of potentials, every one of which is dual to all the others. Once the solution of one family member is obtained, the solutions of all other members follow from the duality transformation. Therefore, the duality relation can be used to find solutions in classical mechanics, quantum mechanics, field theory, and nonlinear equations (such as the Gross-Pitaevskii equation) [33-35]. The duality can also be used to classify long-range potentials in quantum mechanics [36].
+In this paper, we reveal duality and duality families for the GKdV equation.
+The duality transformation maps the solution of a GKdV equation to the solution of its dual GKdV equation. A GKdV duality family consists of an infinite number of GKdV equations that are dual to one another, and the solutions of all GKdV equations in a duality family can be obtained from the solution of one solved family member by the duality transformation. In this way, we can obtain a series of exact solutions of GKdV equations.
+As an example, we discuss the KdV equation duality family in which the KdV equation (1.1) and the KdV-type equation with a power-law nonlinearity (1.3) are family members. The duality transformation gives a series of 1-soliton solutions of GKdV equations from a 1-soliton solution of the KdV equation (1.1). We also consider the duality between the periodic solution of the KdV equation and the soliton solution of the mKdV equation.
+In particular, since the solutions of all GKdV equations in a duality family can be obtained from the solution of one family member by the duality transformation, we can develop an indirect approach for solving GKdV equations: (1) construct the duality family of the equation; (2) look for an 'easy' equation in the duality family and solve it; (3) solve the wanted equation by the duality transformation.
+In section 2, we present the duality and duality family of the GKdV equation. In section 3, we consider two examples: (1) solving the KdV equation with a power-law nonlinearity from the KdV equation by the duality transformation; (2) the duality between the periodic solution of the KdV equation and the soliton solution of the mKdV equation. The conclusion is given in section 4.
+2 Duality family of GKdV equation
+In this section, we give the duality and the duality family of the traveling wave GKdV equation.
+The solutions of a GKdV equation can be obtained from those of its dual equation by the duality transformation.
+The traveling wave with velocity C of the GKdV equation (1.2), with u(x, t) = u(z) and z = x + Ct, satisfies
+    d³u/dz³ + [C − f(u)] du/dz = 0.   (2.1)
+The traveling wave GKdV equation (2.1) has the following duality relation: two traveling wave GKdV equations,
+    d³u/dz³ + [C − f(u)] du/dz = 0,   (2.2)
+    d³v/dζ³ + [C̄ − g(v)] dv/dζ = 0,   (2.3)
+are dual to each other if
+    (1/C) u^{−2} [G − U(u) − Fu] = (1/C̄) v^{−2} [Ḡ − V(v) − F̄v],   (2.4)
+where
+    d²U(u)/du² = −f(u),   (2.5)
+    d²V(v)/dv² = −g(v),   (2.6)
+    F = −[d²u/dz² + Cu + dU(u)/du],   (2.7)
+    F̄ = −[d²v/dζ² + C̄v + dV(v)/dv],   (2.8)
+    G = (1/2)(du/dz)² + (1/2)Cu² + U(u) + Fu,   (2.9)
+    Ḡ = (1/2)(dv/dζ)² + (1/2)C̄v² + V(v) + F̄v;   (2.10)
+then their solutions satisfy
+    u ↔ v^σ,   (2.11)
+    z ↔ √(C̄/C) σζ.   (2.12)
+Here σ is an arbitrarily chosen constant, and a bar marks the quantities of the dual equation.
+Integral of motion. Before going on, we first illustrate the meaning of G, F, Ḡ, and F̄, taking G and F as examples.
+Broadly speaking, G and F are both integrals of motion of the equation of motion (2.2). In principle, an integral of the equation of motion over time is known as an integral of motion. Here G and F are the integration constants obtained by integrating the traveling wave equation (2.2) over z and u, respectively; we still call them integrals of motion.
+Multiplying both sides of the GKdV equation (2.2) by dz, integrating, and using (2.5) gives d²u/dz² + Cu + dU(u)/du = −F, i.e., Eq. (2.7), where F is the integration constant of the integral over z.
+Similarly, multiplying both sides of (2.7) by du and integrating gives (1/2)(du/dz)² + (1/2)Cu² + U(u) + Fu = G, i.e., Eq. (2.9), where G is the integration constant of the integral over u and ∫ du d²u/dz² = ∫ dz (du/dz)(d²u/dz²) = (1/2) ∫ dz d/dz (du/dz)² = (1/2)(du/dz)² is used.
+Proof of duality relation.
+Substituting the duality transformations (2.11) and (2.12) into (2.7) gives
+    (C/C̄) d²v/dζ² + (C/C̄)(σ−1) v^{−1} (dv/dζ)² + σCv + v^{2(1−σ)} dU(v^σ)/dv + σ v^{1−σ} F = 0.   (2.13)
+By (2.9), we have
+    (C/C̄)(σ−1) v^{−1} (dv/dζ)² = 2(σ−1) v^{1−2σ} [G − U(v^σ) − F v^σ] − C(σ−1) v.   (2.14)
+Using (2.14) to eliminate the term (σ−1) v^{−1} (dv/dζ)² in (2.13), we arrive at
+    (C/C̄) d²v/dζ² + Cv + 2(σ−1) v^{1−2σ} [G − U(v^σ) − F v^σ] + v^{2(1−σ)} dU(v^σ)/dv + σ v^{1−σ} F = 0.   (2.15)
+By the duality transformation (2.4), we can obtain
+    V(v) = Ḡ − F̄v − (C̄/C) v^{2−2σ} [G − U(v^σ) − F v^σ].   (2.16)
+Taking the derivative of (2.16) with respect to v gives
+    dV(v)/dv = −F̄ + 2(C̄/C)(σ−1) v^{1−2σ} [G − U(v^σ) − F v^σ] + (C̄/C) v^{2(1−σ)} [dU(v^σ)/dv + σ v^{σ−1} F].   (2.17)
+Substituting (2.17) into (2.15) gives
+    d²v/dζ² + C̄v + dV(v)/dv + F̄ = 0.   (2.18)
+Then, taking the derivative with respect to ζ and using (2.6), we arrive at (2.3).
+Discussion of U. The relation between f(u) in the GKdV equation (2.2) and U(u) in (2.5) is not unique: U(u; a, b) = U(u) + au + b and U(u) lead to the same f(u), and both correspond to the GKdV equation (1.2).
+The integral of motion F corresponding to U(u; a, b) is, by (2.7), F(a, b) = −[d²u/dz² + Cu + dU(u; a, b)/du] = F − a; the integral of motion G corresponding to U(u; a, b) is, by (2.9), G(a, b) = (1/2)(du/dz)² + (1/2)Cu² + U(u; a, b) + F(a, b)u = G + b. Therefore, by (2.4), the duality transformation given by U(u; a, b) is
+    (1/C) u^{−2} [G(a, b) − U(u; a, b) − F(a, b)u] = (1/C̄) v^{−2} [Ḡ − V(v; a, b) − F̄v].   (2.19)
+Here V(v; a, b) is the dual of U(u; a, b).
+Substituting U(u; a, b), F(a, b), and G(a, b) into the duality transformation (2.19) gives
+    V(v; a, b) = Ḡ − F̄v − (C̄/C) v^{2−2σ} [G − U(v^σ) − F v^σ] = V(v).
+(2.20)
+That is, although the correspondence between f(u) and U(u) in the GKdV equation is not unique, and the same f(u) corresponds to different choices of U(u), the choice of U(u) does not influence the duality of the GKdV equation.
+3 Duality family of KdV equation: Example
+In this section, we consider a special duality family of the GKdV equation as an example; the KdV equation and the mKdV equation are family members of this duality family. The solutions of all family members are related by the duality transformation, so in a duality family containing the KdV equation we can solve every GKdV equation in the family from the solution of the KdV equation. Below we obtain the solution of the KdV equation with a power-law nonlinearity from the solution of the KdV equation; the mKdV equation is the power-law-nonlinearity KdV equation with power 2.
+Duality family of the KdV equation and the KdV equation with a power-law nonlinearity. The KdV equation (1.1), with z = x − Ct,
+    d³u/dz³ − (C + 6u) du/dz = 0,   (3.1)
+has the 1-soliton solution [37]
+    u(z) = −(C/2) sech²(√C z/2).   (3.2)
+The soliton solution is a localized traveling wave solution; localization means, taking the 1-soliton solution (3.2) as an example, that u(z) → 0 as z → ±∞. The integrals of motion of the 1-soliton solution (3.2) are, by (2.7), (2.9), and (3.2),
+    F = 0 and G = 0.   (3.3)
+The dual equation of the traveling wave KdV equation given by the duality transformation (2.4) is then
+    d³v/dζ³ − [C̄ + (C̄/C)(2+σ)(1+σ) v^σ] dv/dζ = 0.   (3.4)
+Since σ can be chosen arbitrarily, (3.4) is not a single equation but forms a duality family: all the GKdV equations labeled by different σ in the duality family are dual equations of the KdV equation.
+By (2.11) and (2.12), we can obtain the solution of Eq. (3.4),
+    v(ζ) = [−(C/2) sech²(√C̄ σζ/2)]^{1/σ},   (3.5)
+where ζ = x − C̄t has velocity −C̄.
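The duality transformation can be exercised numerically. The sketch below builds the σ = 3 member of the family directly from the KdV 1-soliton (3.2) by rescaling the argument and taking the real cube root, and then checks that the result satisfies the power-law equation ∂v/∂t + αv^σ ∂v/∂x + ∂³v/∂x³ = 0 with α = −(C̄/C)(2+σ)(1+σ), writing Cbar for the velocity of the dual wave. The values C = 1, Cbar = 2, the step size, and the sample grid are arbitrary choices (plain Python, central finite differences):

```python
import math

C, Cbar, sigma = 1.0, 2.0, 3          # arbitrary velocities and an odd power
alpha = -(Cbar / C) * (2 + sigma) * (1 + sigma)

def u(z):
    # KdV 1-soliton (3.2), the solved family member
    return -0.5 * C / math.cosh(0.5 * math.sqrt(C) * z) ** 2

def v(x, t):
    # dual solution: v^sigma = u(z) with z = sqrt(Cbar/C)*sigma*zeta, zeta = x - Cbar*t
    w = u(math.sqrt(Cbar / C) * sigma * (x - Cbar * t))
    return math.copysign(abs(w) ** (1.0 / sigma), w)  # real root, valid for odd sigma

def residual(x, t, h=5e-3):
    # v_t + alpha*v^sigma*v_x + v_xxx for the power-law GKdV equation
    v_t = (v(x, t + h) - v(x, t - h)) / (2 * h)
    v_x = (v(x + h, t) - v(x - h, t)) / (2 * h)
    v_xxx = (v(x + 2 * h, t) - 2 * v(x + h, t)
             + 2 * v(x - h, t) - v(x - 2 * h, t)) / (2 * h ** 3)
    return v_t + alpha * v(x, t) ** sigma * v_x + v_xxx

print(max(abs(residual(x / 10, 0.1)) for x in range(-10, 11)))
```

The residual stays at the discretization-error level, so the rescaled and root-taken soliton is indeed a solution of the dual equation without any further solving.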
+Instead of z, rewriting the dual equation (3.4) in terms of (t, x) gives
+    ∂v/∂t + α v^σ ∂v/∂x + ∂³v/∂x³ = 0,   (3.6)
+where α = −(C̄/C)(2+σ)(1+σ). When σ is taken as a positive integer, (3.6) is the KdV equation with a power-law nonlinearity, and the solution (3.5) becomes
+    v(x, t) = [−(C/2) sech²(√C̄ σ(x − C̄t)/2)]^{1/σ},   (3.7)
+or equivalently v(x, t) = [C̄(2+σ)(1+σ) / (2α cosh²(√C̄ σ(x − C̄t)/2))]^{1/σ}, which agrees with Ref. [38].
+In this duality family, the family member σ = 1 is the KdV equation (1.1), and the family member σ = 2 is the mKdV equation
+    ∂v/∂t − 12(C̄/C) v² ∂v/∂x + ∂³v/∂x³ = 0.   (3.8)
+Eq. (3.7) with σ = 2 gives the 1-soliton solution of the mKdV equation (3.8),
+    v(x, t) = ±√(−C/2) sech(√C̄ (x − C̄t)).   (3.9)
+Now, by the duality relation, we have obtained the solutions of all family members from the solution of the KdV equation.
+Periodic solution-soliton solution duality. A duality also exists between the periodic solutions and the soliton solutions of the GKdV equation. We take the periodic solution of the KdV equation and the soliton solution of the mKdV equation as an example.
+The KdV equation (1.1) has the periodic solution
+    u(x, t) = (C/6)[1 + 3 tan²(√C (x − Ct)/2)].   (3.10)
+With z = x − Ct, the KdV equation (1.1) becomes (3.1), and its solution (3.10) becomes
+    u(z) = (C/6)[1 + 3 tan²(√C z/2)],   (3.11)
+with period 2π/√C.
+The integrals of motion of the periodic solution (3.10) of the KdV equation are, by (2.7), (2.9), and (3.10),
+    F = 0,  G = −C³/54.   (3.12)
+The dual equation of the traveling wave KdV equation given by the duality transformation (2.4) is then
+    d³v/dζ³ + [C̄ − (1/27)(1−σ)(1−2σ) C̄C² v^{−2σ} + (C̄/C)(σ+1)(σ+2) v^σ] dv/dζ = 0,   (3.13)
+where ζ = x + C̄t. The duality transformations (2.11) and (2.12) give the solution of (3.13),
+    v(ζ) = {(C/6)[1 − 3 tanh²(√C̄ σζ/2)]}^{1/σ}.
+(3.14)
+Letting σ run over all possible values gives all the equations in the duality family and their solutions.
+The family member with σ = 1 and C̄ = −C in the duality family is the KdV equation (1.1). Different from the 1-soliton case (3.4), however, the family member σ = −1 is now the traveling wave mKdV equation
+    d³v/dζ³ + C̄[1 − (2/9) C² v²] dv/dζ = 0,   (3.15)
+or, with ζ = x + C̄t and C̄ = 27/C²,
+    ∂v/∂t − 6v² ∂v/∂x + ∂³v/∂x³ = 0,   (3.16)
+which, by (3.14), has the traveling wave solution
+    v(x, t) = (2√C̄/√3) / [1 − 3 tanh²(√C̄ (x + C̄t)/2)].   (3.17)
+It can be directly verified that v(x, t) → −√(3C̄)/3 when x, t → ±∞, so (3.17) is a soliton solution of the mKdV equation (3.16).
+In this example, the dual of the periodic solution is a soliton solution.
+Indirect approach for solving equations. The existence of duality families gives us an indirect approach to solving equations. To solve an equation, we can (1) find its duality family, (2) look for and solve an 'easy' family member, and (3) obtain the solution of the given equation by the duality transformation.
+4 Conclusion
+This paper reveals a duality among GKdV equations; all the GKdV equations that are dual to each other form a duality family. In a duality family, the solutions of different family members are related by the duality transformation.
+In a duality family, we only need to solve one family member, and the duality transformation then gives the solutions of all other family members. This allows us to develop an indirect approach to solving the GKdV equation.
+In this paper, as an example, we discuss the GKdV duality family containing the KdV equation and the KdV equation with a power-law nonlinearity, seeking the 1-soliton solution of the latter from a 1-soliton solution of the KdV equation by the duality relation. In another example, we consider the periodic solution-soliton solution duality.
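The periodic solution-soliton solution duality can also be checked numerically: away from the points where 1 − 3 tanh² vanishes, the traveling wave (3.17) should make the left-hand side of the mKdV equation (3.16) vanish. The sketch below uses plain Python with central finite differences; Cbar = 3 (the velocity of the dual wave), the step size, and the sample window are arbitrary choices:

```python
import math

Cbar = 3.0  # arbitrary positive velocity of the dual wave

def v(x, t):
    # traveling wave (3.17) of the mKdV equation, zeta = x + Cbar*t
    T = math.tanh(0.5 * math.sqrt(Cbar) * (x + Cbar * t))
    return 2.0 * math.sqrt(Cbar) / (math.sqrt(3.0) * (1.0 - 3.0 * T * T))

def mkdv_residual(x, t, h=5e-3):
    # v_t - 6*v^2*v_x + v_xxx for Eq. (3.16)
    v_t = (v(x, t + h) - v(x, t - h)) / (2 * h)
    v_x = (v(x + h, t) - v(x - h, t)) / (2 * h)
    v_xxx = (v(x + 2 * h, t) - 2 * v(x + h, t)
             + 2 * v(x - h, t) - v(x - 2 * h, t)) / (2 * h ** 3)
    return v_t - 6 * v(x, t) ** 2 * v_x + v_xxx

# sample well past the singular points where 1 - 3*tanh^2 = 0
print(max(abs(mkdv_residual(x / 10, 0.0)) for x in range(20, 41)))
```

The residual is at the discretization-error level on the sampled window, consistent with (3.17) being an exact solution of (3.16) away from its singular points.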
+By the duality transformation, we give a soliton solution of the mKdV equation from a periodic solution of the KdV equation.
+Acknowledgments
+We are very indebted to Dr. G. Zeitrauman for his encouragement. This work is supported in part by the Special Funds for Theoretical Physics Research Program of the NSFC under Grant No. 11947124, and by the NSFC under Grant Nos. 11575125 and 11675119.
+References
+[1] M. J. Ablowitz and P. A. Clarkson, Solitons, Nonlinear Evolution Equations and Inverse Scattering, vol. 149. Cambridge University Press, 1991.
+[2] D. J. Korteweg and G. de Vries, On the change of form of long waves advancing in a rectangular canal, and on a new type of long stationary waves, Philos. Mag. 39 (1895) 422–443.
+[3] D. H. Peregrine, Calculations of the development of an undular bore, Journal of Fluid Mechanics 25 (1966), no. 2, 321–330.
+[4] S. B. G. Karakoc and K. K. Ali, New exact solutions and numerical approximations of the generalized KdV equation.
+[5] A. Silem, H. Wu, and D.-J. Zhang, Nonisospectral effects on generating localized waves, Communications in Theoretical Physics 73 (2021), no. 11, 115002.
+[6] N. J. Zabusky, A synergetic approach to problems of nonlinear dispersive wave propagation and interaction, in Nonlinear Partial Differential Equations, pp. 223–258. Elsevier, 1967.
+[7] L. van Wijngaarden, On the equations of motion for mixtures of liquid and gas bubbles, Journal of Fluid Mechanics 33 (1968), no. 3, 465–474.
+[8] K. Konno and Y. H. Ichikawa, A modified Korteweg-de Vries equation for ion acoustic waves, Journal of the Physical Society of Japan 37 (1974), no. 6, 1631–1636.
+[9] F. Haas, L. Garcia, J. Goedert, and G. Manfredi, Quantum ion-acoustic waves, Physics of Plasmas 10 (2003), no. 10, 3858–3866.
+[10] H. Schamel, A modified Korteweg-de Vries equation for ion acoustic waves due to resonant electrons, Journal of Plasma Physics 9 (1973), no. 3, 377–387.
+[11] L. D. Faddeev and V. E.
+Korepin, Quantum theory of solitons, Physics Reports 42 (1978), no. 1, 1–87.
+[12] A. Korkmaz, Numerical algorithms for solutions of Korteweg-de Vries equation, Numerical Methods for Partial Differential Equations 26 (2010), no. 6, 1504–1521.
+[13] G. L. Lamb Jr., Elements of Soliton Theory. New York, 1980.
+[14] A. Biswas, 1-soliton solution of the K(m, n) equation with generalized evolution, Physics Letters A 372 (2008), no. 25, 4601–4602.
+[15] M. Wang, Y. Zhou, and Z. Li, Application of a homogeneous balance method to exact solutions of nonlinear equations in mathematical physics, Physics Letters A 216 (1996), no. 1-5, 67–75.
+[16] N. Kudryashov, Exact soliton solutions of the generalized evolution equation of wave dynamics, Journal of Applied Mathematics and Mechanics 52 (1988), no. 3, 361–365.
+[17] I. Dorfman, Dirac Structures and Integrability of Nonlinear Evolution Equations, vol. 18. Wiley, 1993.
+[18] P. G. Drazin and R. S. Johnson, Solitons: An Introduction, vol. 2. Cambridge University Press, 1989.
+[19] M. M. Melo, Generalized solutions to the gKdV equation, Electronic Journal of Differential Equations (EJDE) 2010 (2010).
+[20] A.-M. Wazwaz, New sets of solitary wave solutions to the KdV, mKdV, and the generalized KdV equations, Communications in Nonlinear Science and Numerical Simulation 13 (2008), no. 2, 331–339.
+[21] D.-J. Zhang, S.-L. Zhao, Y.-Y. Sun, and J. Zhou, Solutions to the modified Korteweg-de Vries equation, Reviews in Mathematical Physics 26 (2014), no. 07, 1430006.
+[22] R. M. Miura, C. S. Gardner, and M. D. Kruskal, Korteweg-de Vries equation and generalizations. II. Existence of conservation laws and constants of motion, Journal of Mathematical Physics 9 (1968), no. 8, 1204–1209.
+[23] D.-J. Zhang, Wronskian solutions of integrable systems, in Nonlinear Systems and Their Remarkable Mathematical Structures, pp. 415–444. Chapman and Hall/CRC, 2019.
+[24] S.-L. Zhao and D.-J.
+Zhang, Rational solutions to Q3δ in the Adler-Bobenko-Suris list and degenerations, Journal of Nonlinear Mathematical Physics 26 (2019), no. 1, 107–132.
+[25] M. Wadati, Wave propagation in nonlinear lattice. I, Journal of the Physical Society of Japan 38 (1975), no. 3, 673–680.
+[26] V. Narayanamurti and C. Varma, Nonlinear propagation of heat pulses in solids, Physical Review Letters 25 (1970), no. 16, 1105.
+[27] F. Tappert and C. Varma, Asymptotic theory of self-trapping of heat pulses in solids, Physical Review Letters 25 (1970), no. 16, 1108.
+[28] S. Chandrasekhar, Newton's Principia for the Common Reader. Oxford University Press, 2003.
+[29] V. I. Arnol'd, Huygens and Barrow, Newton and Hooke: Pioneers in Mathematical Analysis and Catastrophe Theory from Evolvents to Quasicrystals. Springer Science & Business Media, 1990.
+[30] T. Needham, Visual Complex Analysis. Oxford University Press, 1998.
+[31] V. I. Arnol'd, Mathematical Methods of Classical Mechanics, vol. 60. Springer Science & Business Media, 2013.
+[32] W.-D. Li and W.-S. Dai, Duality family of scalar field, Nuclear Physics B 972 (2021) 115569.
+[33] S.-L. Li, Y.-J. Chen, Y.-Y. Liu, W.-D. Li, and W.-S. Dai, Solving eigenproblem by duality transform, Annals of Physics 443 (2022) 168962.
+[34] Y.-J. Chen, S.-L. Li, W.-D. Li, and W.-S. Dai, An indirect approach for quantum-mechanical eigenproblems: duality transforms, Communications in Theoretical Physics 74 (2022), no. 5, 055103.
+[35] Y.-Y. Liu, W.-D. Li, and W.-S. Dai, Exactly solvable Gross-Pitaevskii type equations, Journal of Physics Communications 5 (2021), no. 1, 015011.
+[36] W.-D. Li and W.-S. Dai, Long-range potential scattering: Converting long-range potential to short-range potential by tortoise coordinate, Journal of Mathematical Physics 62 (2021), no. 12, 122102.
+[37] G. Griffiths and W. E. Schiesser, Traveling Wave Analysis of Partial Differential Equations: Numerical and Analytical Methods with MATLAB and Maple.
Academic Press, 2010.
+[38] M. Hayek, Constructing of exact solutions to the KdV and Burgers equations with power-law nonlinearity by the extended G'/G-expansion method, Applied Mathematics and Computation 217 (2010), no. 1, 212–221.
 diff --git a/2dAyT4oBgHgl3EQfPvZl/content/tmp_files/load_file.txt b/2dAyT4oBgHgl3EQfPvZl/content/tmp_files/load_file.txt new file mode 100644 index 0000000000000000000000000000000000000000..a823632578bdaa0d870a706d14c0fb14be147c93 --- /dev/null +++ b/2dAyT4oBgHgl3EQfPvZl/content/tmp_files/load_file.txt @@ -0,0 +1,471 @@
+filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf,len=470
+arXiv:2301.00030v1 [math-ph] 28 Dec 2022
+Duality family of KdV equation
+Xin Gu,a Yuan-Yuan Liu,b Wen-Du Li,c,1 and Wu-Sheng Daia,2
+aDepartment of Physics, Tianjin University, Tianjin 300350, P.R. China
+bTheoretical Physics Division, Chern Institute of Mathematics, Nankai University, PR China
+cCollege of Physics and Materials Science, Tianjin Normal University, Tianjin 300387, PR China
+Abstract: It is revealed that there exist duality families of the KdV type equation. A duality family consists of an infinite number of generalized KdV (GKdV) equations. A duality transformation relates the GKdV equations in a duality family.
+Once a family member is solved, the duality transformation presents the solutions of all other family members. We show some dualities as examples, such as the soliton solution–soliton solution duality and the periodic solution–soliton solution duality.
+1liwendu@tjnu.edu.cn
+2daiwusheng@tju.edu.cn
+Contents
+1 Introduction 1
+2 Duality family of GKdV equation 3
+3 Duality family of KdV equation: Example 5
+4 Conclusion 7
+1 Introduction
+After Russell observed the solitary wave phenomenon, the study of nonlinear evolution equations began in physics and mathematics [1]. When Korteweg and de Vries studied the water wave in the long-wave approximation and finite small amplitude, they gave the Korteweg–de Vries (KdV) equation [1–3],
+\frac{\partial u}{\partial t} - 6u\frac{\partial u}{\partial x} + \frac{\partial^3 u}{\partial x^3} = 0.  (1.1)
+The KdV equation is a basic model in nonlinear evolution equations [4, 5]. The KdV equation describes many physical phenomena, such as waves in anharmonic crystals [6], waves in bubble-liquid mixtures [7], ion acoustic waves [8–10], and waves in warm plasma [8–10].
+Soliton solution. The solitary wave solutions of the KdV equation are known as solitons. The velocity of a solitary wave relates to its magnitude [11], and after a collision it retains its original magnitude, shape, and velocity [12, 13]. The theory of solitons emerges in biochemistry, nonlinear optics, mathematical biosciences, fluid dynamics, plasma physics, nuclear physics, and geophysics [14]. There have been many approaches to calculating the soliton solution [15, 16], such as the Painlevé analysis method, the Bäcklund transformation method, the Hirota bilinear method, the inverse scattering method, and the Darboux transformation method [1].
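The standard 1-soliton of the KdV equation in the form (1.1) can be checked directly. A minimal sketch with sympy, verifying numerically that u = -(C/2) sech²(√C (x - Ct)/2) leaves a vanishing residual in (1.1); the sample values C = 2, x = 0.3, t = 0.1 are arbitrary spot-check choices:

```python
import sympy as sp

x, t, C = sp.symbols('x t C', positive=True)

# Standard 1-soliton of u_t - 6 u u_x + u_xxx = 0, moving with velocity C
u = -(C / 2) * sp.sech(sp.sqrt(C) / 2 * (x - C * t))**2

# Residual of the KdV equation; mathematically it vanishes identically
residual = sp.diff(u, t) - 6 * u * sp.diff(u, x) + sp.diff(u, x, 3)

# Numerical spot check at an arbitrary point
val = residual.subs({C: 2, x: sp.Rational(3, 10), t: sp.Rational(1, 10)}).evalf()
print(abs(val) < 1e-9)
```

The same check works for any positive C, since the residual cancels term by term.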
+These methods apply not only to calculating the soliton solution of the KdV equation but also to other partial differential equations [17]. These methods have different limits in applications, and there is no universal method for solving nonlinear partial differential equations generally [18].
+Modified KdV (mKdV) equation and generalized KdV (GKdV) equation. The KdV equation is a special case of the GKdV equation. The GKdV equation generally is [19]
+\frac{\partial u}{\partial t} - f(u)\frac{\partial u}{\partial x} + \frac{\partial^3 u}{\partial x^3} = 0.  (1.2)
+The GKdV equation recovers the KdV equation (1.1) when f(u) = 6u. A special GKdV equation with f(u) = -\alpha u^k is the KdV type equation with a power-law nonlinearity [20],
+\frac{\partial u}{\partial t} + \alpha u^k \frac{\partial u}{\partial x} + \frac{\partial^3 u}{\partial x^3} = 0,  (1.3)
+and the mKdV equation is Eq. (1.3) with k = 2 and \alpha = 6 [21]. The Miura transformation establishes a one-to-one correspondence between the solutions of the KdV equation and the solutions of the mKdV equation [22]. The mKdV equation has a rich physical background [23, 24]. The mKdV equation can describe a bounded particle propagating in a one-dimensional nonlinear lattice with a harmonic force [25], small-amplitude ion acoustic waves propagating in plasma physics [8], and the thermal pulse propagating through a single crystal of sodium fluoride [26, 27].
+Duality and duality family. Newton in Principia revealed a duality between gravitation and elasticity in classical mechanics, now called the Newton–Hooke duality [28]. E. Kasner and V. I. Arnol'd independently found the generalized duality between power potentials: two power potentials U(r) = \xi r^a and V(r) = \eta r^A are dual if \frac{a+2}{2} = \frac{2}{A+2}, called the Kasner–Arnol'd theorem [29–31]. Recently, we found that such a duality exists generally in classical mechanics, quantum mechanics, and scalar fields, and presented the duality among arbitrary potentials [32]. The duality is not only a duality between two potentials: there exist duality families [32]. Each duality family consists of an infinite number of potentials; in a duality family, every potential is dual to all other potentials.
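The Miura correspondence mentioned here rests on an algebraic identity: with u = v² + v_x, the KdV residual of u equals (2v + ∂_x) applied to the mKdV residual of v, so every mKdV solution produces a KdV solution. A sketch verifying this classical identity symbolically with sympy (here the mKdV is taken in the form v_t - 6v²v_x + v_xxx = 0, matching Eq. (1.3) with k = 2, α = 6 up to the sign convention of (1.1)):

```python
import sympy as sp

x, t = sp.symbols('x t')
v = sp.Function('v')(x, t)

# Miura transformation: u = v^2 + v_x
u = v**2 + sp.diff(v, x)

# KdV residual for u and mKdV residual for v
kdv = sp.diff(u, t) - 6 * u * sp.diff(u, x) + sp.diff(u, x, 3)
mkdv = sp.diff(v, t) - 6 * v**2 * sp.diff(v, x) + sp.diff(v, x, 3)

# Miura identity: KdV[u] = (2 v + d/dx) mKdV[v]
identity = sp.expand(kdv - (2 * v * mkdv + sp.diff(mkdv, x)))
print(identity)  # 0
```

Since the identity holds for an arbitrary function v, the check involves no particular solution.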
+Once a family member's solution is obtained, we can obtain all other members' solutions by the duality transformation. Therefore, the duality relation can be used to find the solutions for classical mechanics, quantum mechanics, field theory, and nonlinear equations (such as the Gross–Pitaevskii equation) [33–35]. The duality can also be used to classify long-range potentials in quantum mechanics [36].
+In this paper, we reveal duality and duality families for the GKdV equation. The duality transformation can transform the solution of a GKdV equation into the solution of its dual GKdV equation. The GKdV equation duality family consists of an infinite number of GKdV equations that are dual to each other. The solution of all GKdV equations in a duality family can be obtained from the solution of one solved family member by the duality transformation.
+In this way, we can obtain a series of exact solutions of GKdV equations. As an example, we discuss the KdV equation duality family, in which the KdV equation (1.1) and the KdV type equation with a power-law nonlinearity (1.3) are family members. The duality transformation gives a series of 1-soliton solutions of GKdV equations from a 1-soliton solution of the KdV equation (1.1). We also consider the duality between the periodic solution of the KdV equation and the soliton solution of the mKdV equation.
+In particular, since the solution of all GKdV equations in a duality family can be obtained from the solution of one family member by the duality transformation, we can develop an indirect approach for solving GKdV equations: (1) constructing the duality family of this equation; (2) looking for an 'easy' equation in the duality family and solving the 'easy' equation; (3) solving the wanted equation by the duality transformation.
+In section 2, we present the duality and duality family of the GKdV equation. In section 3, we consider two examples: (1) solving the KdV equation with a power-law nonlinearity from the KdV equation by the duality transformation; (2) the duality between the periodic solution of the KdV equation and the soliton solution of the mKdV equation. The conclusion is given in section 4.
+2 Duality family of GKdV equation
+In this section, we give the duality and duality family of the traveling wave GKdV equation. The solutions of a GKdV equation can be obtained from its dual equation by the duality transformation.
+The traveling wave with a velocity C of the GKdV equation (1.2) is given by
+\frac{d^3 u}{dz^3} + [C - f(u)]\frac{du}{dz} = 0,  (2.1)
+where u(x, t) = u(z) and z = x + Ct.
+The traveling wave GKdV equation (2.1) has the following duality relation. Two traveling wave GKdV equations,
+\frac{d^3 u}{dz^3} + [C - f(u)]\frac{du}{dz} = 0,  (2.2)
+\frac{d^3 v}{d\zeta^3} + [\tilde{C} - g(v)]\frac{dv}{d\zeta} = 0,  (2.3)
+are dual to each other if
+\frac{1}{C}u^{-2}[G - U(u) - Fu] = \frac{1}{\tilde{C}}v^{-2}[\tilde{G} - V(v) - \tilde{F}v],  (2.4)
+where
+\frac{d^2 U(u)}{du^2} = -f(u),  (2.5)
+\frac{d^2 V(v)}{dv^2} = -g(v),  (2.6)
+F = -\left[\frac{d^2 u}{dz^2} + Cu + \frac{dU(u)}{du}\right],  (2.7)
+\tilde{F} = -\left[\frac{d^2 v}{d\zeta^2} + \tilde{C}v + \frac{dV(v)}{dv}\right],  (2.8)
+G = \frac{1}{2}\left(\frac{du}{dz}\right)^2 + \frac{1}{2}Cu^2 + U(u) + Fu,  (2.9)
+\tilde{G} = \frac{1}{2}\left(\frac{dv}{d\zeta}\right)^2 + \frac{1}{2}\tilde{C}v^2 + V(v) + \tilde{F}v;  (2.10)
+then their solutions satisfy
+u \leftrightarrow v^{\sigma},  (2.11)
+z \leftrightarrow \sqrt{\tilde{C}/C}\,\sigma\zeta.  (2.12)
+Here \sigma is an arbitrarily chosen constant.
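The substitution step behind (2.11) and (2.12) can be spot-checked numerically. A sketch with sympy, using illustrative test data that is not from the paper (a concrete profile v(ζ) = 2 + sin(ζ)/2, U(u) = u³/3, and arbitrary constants σ, C, C̃, F, with C̃ written as `Ct`): substituting u = vᵠ and z = √(C̃/C) σζ into Eq. (2.7) and multiplying by σ v^(1-σ) reproduces the transformed equation (2.13) for any v, solution or not.

```python
import sympy as sp

zeta, w = sp.symbols('zeta w', positive=True)

# Illustrative test data (not from the paper)
sigma, C, Ct, F = 3, sp.Integer(2), sp.Integer(5), sp.Rational(7, 10)
v = 2 + sp.sin(zeta) / 2
U = lambda s: s**3 / 3

# Duality transformation (2.11)-(2.12): u = v^sigma, z = sqrt(Ct/C)*sigma*zeta
u = v**sigma
ddz = lambda e: sp.diff(e, zeta) / (sigma * sp.sqrt(Ct / C))

# Left-hand side of Eq. (2.7), u'' + C*u + dU/du + F, as a function of zeta
dUdu = sp.diff(U(w), w).subs(w, u)
lhs7 = ddz(ddz(u)) + C * u + dUdu + F

# Transformed equation (2.13); dU(v^sigma)/dv computed via the symbol w
dUdv = sp.diff(U(w**sigma), w).subs(w, v)
lhs13 = (C / Ct) * (sp.diff(v, zeta, 2) + (sigma - 1) * sp.diff(v, zeta)**2 / v) \
    + sigma * C * v + v**(2 * (1 - sigma)) * dUdv + sigma * v**(1 - sigma) * F

# (2.13) is sigma * v^(1-sigma) times the left-hand side of (2.7)
gap = (sigma * v**(1 - sigma) * lhs7 - lhs13).subs(zeta, sp.Rational(3, 10))
print(abs(gap.evalf()) < 1e-8)
```

The gap is an exact algebraic zero; the tolerance only absorbs floating-point rounding in the evaluation.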
+Integral of motion. Before going on, we first illustrate the meaning of G, F, \tilde{G}, and \tilde{F}, taking G and F as examples. Broadly speaking, G and F are both integrals of motion for the equation of motion (2.2). In principle, the integral of the equation of motion over time is known as the integral of motion. Here G and F are integration constants of integrating the traveling wave equation (2.2) over z and u, respectively; we here still call them integrals of motion.
+Multiplying both sides of the GKdV equation (2.2) by dz and integrating, and using (2.5), gives
+\frac{d^2 u}{dz^2} + Cu + \frac{dU(u)}{du} = -F,
+i.e., Eq. (2.7), where F is the integration constant of the integral over z. Similarly, multiplying both sides of (2.7) by du and integrating gives
+\frac{1}{2}\left(\frac{du}{dz}\right)^2 + \frac{1}{2}Cu^2 + U(u) + Fu = G,
+i.e., (2.9), where G is the integration constant of the integral over u and
+\int du\,\frac{d^2 u}{dz^2} = \int dz\,\frac{du}{dz}\frac{d^2 u}{dz^2} = \frac{1}{2}\int dz\,\frac{d}{dz}\left(\frac{du}{dz}\right)^2 = \frac{1}{2}\left(\frac{du}{dz}\right)^2
+is used.
+Proof of duality relation.
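That G is constant along solutions can be spot-checked for the KdV member of the family. A sketch with sympy, taking U(u) = -u³ so that U''(u) = -6u = -f(u) (the KdV choice of U under (2.5)): dG/dz factors as u'(z) times the left-hand side of (2.7), and therefore vanishes on every solution.

```python
import sympy as sp

z, C, F = sp.symbols('z C F')
u = sp.Function('u')(z)

# KdV case: U(u) = -u**3, so U''(u) = -6u = -f(u) with f(u) = 6u
U = -u**3

# Integral of motion G from Eq. (2.9)
G = sp.Rational(1, 2) * sp.diff(u, z)**2 + C * u**2 / 2 + U + F * u

# dG/dz = u'(z) * (u'' + C*u + dU/du + F); the second factor is the
# left-hand side of Eq. (2.7), so dG/dz = 0 along solutions
factor = sp.diff(u, z, 2) + C * u - 3 * u**2 + F
print(sp.simplify(sp.diff(G, z) - sp.diff(u, z) * factor))  # 0
```

The factorization is exact for an arbitrary u(z); only setting the second factor to zero uses the equation of motion.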
+Substituting the duality transformations (2.11) and (2.12) into (2.7) gives
+\frac{C}{\tilde{C}}\frac{d^2 v}{d\zeta^2} + \frac{C}{\tilde{C}}(\sigma-1)v^{-1}\left(\frac{dv}{d\zeta}\right)^2 + \sigma Cv + v^{2(1-\sigma)}\frac{dU(v^{\sigma})}{dv} + \sigma v^{1-\sigma}F = 0.  (2.13)
+By (2.9), we have
+\frac{C}{\tilde{C}}(\sigma-1)v^{-1}\left(\frac{dv}{d\zeta}\right)^2 = 2(\sigma-1)v^{1-2\sigma}[G - U(v^{\sigma}) - Fv^{\sigma}] - C(\sigma-1)v.  (2.14)
+Using (2.14) to eliminate the term (\sigma-1)v^{-1}(dv/d\zeta)^2 in (2.13), we arrive at
+\frac{C}{\tilde{C}}\frac{d^2 v}{d\zeta^2} + Cv + 2(\sigma-1)v^{1-2\sigma}[G - U(v^{\sigma}) - Fv^{\sigma}] + v^{2(1-\sigma)}\frac{dU(v^{\sigma})}{dv} + \sigma v^{1-\sigma}F = 0.  (2.15)
+By the duality transformation (2.4), we can obtain
+V(v) = \tilde{G} - \tilde{F}v - \frac{\tilde{C}}{C}v^{2-2\sigma}[G - U(v^{\sigma}) - Fv^{\sigma}].  (2.16)
+Taking the derivative of (2.16) with respect to v gives
+\frac{dV(v)}{dv} = -\tilde{F} + 2\frac{\tilde{C}}{C}(\sigma-1)v^{1-2\sigma}[G - U(v^{\sigma}) - Fv^{\sigma}] + \frac{\tilde{C}}{C}v^{2(1-\sigma)}\left[\frac{dU(v^{\sigma})}{dv} + \sigma v^{\sigma-1}F\right].  (2.17)
+Substituting (2.17) into (2.15) gives
+\frac{d^2 v}{d\zeta^2} + \tilde{C}v + \frac{dV(v)}{dv} + \tilde{F} = 0.  (2.18)
+Then taking the derivative with respect to ζ and using (2.6), we arrive at (2.3).
+Discussion of U. The relation between f(u) in the GKdV equation (2.2) and U(u) in (2.5) is not unique. U(u; a, b) = U(u) + au + b and U(u) lead to the same f(u), and both correspond to the GKdV equation (1.2).
+The integral of motion F, corresponding to U(u; a, b), by (2.7), is
+F(a, b) = -\left[\frac{d^2 u}{dz^2} + Cu + \frac{dU(u; a, b)}{du}\right] = F - a;
+the integral of motion G, corresponding to U(u; a, b), by (2.9), is
+G(a, b) = \frac{1}{2}\left(\frac{du}{dz}\right)^2 + \frac{1}{2}Cu^2 + U(u; a, b) + F(a, b)u = G + b.
+Therefore, by (2.4), the duality transformation given by U(u; a, b) is
+\frac{1}{C}u^{-2}[G(a, b) - U(u; a, b) - F(a, b)u] = \frac{1}{\tilde{C}}v^{-2}[\tilde{G} - V(v; a, b) - \tilde{F}v].  (2.19)
+Here V(v; a, b) is the duality of U(u; a, b). Substituting U(u; a, b), F(a, b), and G(a, b) into the duality transformation (2.19) gives
+V(v; a, b) = \tilde{G} - \tilde{F}v - \frac{\tilde{C}}{C}v^{2-2\sigma}[G - U(v^{\sigma}) - Fv^{\sigma}] = V(v).  (2.20)
+That is, although the correspondence between f(u) and U(u) in the GKdV equation is not unique (the same f(u) corresponds to different U(u)), the choice of U(u) does not influence the duality of the GKdV equation.
+3 Duality family of KdV equation: Example
+We consider a special duality family of the GKdV equation as an example in this section.
The KdV equation and the mKdV equation are members of this duality family. The solutions of all family members in a duality family are related by a duality transformation. In a duality family containing the KdV equation, we can therefore solve all the GKdV equations in the family from the solution of the KdV equation by the duality transformation. In this section, we give the solution of the KdV equation with a power-law nonlinearity from the solution of the KdV equation; the mKdV equation is the power-law-nonlinearity KdV equation with power 2.

Duality family of the KdV equation and the KdV equation with a power-law nonlinearity. The KdV equation (1.1) with z = x - Ct,

\frac{d^3 u}{dz^3} - (C + 6u)\frac{du}{dz} = 0,   (3.1)

has a 1-soliton solution [37]

u(z) = -\frac{C}{2}\,\mathrm{sech}^2\left(\frac{\sqrt{C}}{2} z\right).   (3.2)

The soliton solution is a localized traveling-wave solution. Localization, taking the 1-soliton solution (3.2) as an example, means that u(z) \to 0 when z \to \pm\infty. The integral of motion of the 1-soliton solution (3.2), by (2.7), (2.9), and (3.2), is

F = 0 \quad \text{and} \quad G = 0.   (3.3)

Then the dual equation of the traveling-wave KdV equation given by the duality transformation (2.4) is

\frac{d^3 v}{d\zeta^3} - \left[C + \frac{\tilde{C}}{C}(2 + \sigma)(1 + \sigma)\, v^{\sigma}\right]\frac{dv}{d\zeta} = 0.   (3.4)

Since \sigma can be chosen arbitrarily, (3.4) is not a single equation but forms a duality family. All the GKdV equations labeled by different \sigma in the duality family are dual equations of the KdV equation. By (2.11) and (2.12), we can obtain the solution of Eq. (3.4):

v(\zeta) = \left[-\frac{C}{2}\,\mathrm{sech}^2\left(\frac{\sqrt{C}}{2}\sigma\zeta\right)\right]^{1/\sigma},   (3.5)

where \zeta = x - Ct, a traveling wave with velocity C.
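As an added sanity check (not part of the paper, assuming SymPy is available), the 1-soliton profile (3.2) can be verified symbolically to satisfy the traveling-wave KdV equation (3.1):

```python
import sympy as sp

z, C = sp.symbols('z C', positive=True)

# 1-soliton profile (3.2): u(z) = -(C/2) sech^2(sqrt(C) z / 2)
u = -C/2 * sp.sech(sp.sqrt(C)/2 * z)**2

# left-hand side of the traveling-wave KdV equation (3.1)
lhs = sp.diff(u, z, 3) - (C + 6*u)*sp.diff(u, z)

# rewriting sech/tanh in exponentials lets simplify() cancel everything
assert sp.simplify(lhs.rewrite(sp.exp)) == 0
```

The same check applied after substituting u \to v^{\sigma} reproduces the dual family (3.4) term by term.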
Instead of the traveling-wave variable, rewrite the dual equation (3.4) in terms of (t, x):

\frac{\partial v}{\partial t} + \alpha v^{\sigma}\frac{\partial v}{\partial x} + \frac{\partial^3 v}{\partial x^3} = 0,   (3.6)

where \alpha = -\frac{\tilde{C}}{C}(2 + \sigma)(1 + \sigma). When \sigma is taken as a positive integer, (3.6) is the KdV equation with a power-law nonlinearity, and the solution (3.5) becomes

v(x, t) = \left[-\frac{C}{2}\,\mathrm{sech}^2\left(\frac{\sqrt{C}}{2}\sigma(x - Ct)\right)\right]^{1/\sigma},   (3.7)

or, equivalently,

v(x, t) = \left[\frac{\tilde{C}(2+\sigma)(1+\sigma)}{2\alpha\cosh^2\left(\frac{\sqrt{C}}{2}\sigma(x - Ct)\right)}\right]^{1/\sigma},

which agrees with Ref. [38]. In this duality family, the family member \sigma = 1 is the KdV equation (1.1), and the family member \sigma = 2 is the mKdV equation

\frac{\partial v}{\partial t} - 12\frac{\tilde{C}}{C}\, v^2 \frac{\partial v}{\partial x} + \frac{\partial^3 v}{\partial x^3} = 0.   (3.8)

Equation (3.7) with \sigma = 2 gives the 1-soliton solution of the mKdV equation (3.8):

v(x, t) = \pm\sqrt{-\frac{C}{2}}\,\mathrm{sech}\left(\sqrt{C}(x - Ct)\right).   (3.9)

Now, by the duality relation, we have obtained all family members' solutions from the KdV equation's solution.

Periodic solution-soliton solution duality. A duality exists between the periodic solution and the soliton solution of the GKdV equation. We take the periodic solution of the KdV equation and the soliton solution of the mKdV equation as an example. The KdV equation (1.1) has a periodic solution

u(x, t) = \frac{C}{6}\left[1 + 3\tan^2\left(\frac{\sqrt{C}}{2}(x - Ct)\right)\right].   (3.10)

The KdV equation (1.1) with z = x - Ct becomes (3.1), and its solution (3.10) becomes

u(z) = \frac{C}{6}\left[1 + 3\tan^2\left(\frac{\sqrt{C}}{2} z\right)\right]   (3.11)

with period \frac{2\pi}{\sqrt{C}}. The integral of motion of the periodic solution (3.10) of the KdV equation, by (2.7), (2.9), and (3.10), is

F = 0, \quad G = -\frac{C^3}{54}.   (3.12)

The dual equation of the traveling-wave KdV equation given by the duality transformation (2.4) is then

\frac{d^3 v}{d\zeta^3} + \left[C - \frac{1}{27}(1 - \sigma)(1 - 2\sigma)\,\tilde{C} C^2 v^{-2\sigma} + \frac{\tilde{C}}{C}(\sigma + 1)(\sigma + 2)\, v^{\sigma}\right]\frac{dv}{d\zeta} = 0,   (3.13)

where \zeta = x + Ct. The duality transformations (2.11) and (2.12) give the solution of (3.13):

v(\zeta) = \left[\frac{C}{6}\left(1 - 3\tanh^2\left(\frac{\sqrt{C}}{2}\sigma\zeta\right)\right)\right]^{1/\sigma}.   (3.14)

Running \sigma over all possible values gives all the equations and their solutions in the duality family.
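Analogously to the soliton case, the periodic profile (3.11) can be checked symbolically against the traveling-wave KdV equation (3.1); this SymPy sketch is an added illustration, not part of the paper:

```python
import sympy as sp

z, C = sp.symbols('z C', positive=True)

# periodic profile (3.11): u(z) = (C/6)(1 + 3 tan^2(sqrt(C) z / 2))
u = C/6 * (1 + 3*sp.tan(sp.sqrt(C)/2 * z)**2)

# left-hand side of the traveling-wave KdV equation (3.1)
lhs = sp.diff(u, z, 3) - (C + 6*u)*sp.diff(u, z)

# every derivative is a polynomial in tan(...), so the cancellation is algebraic
assert sp.simplify(lhs) == 0
```

Here the left-hand side reduces to tan(√C z/2)(1 + tan²)(2 + 3 tan²) times the vanishing coefficient 4C(√C/2)³ - C²(√C/2), which is why the identity holds for every C > 0.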
The family member with \sigma = 1 and \tilde{C} = -C in this duality family is the KdV equation (1.1). Different from the 1-soliton case (3.4), however, the family member \sigma = -1 is the traveling-wave mKdV equation

\frac{d^3 v}{d\zeta^3} + C\left(1 - \frac{2}{9}\tilde{C} C\, v^2\right)\frac{dv}{d\zeta} = 0,   (3.15)

or, with \zeta = x + Ct and \tilde{C} = \frac{27}{C^2},

\frac{\partial v}{\partial t} - 6v^2\frac{\partial v}{\partial x} + \frac{\partial^3 v}{\partial x^3} = 0,   (3.16)

which, by (3.14), has the traveling-wave solution

v(x, t) = \frac{2\sqrt{\tilde{C}}}{\sqrt{3}\left[1 - 3\tanh^2\left(\frac{\sqrt{C}}{2}(x + Ct)\right)\right]}.   (3.17)

It can be directly verified that v(x, t) \to -\frac{\sqrt{3\tilde{C}}}{3} when x, t \to \pm\infty, so (3.17) is a soliton solution of the mKdV equation (3.16). In this example, the duality of the periodic solution is a soliton solution.

Indirect approach to solving equations. The existence of the duality family gives us an indirect approach to solving equations. When solving an equation, we can (1) find its duality family; (2) look for and solve an "easy" family member; and (3) obtain the solution of the original equation by the duality transformation.

4 Conclusion

This paper reveals a duality among the GKdV equations; all the GKdV equations that are dual to each other form a duality family. In a duality family, the solutions of different family members are related by the duality transformation.
In a duality family, we only need to solve one family member; the duality transformation then gives the solutions of all other family members. This allows us to develop an indirect approach to solving the GKdV equation. In this paper, as an example, we discuss the GKdV duality family containing the KdV equation and the KdV equation with a power-law nonlinearity: we seek the 1-soliton solution of the KdV equation with a power-law nonlinearity from a 1-soliton solution of the KdV equation by the duality relation. In another example, we consider the periodic solution-soliton solution duality. By the duality transformation, we give a soliton solution of the mKdV equation from a periodic solution of the KdV equation.

Acknowledgments

We are very indebted to Dr G. Zeitrauman for his encouragement.
This work is supported in part by the Special Funds for Theoretical Physics Research Program of the NSFC under Grant No. 11947124, and by the NSFC under Grant Nos. 11575125 and 11675119.

References

[1] M. J. Ablowitz and P. A. Clarkson, Solitons, Nonlinear Evolution Equations and Inverse Scattering, vol. 149. Cambridge University Press, 1991.

[2] D. J. Korteweg and G. de Vries, On the change of form of long waves advancing in a rectangular canal, and on a new type of long stationary waves, Philos. Mag. 39 (1895) 422-443.

[3] D. H. Peregrine, Calculations of the development of an undular bore, Journal of Fluid Mechanics 25 (1966), no. 2, 321-330.

[4] S. B. G. Karakoc and K. K. Ali, New exact solutions and numerical approximations of the generalized KdV equation.

[5] A. Silem, H. Wu, and D.-J. Zhang, Nonisospectral effects on generating localized waves, Communications in Theoretical Physics 73 (2021), no. 11, 115002.

[6] N. J. Zabusky, A synergetic approach to problems of nonlinear dispersive wave propagation and interaction, in Nonlinear Partial Differential Equations, pp. 223-258. Elsevier, 1967.

[7] L. van Wijngaarden, On the equations of motion for mixtures of liquid and gas bubbles, Journal of Fluid Mechanics 33 (1968), no. 3, 465-474.

[8] K. Konno and Y. H. Ichikawa, A modified Korteweg-de Vries equation for ion acoustic waves, Journal of the Physical Society of Japan 37 (1974), no. 6, 1631-1636.

[9] F. Haas, L. Garcia, J. Goedert, and G. Manfredi, Quantum ion-acoustic waves, Physics of Plasmas 10 (2003), no. 10, 3858-3866.

[10] H. Schamel, A modified Korteweg-de Vries equation for ion acoustic waves due to resonant electrons, Journal of Plasma Physics 9 (1973), no. 3, 377-387.

[11] L. D. Faddeev and V. E. Korepin, Quantum theory of solitons, Physics Reports 42 (1978), no. 1, 1-87.

[12] A. Korkmaz, Numerical algorithms for solutions of Korteweg-de Vries equation, Numerical Methods for Partial Differential Equations 26 (2010), no. 6, 1504-1521.

[13] G. L. Lamb Jr, Elements of Soliton Theory. New York, 1980.

[14] A. Biswas, 1-soliton solution of the K(m, n) equation with generalized evolution, Physics Letters A 372 (2008), no. 25, 4601-4602.

[15] M. Wang, Y. Zhou, and Z. Li, Application of a homogeneous balance method to exact solutions of nonlinear equations in mathematical physics, Physics Letters A 216 (1996), no. 1-5, 67-75.

[16] N. Kudryashov, Exact soliton solutions of the generalized evolution equation of wave dynamics, Journal of Applied Mathematics and Mechanics 52 (1988), no. 3, 361-365.

[17] I. Dorfman, Dirac Structures and Integrability of Nonlinear Evolution Equations, vol. 18. Wiley, 1993.

[18] P. G. Drazin and R. S. Johnson, Solitons: An Introduction, vol. 2. Cambridge University Press, 1989.

[19] M. M. Melo, Generalized solutions to the gKdV equation, Electronic Journal of Differential Equations 2010 (2010).

[20] A.-M. Wazwaz, New sets of solitary wave solutions to the KdV, mKdV, and the generalized KdV equations, Communications in Nonlinear Science and Numerical Simulation 13 (2008), no. 2, 331-339.

[21] D.-J. Zhang, S.-L. Zhao, Y.-Y. Sun, and J. Zhou, Solutions to the modified Korteweg-de Vries equation, Reviews in Mathematical Physics 26 (2014), no. 07, 1430006.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' [22] R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' Miura, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' Gardner, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' Kruskal, Korteweg-de vries equation and generalizations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' ii.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' existence of conservation laws and constants of motion, Journal of Mathematical physics 9 (1968), no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' 8 1204–1209.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' [23] D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content='-j.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' Zhang, Wronskian solutions of integrable systems, in Nonlinear Systems and Their Remarkable Mathematical Structures, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' 415–444.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' Chapman and Hall/CRC, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' [24] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content='-l.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' Zhao and D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content='-j.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' Zhang, Rational solutions to q3δ in the adler-bobenko-suris list and degenerations, Journal of nonlinear mathematical physics 26 (2019), no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' 1 107–132.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' [25] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' Wadati, Wave propagation in nonlinear lattice.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' i, Journal of the Physical Society of Japan 38 (1975), no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' 3 673–680.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' [26] V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' Narayanamurti and C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' Varma, Nonlinear propagation of heat pulses in solids, Physical Review Letters 25 (1970), no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' 16 1105.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' [27] F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' Tappert and C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' Varma, Asymptotic theory of self-trapping of heat pulses in solids, Physical Review Letters 25 (1970), no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' 16 1108.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' [28] S.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' Chandrasekhar, Newton’s Principia for the common reader.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' Oxford University Press, 2003.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' [29] V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' Arnold, Huygens and Barrow, Newton and Hooke: pioneers in mathematical analysis and catastrophe theory from evolvents to quasicrystals.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' Springer Science & Business Media, 1990.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' [30] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' Needham, Visual complex analysis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' Oxford University Press, 1998.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' [31] V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' I.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' Arnol’d, Mathematical methods of classical mechanics, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' 60.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' Springer Science & Business Media, 2013.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' [32] W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content='-D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' Li and W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content='-S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' Dai, Duality family of scalar field, Nuclear Physics B 972 (2021) 115569.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' – 9 – [33] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content='-L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' Li, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content='-J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' Chen, Y.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content='-Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' Liu, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content='-D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' Li, and W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content='-S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' Dai, Solving eigenproblem by duality transform, Annals of Physics 443 (2022) 168962.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' [34] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content='-J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' Chen, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content='-L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' Li, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content='-D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' Li, and W.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content='-S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' Dai, An indirect approach for quantum-mechanical eigenproblems: duality transforms, Communications in Theoretical Physics 74 (2022), no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' 5 055103.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' [35] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content='-Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' Liu, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content='-D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' Li, and W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content='-S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' Dai, Exactly solvable gross–pitaevskii type equations, Journal of Physics Communications 5 (2021), no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' 1 015011.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' [36] W.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content='-D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' Li and W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content='-S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' Dai, Long-range potential scattering: Converting long-range potential to short-range potential by tortoise coordinate, Journal of Mathematical Physics 62 (2021), no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' 12 122102.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' [37] G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' Griffiths and W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' Schiesser, Traveling wave analysis of partial differential equations: numerical and analytical methods with MATLAB and Maple.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' Academic Press, 2010.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' [38] M.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' Hayek, Constructing of exact solutions to the kdv and burgers equations with power-law nonlinearity by the extended g’ g-expansion method, Applied Mathematics and Computation 217 (2010), no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' 1 212–221.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} +page_content=' 10' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2dAyT4oBgHgl3EQfPvZl/content/2301.00030v1.pdf'} diff --git a/2dE4T4oBgHgl3EQfagyV/content/2301.05065v1.pdf b/2dE4T4oBgHgl3EQfagyV/content/2301.05065v1.pdf new file mode 100644 index 0000000000000000000000000000000000000000..40076352c756fe36054b3676a19c8940c4d16792 --- /dev/null +++ b/2dE4T4oBgHgl3EQfagyV/content/2301.05065v1.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:960a223ca27981df8a19ccdf9b189d850f7ecb2ab1719646f5dd6ad3fbc64ed8 +size 453802 diff --git a/2dE4T4oBgHgl3EQfagyV/vector_store/index.pkl b/2dE4T4oBgHgl3EQfagyV/vector_store/index.pkl new file mode 100644 index 0000000000000000000000000000000000000000..ecb3d8fd865e681e219045d7bc8de29e04a0b4d3 --- /dev/null +++ b/2dE4T4oBgHgl3EQfagyV/vector_store/index.pkl @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b45014918474daef645b8f4512cb0538c28c920dfd825899b940839041fd2110 +size 329099 diff --git a/49AzT4oBgHgl3EQfEPqD/vector_store/index.pkl b/49AzT4oBgHgl3EQfEPqD/vector_store/index.pkl new file mode 100644 index 0000000000000000000000000000000000000000..0e78fdc23829757cbc84bd9e7b8f8249dd0b6b10 --- /dev/null +++ b/49AzT4oBgHgl3EQfEPqD/vector_store/index.pkl @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8816543420058486898207f7b244926364593e6fa7c94db41292f193b227c8fe 
arXiv:2301.01925v1 [math.NT] 5 Jan 2023

SELBERG'S CENTRAL LIMIT THEOREM OF L-FUNCTIONS NEAR THE CRITICAL LINE

YOONBOK LEE

Abstract. We find an asymptotic expansion of a multi-dimensional version of Selberg's central limit theorem for L-functions on $\sigma = \frac{1}{2} + (\log T)^{-\theta}$ and $t \in [T, 2T]$, where $0 < \theta < \frac{1}{2}$ is a constant.

Date: January 6, 2023.
2010 Mathematics Subject Classification. 11M41.
Key words and phrases. Central limit theorem, joint distribution of L-functions.

1. Introduction

Selberg's central limit theorem says that the function
\[
\frac{\log \zeta(\sigma + it)}{\sqrt{\pi \sum_{p} p^{-2\sigma}}}, \qquad t \in [T, 2T],
\]
is asymptotically distributed like a two-dimensional Gaussian with density $e^{-\pi(u^2+v^2)}$ as $T \to \infty$, where $\sigma = \sigma_T := \frac{1}{2} + \frac{1}{(\log T)^{\theta}}$ and $0 < \theta < \frac{1}{2}$ throughout the paper. See [8, Theorem 6.1] for a proof and [6] for a simple proof for the real part. It also holds for other L-functions; see [7, Theorem 2] for a general statement.

When $\sigma = \sigma_T$ and $T \le t \le 2T$, we have more precise estimates for the distribution of $\log \zeta(\sigma + it)$ in [2] and [5], as follows.

Theorem 1.1. [5, Theorem 1.2 and Lemma 2.3] Let $0 < \theta < \frac{1}{2}$, $a < b$ and $c < d$ be real numbers. There exist constants $\epsilon, \kappa > 0$ and a sequence $\{d_{k,\ell}\}_{k,\ell \ge 0}$ of real numbers such that
\[
\frac{1}{T} \operatorname{meas}\Big\{ t \in [T, 2T] : \frac{\log \zeta(\sigma_T + it)}{\sqrt{\pi \psi_T}} \in [a, b] \times [c, d] \Big\}
= \sum_{k+\ell \le \epsilon \psi_T} \frac{d_{k,\ell}}{\sqrt{\psi_T}^{\,k+\ell}} \int_a^b e^{-\pi u^2} H_k(\sqrt{\pi}\, u)\, du \int_c^d e^{-\pi v^2} H_\ell(\sqrt{\pi}\, v)\, dv + O\Big( \frac{1}{(\log T)^{\kappa}} \Big)
\tag{1.1}
\]
as $T \to \infty$, where meas denotes the Lebesgue measure on $\mathbb{R}$,
\[
\psi_T := \sum_p \sum_{k \ge 1} \frac{1}{k^2 p^{2k\sigma_T}},
\]
and $H_n(x)$ is the $n$-th Hermite polynomial defined by
\[
H_n(x) := (-1)^n e^{x^2} \frac{d^n}{dx^n}\big(e^{-x^2}\big).
\tag{1.2}
\]
Moreover, $d_{0,0} = 1$, $d_{k,\ell} = 0$ for $k + \ell = 1, 2$, and $d_{k,\ell} = O(\delta_0^{-k-\ell})$ for some $\delta_0 > 0$ and all $k, \ell$.
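The Hermite polynomials in (1.1)-(1.2) are the physicists' family, which obeys the three-term recurrence $H_{n+1}(x) = 2x H_n(x) - 2n H_{n-1}(x)$; the recurrence is a convenient, stable way to evaluate the $H_k$ appearing in (1.1) numerically. A minimal sketch (the function name is our own):

```python
def hermite(n, x):
    """Physicists' Hermite polynomial H_n(x), evaluated via the recurrence
    H_{n+1}(x) = 2*x*H_n(x) - 2*n*H_{n-1}(x), equivalent to definition (1.2)."""
    h_prev, h_curr = 1.0, 2.0 * x  # H_0 and H_1
    if n == 0:
        return h_prev
    if n == 1:
        return h_curr
    for k in range(1, n):
        h_prev, h_curr = h_curr, 2.0 * x * h_curr - 2.0 * k * h_prev
    return h_curr

print(hermite(2, 1.5), hermite(3, 1.5))  # 7.0 9.0
```

Since $H_2(x) = 4x^2 - 2$ and $H_3(x) = 8x^3 - 12x$, the printed values are $7.0$ and $9.0$.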
The leading term of the expansion in (1.1) is
\[
\int_a^b e^{-\pi u^2}\, du \int_c^d e^{-\pi v^2}\, dv,
\]
which is Gaussian, and the lower-order terms may be evaluated using
\[
\int_a^b e^{-\pi u^2} H_k(\sqrt{\pi}\, u)\, du = -\frac{1}{\sqrt{\pi}} \Big( e^{-\pi b^2} H_{k-1}(\sqrt{\pi}\, b) - e^{-\pi a^2} H_{k-1}(\sqrt{\pi}\, a) \Big)
\]
for $k \ge 1$. Note that the sequence $\{d_{k,\ell}\}$ is defined by the generating series (2.19) in [5], and that $\psi_T = \theta \log\log T + O(1)$ by the prime number theorem. It might be interesting to compare the asymptotic expansion in (1.1) with an Edgeworth expansion in probability theory; see [1, Chapter 7] for more information.

In this paper, we generalize Theorem 1.1 to a multivariate setting for L-functions $L_1, \ldots, L_J$ satisfying the following assumptions.

A1 (Euler product): For $j = 1, \ldots, J$ and $\operatorname{Re}(s) > 1$ we have
\[
L_j(s) = \prod_p \prod_{i=1}^d \Big( 1 - \frac{\alpha_{j,i}(p)}{p^s} \Big)^{-1},
\]
where $|\alpha_{j,i}(p)| \le p^{\eta}$ for some fixed $0 \le \eta < \frac{1}{2}$ and for every $i = 1, \ldots, d$.

A2 (Functional equation): The functions $L_1, L_2, \ldots, L_J$ satisfy the same functional equation
\[
\Lambda_j(s) = \omega \overline{\Lambda_j(1 - \bar{s})},
\]
where
\[
\Lambda_j(s) := L_j(s)\, Q^s \prod_{\ell=1}^{k} \Gamma(\lambda_\ell s + \mu_\ell),
\]
$|\omega| = 1$, $Q > 0$, $\lambda_\ell > 0$ and $\mu_\ell \in \mathbb{C}$ with $\operatorname{Re}(\mu_\ell) \ge 0$.

A3 (Ramanujan hypothesis on average):
\[
\sum_{p \le x} \sum_{i=1}^d |\alpha_{j,i}(p)|^2 = O\big(x^{1+\epsilon}\big)
\]
holds for every $\epsilon > 0$ and for every $j = 1, \ldots, J$ as $x \to \infty$.

A4 (Zero density hypothesis): Let $N_f(\sigma, T)$ be the number of zeros of $f(s)$ in $\operatorname{Re}(s) \ge \sigma$ and $0 \le \operatorname{Im}(s) \le T$. Then there exist positive constants $\kappa_1, \kappa_2$ such that for every $j = 1, \ldots, J$ and all $\sigma \ge \frac{1}{2}$ we have
\[
N_{L_j}(\sigma, T) \ll T^{1 - \kappa_1 (\sigma - \frac{1}{2})} (\log T)^{\kappa_2}.
\]

A5 (Selberg orthogonality conjecture): By assumption A1 we can write
\[
\log L_j(s) = \sum_p \sum_{k=1}^{\infty} \frac{\beta_{L_j}(p^k)}{p^{ks}}.
\]
Then for all $1 \le j, k \le J$, there exist constants $\xi_j > 0$ and $c_{j,k}$ such that
\[
\sum_{p \le x} \frac{\beta_{L_j}(p)\, \overline{\beta_{L_k}(p)}}{p} = \delta_{j,k}\, \xi_j \log\log x + c_{j,k} + O\Big( \frac{1}{\log x} \Big),
\]
where $\delta_{j,k} = 1$ if $j = k$ and $\delta_{j,k} = 0$ if $j \ne k$.

The assumptions A1-A5 are standard and expected to hold for all L-functions arising from automorphic representations on GL(n).
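In the simplest case $J = 1$ with $L_1 = \zeta$ (so $d = 1$, $\alpha_{1,1}(p) = 1$, hence $\beta_{\zeta}(p) = 1$ and $\xi_1 = 1$), assumption A5 reduces to a classical fact, Mertens' second theorem: $\sum_{p \le x} 1/p = \log\log x + M + O(1/\log x)$, where $M \approx 0.26149$ is the Meissel-Mertens constant. A quick numerical sketch (the sieve bound $10^5$ is an arbitrary choice):

```python
import math

def primes_upto(n):
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i in range(2, n + 1) if sieve[i]]

x = 10 ** 5
prime_sum = sum(1.0 / p for p in primes_upto(x))
# Mertens' second theorem: sum_{p <= x} 1/p = log log x + M + O(1/log x),
# with M = 0.26149... the Meissel-Mertens constant
mertens_approx = math.log(math.log(x)) + 0.2614972128
print(prime_sum, mertens_approx)
```

At $x = 10^5$ the two sides agree well within the $O(1/\log x)$ error term.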
In particular, they are verified for GL(1) and GL(2) L-functions, that is, for the Riemann zeta function, Dirichlet L-functions, and L-functions attached to holomorphic Hecke cusp forms or Maass cusp forms. Assumption A4 is weaker than the Riemann hypothesis, but it is strong enough to provide a short Dirichlet approximation to each $\log L_j(\sigma_T + it)$ for almost all $t \in [T, 2T]$; see [4, Lemma 4.2] for a proof. Assumption A5 ensures the statistical independence of the $\log L_j(\sigma_T + it)$ for $j = 1, \ldots, J$.

Assuming assumptions A1-A5 for $L_1, \ldots, L_J$, we want to find an asymptotic expansion for
\[
\frac{1}{T} \operatorname{meas}\Big\{ t \in [T, 2T] : \frac{\log L_j(\sigma_T + it)}{\sqrt{\pi \psi_{j,T}}} \in [a_j, b_j] \times [c_j, d_j] \text{ for all } j = 1, \ldots, J \Big\},
\tag{1.3}
\]
where
\[
\psi_{j,T} := \xi_j\, \theta \log\log T
\tag{1.4}
\]
with the constants $\xi_j$ of assumption A5, and $a_j, b_j, c_j, d_j$ are real numbers for all $j = 1, \ldots, J$. Let
\[
L(s) := \big( \log|L_1(s)|, \ldots, \log|L_J(s)|, \arg L_1(s), \ldots, \arg L_J(s) \big)
\]
and
\[
R_T := \prod_{j=1}^{J} \big[ a_j \sqrt{\pi \psi_{j,T}},\, b_j \sqrt{\pi \psi_{j,T}} \big] \times \prod_{j=1}^{J} \big[ c_j \sqrt{\pi \psi_{j,T}},\, d_j \sqrt{\pi \psi_{j,T}} \big];
\]
then (1.3) equals
\[
\Phi_T(R_T) := \frac{1}{T} \operatorname{meas}\{ t \in [T, 2T] : L(\sigma_T + it) \in R_T \}.
\]

Theorem 1.2. Let $0 < \theta < \frac{1}{2}$. Assume assumptions A1-A5 for $L_1, \ldots, L_J$. Then there exist constants $\epsilon, \kappa > 0$ and a sequence $\{b_{k,l}\}$ of real numbers such that
\[
\Phi_T(R_T) = \sum_{K(k+l) \le \epsilon \log\log T} b_{k,l} \prod_{j=1}^{J} \frac{1}{\sqrt{\psi_{j,T}}^{\,k_j+\ell_j}} \times \prod_{j=1}^{J} \Big( \int_{a_j}^{b_j} e^{-\pi u^2} H_{k_j}(\sqrt{\pi}\, u)\, du \int_{c_j}^{d_j} e^{-\pi v^2} H_{\ell_j}(\sqrt{\pi}\, v)\, dv \Big) + O\Big( \frac{1}{(\log T)^{\kappa}} \Big),
\tag{1.5}
\]
where $k = (k_1, \ldots, k_J)$ and $l = (\ell_1, \ldots, \ell_J)$ are vectors in $(\mathbb{Z}_{\ge 0})^J$ and $K(k) := k_1 + \cdots + k_J$. Moreover, $b_{0,0} = 1$, $b_{k,l} = 0$ if $K(k+l) = 1$, and $b_{k,l} = O(\delta_0^{-K(k+l)})$ for some $\delta_0 > 0$ and all $k, l$.

Theorem 1.2 will be proved at the beginning of Section 2. It is essentially the same as Theorem 2.1 in [3], although our expansion is longer; moreover, since the paper [3] contains only a sketched proof, our complete proof should be useful.
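The main term of (1.5) with $k_j = \ell_j = 0$ is a product of Gaussian masses $\int_{a_j}^{b_j} e^{-\pi u^2}\,du$, which have a closed form via the error function, since $\frac{d}{du}\,\frac{1}{2}\operatorname{erf}(\sqrt{\pi}\,u) = e^{-\pi u^2}$. A minimal sketch (the function name is our own):

```python
import math

def gaussian_mass(a, b):
    """Integral of exp(-pi*u^2) over [a, b], using
    d/du [erf(sqrt(pi)*u)/2] = exp(-pi*u^2)."""
    s = math.sqrt(math.pi)
    return 0.5 * (math.erf(s * b) - math.erf(s * a))

print(gaussian_mass(-10.0, 10.0))  # total mass: 1.0 to machine precision
```

Note that the density $e^{-\pi u^2}$ integrates to $1$ over $\mathbb{R}$, which is why the expansion has no extra normalizing constant in front of the main term.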
Unlike the $d_{k,\ell}$ in Theorem 1.1, the $b_{k,l}$ in Theorem 1.2 may be nonzero for $K(k+l) = 2$. One reason is that $\psi_T$ in Theorem 1.1 and $\psi_{j,T}$ in Theorem 1.2 differ by a bounded amount, even though they are asymptotically the same. Moreover, when $J > 1$, there are additional terms coming essentially from the constants $c_{j,k}$ in assumption A5.

Since the leading term in (1.5) is Gaussian and the other nonvanishing terms are $O\big(\frac{1}{\log\log T}\big)$, we obtain the following corollary.

Corollary 1.3. Let $0 < \theta < \frac{1}{2}$. Assume assumptions A1-A5 for $L_1, \ldots, L_J$. Then we have
\[
\Phi_T(R_T) = \prod_{j=1}^{J} \Big( \int_{a_j}^{b_j} e^{-\pi u^2}\, du \int_{c_j}^{d_j} e^{-\pi v^2}\, dv \Big) + O\Big( \frac{1}{\log\log T} \Big).
\]

We will prove the theorems and propositions in Section 2 and the lemmas in Section 3. We conclude the introduction with a summary of notation:
• $\sigma_T = \sigma_T(\theta) = \frac{1}{2} + \frac{1}{(\log T)^{\theta}}$ and $0 < \theta < \frac{1}{2}$.
• $k = (k_1, \ldots, k_J)$ and $l = (\ell_1, \ldots, \ell_J)$ are vectors in $(\mathbb{Z}_{\ge 0})^J$.
• $u = (u_1, \ldots, u_J)$, $v = (v_1, \ldots, v_J)$, $x = (x_1, \ldots, x_J)$ and $y = (y_1, \ldots, y_J)$ are vectors in $\mathbb{R}^J$.
• $z = (z_1, \ldots, z_J) = x + iy$ and $\bar{z} = (\bar{z}_1, \ldots, \bar{z}_J) = x - iy$ are vectors in $\mathbb{C}^J$.
• $k! := k_1! \cdots k_J!$ and $K(k) := k_1 + \cdots + k_J$.
• $x^k := x_1^{k_1} \cdots x_J^{k_J}$.
• $x \cdot u = \sum_{j=1}^{J} x_j u_j$ and $\|z\| = \sqrt{\sum_{j=1}^{J} |z_j|^2} = \sqrt{\sum_{j=1}^{J} (x_j^2 + y_j^2)}$.

2. Estimates on the random model

We define the random vector
\[
L(\sigma, X) = \big( \log|L_1(\sigma, X)|, \ldots, \log|L_J(\sigma, X)|, \arg L_1(\sigma, X), \ldots, \arg L_J(\sigma, X) \big)
\]
for $\sigma > \frac{1}{2}$, where each $L_j(\sigma, X)$ is defined by the product
\[
L_j(\sigma, X) = \prod_p \prod_{i=1}^{d} \Big( 1 - \frac{\alpha_{j,i}(p) X(p)}{p^{\sigma}} \Big)^{-1}
\tag{2.1}
\]
and $\{X(p)\}_p$ is a sequence of independent random variables, indexed by the prime numbers and uniformly distributed on the unit circle $\{z \in \mathbb{C} : |z| = 1\}$. The product converges almost surely for $\sigma > \frac{1}{2}$ by Kolmogorov's three-series theorem.

Define a probability measure
\[
\Phi_T^{\mathrm{rand}}(B) := \mathbb{P}\big( L(\sigma_T, X) \in B \big)
\tag{2.2}
\]
for Borel sets $B$ in $\mathbb{R}^{2J}$.
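To see the random model (2.1)-(2.2) concretely, take $J = d = 1$ and $\alpha_{1,1}(p) = 1$ (the zeta case) and sample $\log|\zeta(\sigma, X)|$ from a truncated Euler product. Per prime, writing $r = p^{-\sigma}$, one has $\operatorname{Var}\big({-\log}|1 - re^{i\theta}|\big) = \sum_{k \ge 1} r^{2k}/(2k^2)$, so the variance of the truncated $\log|\zeta(\sigma, X)|$ equals half the corresponding partial sum of $\psi_T$. A Monte Carlo sketch (the truncation $p \le 500$, the choice $\sigma = 0.6$, and the seed are arbitrary):

```python
import math
import random

def primes_upto(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i in range(2, n + 1) if sieve[i]]

def sample_log_abs(rs, rng):
    # log|zeta(sigma, X)| = -sum_p log|1 - X(p) p^(-sigma)| with X(p) = e^(i*theta)
    # uniform on the circle; |1 - r e^(i*theta)|^2 = 1 - 2 r cos(theta) + r^2
    total = 0.0
    for r in rs:
        theta = rng.uniform(0.0, 2.0 * math.pi)
        total -= 0.5 * math.log(1.0 - 2.0 * r * math.cos(theta) + r * r)
    return total

sigma = 0.6
ps = primes_upto(500)
rs = [p ** -sigma for p in ps]
rng = random.Random(12345)
samples = [sample_log_abs(rs, rng) for _ in range(10000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
# truncated psi: sum over p <= 500 of sum_k 1/(k^2 p^(2 k sigma)); the exact
# variance of the truncated log|zeta(sigma, X)| is psi/2
psi = sum(1.0 / (k * k * p ** (2 * k * sigma)) for p in ps for k in (1, 2, 3, 4))
print(var, psi / 2.0)
```

The sample mean is near $0$ (each factor has $\mathbb{E}\log|1 - re^{i\theta}| = 0$ for $r < 1$), and the sample variance matches $\psi/2$ to within Monte Carlo error.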
By [4, Theorem 2.3] we have
\[
\Phi_T(R_T) = \Phi_T^{\mathrm{rand}}(R_T) + O\big( (\log T)^{(\theta-1)/2} \log\log T \big)
\]
for $0 < \theta < \frac{1}{2}$. This means that the distribution of $L(\sigma_T + it)$ is well approximated by the distribution of its random model $L(\sigma_T, X)$ when $0 < \theta < \frac{1}{2}$. Thus, Theorem 1.2 is an immediate consequence of the following theorem.

Theorem 2.1. Let $0 < \theta < \frac{1}{2}$. Assume assumptions A1-A5 for $L_1, \ldots, L_J$. Then there exist constants $\epsilon, \kappa > 0$ and a sequence $\{b_{k,l}\}$ of real numbers such that
\[
\Phi_T^{\mathrm{rand}}(R_T) = \sum_{K(k+l) \le \epsilon \log\log T} b_{k,l} \prod_{j=1}^{J} \frac{1}{\sqrt{\psi_{j,T}}^{\,k_j+\ell_j}} \times \prod_{j=1}^{J} \Big( \int_{a_j}^{b_j} e^{-\pi u^2} H_{k_j}(\sqrt{\pi}\, u)\, du \int_{c_j}^{d_j} e^{-\pi v^2} H_{\ell_j}(\sqrt{\pi}\, v)\, dv \Big) + O\Big( \frac{1}{(\log T)^{\kappa}} \Big).
\]
Moreover, $b_{0,0} = 1$, $b_{k,l} = 0$ if $K(k+l) = 1$, and $b_{k,l} = O(\delta_0^{-K(k+l)})$ for some $\delta_0 > 0$ and all $k, l$.

In [4, Section 7] we find that the measure $\Phi_T^{\mathrm{rand}}$ is absolutely continuous and has a density function $H_T(u, v)$ such that
\[
\Phi_T^{\mathrm{rand}}(R_T) = \iint_{R_T} H_T(u, v)\, du\, dv.
\tag{2.3}
\]
Hence, Theorem 2.1 follows from (2.3) and the following proposition, which upgrades [4, Lemma 7.4].

Proposition 2.2. Let $0 < \theta < \frac{1}{2}$. Assume assumptions A1-A5 for $L_1, \ldots, L_J$. There exist constants $\epsilon, \kappa > 0$ and a sequence $\{b_{k,l}\}$ of real numbers such that
\[
H_T(u, v) = \sum_{K(k+l) \le \epsilon \log\log T} b_{k,l} \prod_{j=1}^{J} \frac{1}{\pi \sqrt{\psi_{j,T}}^{\,k_j+\ell_j+2}}\, e^{-\frac{u_j^2 + v_j^2}{\psi_{j,T}}}\, H_{k_j}\Big( \frac{u_j}{\sqrt{\psi_{j,T}}} \Big) H_{\ell_j}\Big( \frac{v_j}{\sqrt{\psi_{j,T}}} \Big) + O\Big( \frac{1}{(\log T)^{\kappa}} \Big).
\]
Moreover, $b_{0,0} = 1$, $b_{k,l} = 0$ if $K(k+l) = 1$, and $b_{k,l} = O(\delta_0^{-K(k+l)})$ for some $\delta_0 > 0$ and all $k, l$.

To prove Proposition 2.2, we need to understand the Fourier transform
\[
\widehat{\Phi}_T^{\mathrm{rand}}(x, y) := \int_{\mathbb{R}^{2J}} e^{2\pi i (x \cdot u + y \cdot v)}\, d\Phi_T^{\mathrm{rand}}(u, v)
\]
for $x, y \in \mathbb{R}^J$. By the definition of $\Phi_T^{\mathrm{rand}}$ in (2.2), we have
\[
\widehat{\Phi}_T^{\mathrm{rand}}(x, y) = \mathbb{E}\Big[ \exp\Big( 2\pi i \sum_{j=1}^{J} \big( x_j \log|L_j(\sigma_T, X)| + y_j \arg L_j(\sigma_T, X) \big) \Big) \Big].
\]
By assumptions A1 and A5 we see that
\[
\beta_{L_j}(p^k) = \frac{1}{k} \sum_{i=1}^{d} \alpha_{j,i}(p)^k.
\tag{2.4}
\]
By (2.4) and (2.1) we have
\[
\log L_j(\sigma, X) = \sum_p \sum_{k=1}^{\infty} \frac{\beta_{L_j}(p^k)\, X(p)^k}{p^{k\sigma}}.
\]
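Identity (2.4) is just the logarithm of the Euler product in A1, expanded with $-\log(1-w) = \sum_{k \ge 1} w^k/k$ (valid here since $|\alpha_{j,i}(p) p^{-s}| \le p^{\eta - 1} < 1$ in the region of absolute convergence):

```latex
\log L_j(s)
  = -\sum_{p}\sum_{i=1}^{d}\log\Bigl(1-\frac{\alpha_{j,i}(p)}{p^{s}}\Bigr)
  = \sum_{p}\sum_{i=1}^{d}\sum_{k=1}^{\infty}\frac{\alpha_{j,i}(p)^{k}}{k\,p^{ks}}
  = \sum_{p}\sum_{k=1}^{\infty}\frac{1}{p^{ks}}\cdot\frac{1}{k}\sum_{i=1}^{d}\alpha_{j,i}(p)^{k}.
```

Comparing coefficients with the Dirichlet series of $\log L_j$ in assumption A5 gives $\beta_{L_j}(p^k) = \frac{1}{k}\sum_{i=1}^{d} \alpha_{j,i}(p)^k$, and replacing $p^{-s}$ by $X(p)\, p^{-\sigma}$ termwise yields the series for $\log L_j(\sigma, X)$ displayed above.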
Define
\[
g_{j,p}(\sigma) := \sum_{k=1}^{\infty} \frac{\beta_{L_j}(p^k)\, X(p)^k}{p^{k\sigma}};
\tag{2.5}
\]
then we have
\[
\widehat{\Phi}_T^{\mathrm{rand}}(x, y) = \prod_p \varphi_{p,\sigma_T}(x, y),
\tag{2.6}
\]
where
\[
\varphi_{p,\sigma}(x, y) := \mathbb{E}\Big[ \exp\Big( 2\pi i \sum_{j=1}^{J} \big( x_j \operatorname{Re} g_{j,p}(\sigma) + y_j \operatorname{Im} g_{j,p}(\sigma) \big) \Big) \Big]
\]
for each prime $p$. Let $z = (z_1, \ldots, z_J) = x + iy$; then we find that
\[
\varphi_{p,\sigma}(x, y) = \mathbb{E}\Big[ \prod_{j=1}^{J} e^{\pi i z_j g_{j,p}(\sigma)}\, e^{\pi i \bar{z}_j \overline{g_{j,p}(\sigma)}} \Big].
\]
By expanding the $2J$ exponential functions into power series we obtain
\[
\varphi_{p,\sigma}(x, y) = \sum_{k, l \in (\mathbb{Z}_{\ge 0})^J} \frac{(\pi i)^{K(k+l)}\, z^k \bar{z}^l}{k!\, l!}\, \mathbb{E}\Big[ \prod_{j=1}^{J} g_{j,p}(\sigma)^{k_j}\, \overline{g_{j,p}(\sigma)}^{\,\ell_j} \Big],
\]
with the vector notation from the end of Section 1. It is easy to see that the expectation
\[
A_{p,\sigma}(k, l) := \mathbb{E}\Big[ \prod_{j=1}^{J} g_{j,p}(\sigma)^{k_j}\, \overline{g_{j,p}(\sigma)}^{\,\ell_j} \Big]
\tag{2.7}
\]
satisfies $A_{p,\sigma}(0, 0) = 1$ and $A_{p,\sigma}(0, k) = A_{p,\sigma}(k, 0) = 0$ for $k \ne 0$. Thus, we obtain
\[
\varphi_{p,\sigma}(x, y) = 1 + R_{p,\sigma}(z),
\tag{2.8}
\]
where
\[
R_{p,\sigma}(z) := \sum_{k \ne 0} \sum_{l \ne 0} \frac{(\pi i)^{K(k+l)}\, z^k \bar{z}^l}{k!\, l!}\, A_{p,\sigma}(k, l).
\tag{2.9}
\]
Hence, by (2.6) and (2.8) we have
\[
\widehat{\Phi}_T^{\mathrm{rand}}(x, y) = \prod_p \big( 1 + R_{p,\sigma_T}(z) \big).
\tag{2.10}
\]
To compute the product in (2.10), we need the following lemma.

Lemma 2.3. There exists a constant $\delta_1 > 0$ such that
\[
|R_{p,\sigma_T}(z)| \le \tfrac{1}{2}
\]
for every prime $p$ and all $\|z\| \le \delta_1$.

See Section 3.1 for a proof. By Lemma 2.3 we have
\[
\widehat{\Phi}_T^{\mathrm{rand}}(x, y) = \exp\Big( \sum_p \log\big( 1 + R_{p,\sigma_T}(z) \big) \Big) = \exp\Big( \sum_p \sum_{m=1}^{\infty} \frac{(-1)^{m-1}}{m} R_{p,\sigma_T}(z)^m \Big)
\tag{2.11}
\]
for $\|z\| \le \delta_1$. By (2.9) the sum $\sum_p \sum_{m \ge 1} \frac{(-1)^{m-1}}{m} R_{p,\sigma}(z)^m$ has a power series representation in $z_1, \ldots, z_J, \bar{z}_1, \ldots, \bar{z}_J$, so let $B_{\sigma}(k, l)$ be the coefficients such that
\[
\sum_{k \ne 0} \sum_{l \ne 0} B_{\sigma}(k, l)\, z^k \bar{z}^l = \sum_p \sum_{m=1}^{\infty} \frac{(-1)^{m-1}}{m} R_{p,\sigma}(z)^m.
\tag{2.12}
\]
Define $I_{n,\sigma}(z)$ for each $n \ge 2$ as the sum of the degree-$n$ terms in the above sum, i.e.,
\[
I_{n,\sigma}(z) := \sum_{\substack{k, l \ne 0 \\ K(k+l) = n}} B_{\sigma}(k, l)\, z^k \bar{z}^l.
\tag{2.13}
\]
We see that $I_{n,\sigma}(z)$ is a homogeneous polynomial in $x_1, \ldots, x_J, y_1, \ldots, y_J$ of degree $n$, and that
\[
\widehat{\Phi}_T^{\mathrm{rand}}(x, y) = \exp\Big( \sum_{n=2}^{\infty} I_{n,\sigma_T}(z) \Big)
\tag{2.14}
\]
for $\|z\| \le \delta_1$ by (2.11)-(2.13).
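The vanishing $A_{p,\sigma}(k, 0) = A_{p,\sigma}(0, k) = 0$ in (2.7) comes from the moments of $X(p)$: for $X$ uniform on the unit circle, $\mathbb{E}[X^m] = 0$ for every integer $m \ge 1$ (and $= 1$ for $m = 0$), and each $g_{j,p}$ is a series in positive powers of $X(p)$. A numerical sketch, averaging over equispaced points so that the expectation becomes a roots-of-unity sum:

```python
import cmath

def circle_moment(m, n=360):
    """E[X^m] for X uniform on the unit circle, approximated by the average
    over n equispaced points; this is a roots-of-unity sum, hence zero
    (up to rounding) whenever 0 < m < n, and exactly 1 when m = 0."""
    return sum(cmath.exp(2j * cmath.pi * m * k / n) for k in range(n)) / n

print(abs(circle_moment(0)), abs(circle_moment(1)), abs(circle_moment(5)))
```

The first value is $1.0$; the others sit at rounding-error level.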
We find an asymptotic formula for $I_{2,\sigma_T}(z)$ as $T \to \infty$ in the following lemma.

Lemma 2.4. There are complex numbers $C_{j_1,j_2}$ such that
(2.15) $$I_{2,\sigma_T}(z) = -\pi^2 \sum_{j=1}^J \psi_{j,T}\, |z_j|^2 + \sum_{j_1,j_2=1}^J C_{j_1,j_2}\, z_{j_1} \overline{z_{j_2}} + O\left( \frac{\log\log T}{(\log T)^\theta} \right)$$
for $\|z\| \le \delta_1$, where $\psi_{j,T}$ is defined in (1.4) and $C_{j_1,j_2} = \overline{C_{j_2,j_1}}$. For $n \ge 3$, there is a constant $C = C_{J,d,\eta} > 0$ such that
$$|I_{n,\sigma}(z)| \le C^n \|z\|^n$$
for $\sigma \ge \frac12$ and
$$|I_{n,\sigma_T}(z) - I_{n,1/2}(z)| \le \frac{C^n \|z\|^n}{(\log T)^\theta}.$$

See Section 3.2 for a proof. Define
(2.16) $$Q_T(z) := -\pi^2 \sum_{j=1}^J \psi_{j,T}\, |z_j|^2,$$
(2.17) $$I_2(z) := \sum_{j_1,j_2=1}^J C_{j_1,j_2}\, z_{j_1} \overline{z_{j_2}},$$
and
(2.18) $$I_n(z) := I_{n,1/2}(z)$$
for $n > 2$. By (2.17) and the Cauchy–Schwarz inequality we obtain
$$|I_2(z)| \le J \big( \max_{j_1,j_2} |C_{j_1,j_2}| \big) \|z\|^2.$$
By this inequality, (2.18) and Lemma 2.4 we have
(2.19) $$|I_n(z)| \le 2^{-n}$$
for $n \ge 2$ and $\|z\| \le \delta_2$, where
(2.20) $$\delta_2 := \min\left\{ \delta_1,\ \frac{1}{2C},\ \frac{1}{2\sqrt{J \max_{j_1,j_2} |C_{j_1,j_2}|}} \right\}.$$

It follows from (2.14), Lemma 2.4 and (2.16)–(2.19) that
(2.21) $$\widehat{\Phi}_T^{\mathrm{rand}}(x, y) = \exp\left( Q_T(z) + \sum_{n=2}^\infty I_n(z) + O\left( \frac{\log\log T}{(\log T)^\theta} \right) \right) = e^{Q_T(z)} \left( \sum_{r=0}^\infty \frac{1}{r!} \left( \sum_{n=2}^\infty I_n(z) \right)^{\!r} + O\left( \frac{\log\log T}{(\log T)^\theta} \right) \right)$$
for $\|z\| \le \delta_2$. Note that each $I_n(z)$ is a homogeneous polynomial in $x_1, \ldots, x_J, y_1, \ldots, y_J$ of degree $n$ and does not depend on $T$. Since the sum $\sum_{r=0}^\infty \frac{1}{r!} \big( \sum_{n=2}^\infty I_n(z) \big)^r$ is a power series in $x$ and $y$, we let $\{b_{k,l}\}$ be a sequence of complex numbers such that
(2.22) $$G(x, y) := \sum_{k,l} (2\pi i)^{K(k+l)}\, b_{k,l}\, x^k y^l = \sum_{r=0}^\infty \frac{1}{r!} \left( \sum_{n=2}^\infty I_n(z) \right)^{\!r}.$$
Then the $b_{k,l}$ satisfy the following properties.

Lemma 2.5. Let $\delta_3$ be a constant satisfying $0 < \delta_3 < \frac{\pi}{\sqrt{J}}\, \delta_2$; then $b_{k,l}$ is a real number and
(2.23) $$|b_{k,l}| \le \frac{\sqrt{e}}{\delta_3^{K(k+l)}}$$
for every $k, l$. In particular, $b_{0,0} = 1$ and $b_{k,l} = 0$ if $K(k+l) = 1$.

See Section 3.3 for a proof. The infinite sum over $k, l$ in (2.22) can be approximated by its partial sum. We shall prove a quantitative version. Let $\epsilon > 0$.
By (2.22) and (2.19) we have
$$\left| \sum_{K(k+l) > \epsilon \log\log T} (2\pi i)^{K(k+l)}\, b_{k,l}\, x^k y^l \right| \le \sum_{r=1}^\infty \frac{1}{r!} \sum_{\substack{n_1, \ldots, n_r \ge 2 \\ n_1 + \cdots + n_r > \epsilon \log\log T}} \left( \frac12 \right)^{\!n_1 + \cdots + n_r} \le \sum_{r=1}^\infty \frac{1}{r!} \sum_{m > \epsilon \log\log T} \frac{1}{2^m} \sum_{\substack{n_1, \ldots, n_r \ge 2 \\ n_1 + \cdots + n_r = m}} 1$$
for $\|z\| \le \delta_2$. We substitute $n_j$ by $n_j' + 2$ for $j = 1, \ldots, r$ in the last sum; then the last sum equals the number of nonnegative integers $n_1', \ldots, n_r'$ such that $n_1' + \cdots + n_r' = m - 2r$, which is $\binom{m-r-1}{r-1}$. Thus, the above sum is
$$\le \sum_{r=1}^\infty \frac{1}{r!} \sum_{m > \epsilon \log\log T} \frac{1}{2^m} \binom{m-r-1}{r-1} \le \sum_{r=1}^\infty \frac{1}{r!} \sum_{m > \epsilon \log\log T} \frac{1}{2^m} \frac{m^{r-1}}{(r-1)!} \le \sum_{m > \epsilon \log\log T} \frac{1}{2^m} \sum_{n=0}^\infty \frac{m^n}{(n!)^2} \le \sum_{m > \epsilon \log\log T} \frac{1}{2^m} \left( \sum_{n=0}^\infty \frac{\sqrt{m}^{\,n}}{n!} \right)^{\!2} = \sum_{m > \epsilon \log\log T} \frac{e^{2\sqrt{m}}}{2^m} \le \sum_{m > \epsilon \log\log T} \left( \frac23 \right)^{\!m} \le 3 \left( \frac23 \right)^{\!\epsilon \log\log T} \ll \frac{1}{(\log T)^\kappa}$$
with a constant $\kappa \le \epsilon \log\frac32$. From these estimates, (2.21), (2.22) and Lemma 2.5 we obtain the following proposition.

Proposition 2.6. Let $\delta_2$ be the constant defined in (2.20). Let $\kappa$ and $\epsilon$ be constants such that $0 < \kappa < \theta$ and $\kappa \le \epsilon \log\frac32$. Let $\{b_{k,l}\}$ be the sequence of real numbers defined by its generating series (2.22). Then
$$\widehat{\Phi}_T^{\mathrm{rand}}(x, y) = e^{Q_T(z)} \left( \sum_{K(k+l) \le \epsilon \log\log T} (2\pi i)^{K(k+l)}\, b_{k,l}\, x^k y^l + O\left( \frac{1}{(\log T)^\kappa} \right) \right)$$
holds for $\|z\| \le \delta_2$.

We are ready to prove Proposition 2.2. The density function $H_T(u, v)$ of the measure $\Phi_T^{\mathrm{rand}}$ is the inverse Fourier transform of $\widehat{\Phi}_T^{\mathrm{rand}}$, so that
$$H_T(u, v) = \int_{\mathbb{R}^J} \int_{\mathbb{R}^J} \widehat{\Phi}_T^{\mathrm{rand}}(x, y)\, e^{-2\pi i (x \cdot u + y \cdot v)}\, dx\, dy.$$
Let $\delta_4$ be a constant such that $0 < \delta_4 \le \min\{\delta_2, \frac{\delta_3}{4\pi}\}$. By Lemma 7.1 and (7.14) in [4] we find that
$$H_T(u, v) = \iint_{\|z\| \le \delta_4} \widehat{\Phi}_T^{\mathrm{rand}}(x, y)\, e^{-2\pi i (x \cdot u + y \cdot v)}\, dx\, dy + O\left( \frac{1}{(\log T)^\kappa} \right)$$
for some $\kappa > 0$. See the proof of [4, Lemma 7.4] for details.

By Proposition 2.6 we have
$$H_T(u, v) = \sum_{K(k+l) \le \epsilon \log\log T} (2\pi i)^{K(k+l)}\, b_{k,l} \iint_{\|z\| \le \delta_4} e^{Q_T(z) - 2\pi i (x \cdot u + y \cdot v)}\, x^k y^l\, dx\, dy + O\left( \frac{1}{(\log T)^\kappa} \right)$$
for some $\epsilon, \kappa > 0$.
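The counting step above — that the number of tuples $(n_1, \ldots, n_r)$ with each $n_i \ge 2$ and $n_1 + \cdots + n_r = m$ equals $\binom{m-r-1}{r-1}$ — can be sanity-checked by brute force. The following snippet is a verification sketch added here, not part of the paper:

```python
from itertools import product
from math import comb

def count_compositions(m, r):
    """Brute-force count of tuples (n_1, ..., n_r) with each n_i >= 2 summing to m."""
    return sum(1 for t in product(range(2, m + 1), repeat=r) if sum(t) == m)

# Substituting n_i = n_i' + 2 reduces this to compositions of m - 2r into
# r nonnegative parts, counted by the binomial coefficient C(m - r - 1, r - 1).
for m in range(2, 12):
    for r in range(1, m // 2 + 1):
        assert count_compositions(m, r) == comb(m - r - 1, r - 1)
print("composition count verified")
```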
Let $\xi_{\min} = \min_{j \le J} \xi_j > 0$; then we have
$$\left| \iint_{\|z\| \ge \delta_4} e^{Q_T(z) - 2\pi i (x \cdot u + y \cdot v)}\, x^k y^l\, dx\, dy \right| \le \iint_{\|z\| \ge \delta_4} e^{-\pi^2 \xi_{\min} \theta \log\log T\, \|z\|^2}\, \|z\|^{K(k+l)}\, dx\, dy \ll \int_{\delta_4}^\infty e^{-(\pi^2 \xi_{\min} \theta \log\log T)\, r^2}\, r^{K(k+l)+2J-1}\, dr \ll \frac{1}{(\pi^2 \xi_{\min} \theta \log\log T)^{\frac{K(k+l)}{2}+J}} \int_{\pi \delta_4 \sqrt{\xi_{\min} \theta \log\log T}}^\infty e^{-r^2}\, r^{K(k+l)+2J-1}\, dr$$
by the change of variables to polar coordinates. By the Cauchy–Schwarz inequality we have
$$\int_X^\infty e^{-r^2} r^M\, dr \le \left( \int_X^\infty e^{-r^2} r\, dr \int_0^\infty e^{-r^2} r^{2M-1}\, dr \right)^{\!1/2} = \frac{\sqrt{(M-1)!}}{2}\, e^{-\frac12 X^2}.$$
Hence, it follows from Lemma 2.5 and the above estimations that
$$H_T(u, v) = \sum_{K(k+l) \le \epsilon \log\log T} (2\pi i)^{K(k+l)}\, b_{k,l} \int_{\mathbb{R}^J} \int_{\mathbb{R}^J} e^{Q_T(z) - 2\pi i (x \cdot u + y \cdot v)}\, x^k y^l\, dx\, dy$$
$$+ O\left( \frac{1}{(\log T)^{\frac12 \pi^2 \delta_4^2 \xi_{\min} \theta}} \sum_{K(k+l) \le \epsilon \log\log T} \left( \frac{2\pi}{\delta_3} \right)^{\!K(k+l)} \frac{\sqrt{(K(k+l)+2J-2)!}}{(\pi^2 \xi_{\min} \theta \log\log T)^{\frac{K(k+l)}{2}+J}} \right) + O\left( \frac{1}{(\log T)^\kappa} \right).$$
By Stirling's formula the $k, l$-sum in the above $O$-term is
$$\ll \sum_{K(k+l) \le \epsilon \log\log T} \left( \frac{2\pi}{\delta_3} \right)^{\!K(k+l)} \frac{1}{(\pi^2 \xi_{\min} \theta \log\log T)^{\frac{K(k+l)}{2}+J}} \left( \frac{2\epsilon \log\log T}{e} \right)^{\!\frac{K(k+l)}{2}+J-\frac34} \ll \sum_{k,l} \left( \frac{2\sqrt{2\epsilon}}{\delta_3 \sqrt{\xi_{\min} \theta e}} \right)^{\!K(k+l)} \le \sum_{k,l} \left( \frac12 \right)^{\!K(k+l)} = 2^{2J},$$
provided that $0 < \epsilon \le \frac{1}{32} \delta_3^2 \xi_{\min} \theta e$. With this choice of $\epsilon$, we have
$$H_T(u, v) = \sum_{K(k+l) \le \epsilon \log\log T} (2\pi i)^{K(k+l)}\, b_{k,l} \int_{\mathbb{R}^J} \int_{\mathbb{R}^J} e^{Q_T(z) - 2\pi i (x \cdot u + y \cdot v)}\, x^k y^l\, dx\, dy + O\left( \frac{1}{(\log T)^\kappa} \right)$$
for some $\kappa > 0$.

It remains to calculate the above integral. We first write it as repeated integrals:
$$\int_{\mathbb{R}^J} \int_{\mathbb{R}^J} e^{Q_T(z) - 2\pi i (x \cdot u + y \cdot v)}\, x^k y^l\, dx\, dy = \prod_{j=1}^J \int_{\mathbb{R}} \int_{\mathbb{R}} e^{-\psi_{j,T} \pi^2 (x_j^2 + y_j^2) - 2\pi i (x_j u_j + y_j v_j)}\, x_j^{k_j} y_j^{\ell_j}\, dx_j\, dy_j = \prod_{j=1}^J \int_{\mathbb{R}} e^{-\psi_{j,T} \pi^2 x_j^2 - 2\pi i x_j u_j}\, x_j^{k_j}\, dx_j \int_{\mathbb{R}} e^{-\psi_{j,T} \pi^2 y_j^2 - 2\pi i y_j v_j}\, y_j^{\ell_j}\, dy_j.$$
Each integral can be written in terms of the Hermite polynomials defined in (1.2).
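The Gaussian tail bound obtained above from the Cauchy–Schwarz inequality, $\int_X^\infty e^{-r^2} r^M\,dr \le \frac{\sqrt{(M-1)!}}{2} e^{-X^2/2}$, is easy to check numerically. This is a sanity-check sketch added here (the quadrature range and step are ad hoc), not part of the paper:

```python
import math

def tail_integral(X, M, upper=10.0, n=20000):
    """Midpoint-rule approximation of the tail integral of exp(-r^2) * r^M from X."""
    h = (upper - X) / n
    return h * sum(
        math.exp(-(X + (i + 0.5) * h) ** 2) * (X + (i + 0.5) * h) ** M
        for i in range(n)
    )

def cs_bound(X, M):
    """Cauchy-Schwarz bound sqrt((M-1)!)/2 * exp(-X^2/2) from the proof."""
    return math.sqrt(math.factorial(M - 1)) / 2 * math.exp(-X ** 2 / 2)

for M in (1, 2, 4, 6):
    for X in (0.5, 1.0, 2.0):
        assert tail_integral(X, M) <= cs_bound(X, M)
print("tail bound verified")
```

For $M = 1$ the integral is exactly $\tfrac12 e^{-X^2}$, which makes the comparison with the bound $\tfrac12 e^{-X^2/2}$ transparent.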
Since
$$\int_{\mathbb{R}} e^{-\psi \pi^2 x^2 - 2\pi i x u}\, x^k\, dx = \frac{1}{(-2\pi i)^k} \frac{d^k}{du^k} \int_{\mathbb{R}} e^{-\psi \pi^2 x^2 - 2\pi i x u}\, dx = \frac{1}{(-2\pi i)^k} \frac{d^k}{du^k} \frac{1}{\sqrt{\pi \psi}}\, e^{-\frac{u^2}{\psi}} = \frac{1}{(2\pi i)^k \sqrt{\pi}\, \sqrt{\psi}^{\,k+1}}\, e^{-\frac{u^2}{\psi}} H_k\!\left( \frac{u}{\sqrt{\psi}} \right),$$
we have
$$\int_{\mathbb{R}^J} \int_{\mathbb{R}^J} e^{Q_T(z) - 2\pi i (x \cdot u + y \cdot v)}\, x^k y^l\, dx\, dy = \prod_{j=1}^J \frac{1}{\pi (2\pi i)^{k_j + \ell_j} \sqrt{\psi_{j,T}}^{\,k_j + \ell_j + 2}}\, e^{-\frac{u_j^2 + v_j^2}{\psi_{j,T}}} H_{k_j}\!\left( \frac{u_j}{\sqrt{\psi_{j,T}}} \right) H_{\ell_j}\!\left( \frac{v_j}{\sqrt{\psi_{j,T}}} \right).$$
Thus, we have
$$H_T(u, v) = \sum_{K(k+l) \le \epsilon \log\log T} b_{k,l} \prod_{j=1}^J \frac{1}{\pi \sqrt{\psi_{j,T}}^{\,k_j + \ell_j + 2}}\, e^{-\frac{u_j^2 + v_j^2}{\psi_{j,T}}} H_{k_j}\!\left( \frac{u_j}{\sqrt{\psi_{j,T}}} \right) H_{\ell_j}\!\left( \frac{v_j}{\sqrt{\psi_{j,T}}} \right) + O\left( \frac{1}{(\log T)^\kappa} \right)$$
for some $\epsilon, \kappa > 0$. This completes the proof of Proposition 2.2.

3. Proofs of lemmas

We prove Lemma 2.3 in Section 3.1, Lemma 2.4 in Section 3.2 and Lemma 2.5 in Section 3.3. In the proofs we need the inequalities
(3.1) $$|\beta_{L_j}(p^k)| \le \frac{d}{k}\, p^{k\eta} \quad \text{for } k \ge 1,$$
(3.2) $$|\beta_{L_j}(p^k)| \le \frac{1}{k} \sum_{i=1}^d |\alpha_{j,i}(p)|^k \le \frac{p^{(k-2)\eta}}{k} \sum_{i=1}^d |\alpha_{j,i}(p)|^2 \quad \text{for } k \ge 2,$$
and
(3.3) $$|\beta_{L_j}(p)|^2 \le \left( \sum_{i=1}^d |\alpha_{j,i}(p)| \right)^{\!2} \le d \sum_{i=1}^d |\alpha_{j,i}(p)|^2,$$
which follow from (2.4) and assumption A1.

3.1. Proof of Lemma 2.3. By (2.5) and (3.1) there is a constant $C_1 := C_{1,d,\eta} > 0$ such that
(3.4) $$|g_{j,p}(\sigma_T)| \le \sum_{k=1}^\infty \frac{d}{k} \frac{p^{k\eta}}{p^{k/2}} \le \frac{C_1}{p^{\frac12 - \eta}}$$
for every prime $p$ and $j = 1, \ldots, J$. By (2.7), (2.9) and (3.4) we obtain
$$|R_{p,\sigma_T}(z)| \le \sum_{k \ne 0} \sum_{l \ne 0} \frac{1}{k!\, l!} \left( \frac{\pi \|z\|\, C_1}{p^{\frac12 - \eta}} \right)^{\!K(k+l)} = \left( \exp\left( \frac{J \pi C_1 \|z\|}{p^{\frac12 - \eta}} \right) - 1 \right)^{\!2}.$$
Thus, there exists a constant $C_2 := C_{2,d,J,\eta} > 0$ such that
$$|R_{p,\sigma_T}(z)| \le \frac{C_2}{p^{1 - 2\eta}} \|z\|^2 \le \frac{C_2}{2^{1 - 2\eta}} \|z\|^2$$
for $\|z\| \le 1$ and every prime $p$. Therefore, there exists a constant $\delta_1 > 0$ such that
$$|R_{p,\sigma_T}(z)| \le \frac12$$
for $\|z\| \le \delta_1$ and every prime $p$.

3.2. Proof of Lemma 2.4. We first find a useful expression:
(3.5) $$I_{n,\sigma}(z) = (\pi i)^n \sum_{1 \le m \le n/2} \frac{(-1)^{m-1}}{m} \sum_{\substack{k_1, \ldots, k_m, l_1, \ldots, l_m \ne 0 \\ K(k_1 + \cdots + k_m + l_1 + \cdots + l_m) = n}} \frac{z^{k_1 + \cdots + k_m}\, \overline{z}^{\,l_1 + \cdots + l_m}}{k_1! \cdots k_m!\, l_1! \cdots l_m!} \times \sum_p A_{p,\sigma}(k_1, l_1) \cdots A_{p,\sigma}(k_m, l_m)$$
by (2.9), (2.12) and (2.13).
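The Fourier–Hermite identity just used can be checked numerically for small $k$. The snippet below is a sanity-check sketch added here (the quadrature range, step, and test values are ad hoc), not part of the paper:

```python
import cmath
import math

def hermite(k, x):
    """Physicists' Hermite polynomial H_k(x) via H_{n+1} = 2x H_n - 2n H_{n-1}."""
    h_prev, h = 1.0, 2.0 * x
    if k == 0:
        return h_prev
    for n in range(1, k):
        h_prev, h = h, 2.0 * x * h - 2.0 * n * h_prev
    return h

def lhs(k, u, psi, R=6.0, n=40000):
    """Midpoint-rule approximation of the integral of exp(-psi*pi^2*x^2 - 2*pi*i*x*u) * x^k."""
    h = 2.0 * R / n
    total = 0j
    for i in range(n):
        x = -R + (i + 0.5) * h
        total += cmath.exp(-psi * math.pi ** 2 * x * x - 2j * math.pi * x * u) * x ** k
    return total * h

def rhs(k, u, psi):
    """Closed form: e^{-u^2/psi} H_k(u/sqrt(psi)) / ((2*pi*i)^k * sqrt(pi) * sqrt(psi)^{k+1})."""
    return (cmath.exp(-u * u / psi) * hermite(k, u / math.sqrt(psi))
            / ((2j * math.pi) ** k * math.sqrt(math.pi) * math.sqrt(psi) ** (k + 1)))

for k in (0, 1, 2, 3):
    for u, psi in ((0.3, 1.0), (1.2, 2.5)):
        assert abs(lhs(k, u, psi) - rhs(k, u, psi)) < 1e-5
print("Hermite integral identity verified")
```

For $k = 0$ this is the classical Fourier transform of a Gaussian; the higher $k$ follow from it by repeated differentiation in $u$, exactly as in the displayed derivation.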
Here, the sum over $m$ is restricted to $1 \le m \le n/2$ because
$$n = K(k_1 + \cdots + k_m + l_1 + \cdots + l_m) \ge 2m$$
for $k_1, \ldots, k_m, l_1, \ldots, l_m \ne 0$.

The asymptotic formula (2.15) for $I_{2,\sigma_T}(z)$ is already known; see (7.16) of [4, Lemma 7.3]. We next prove
(3.6) $$C_{j_1,j_2} = \overline{C_{j_2,j_1}}.$$
We have
(3.7) $$A_{p,\sigma}(k, l) = \overline{A_{p,\sigma}(l, k)}$$
by (2.7). By (3.5) we also have
(3.8) $$\overline{I_{2,\sigma}(z)} = I_{2,\sigma}(z).$$
So we obtain (3.6) by (2.15) and (3.8).

For the case $n > 2$, we observe that $A_{p,\sigma}(k, l)$ for real $\sigma$ can be extended to an analytic function in a complex variable $s$ via
(3.9) $$A_{p,s}(k, l) = \mathbb{E}\left[ \prod_{j=1}^J \left( \sum_{k=1}^\infty \frac{\beta_{L_j}(p^k)\, X(p)^k}{p^{ks}} \right)^{\!k_j} \left( \sum_{k=1}^\infty \frac{\overline{\beta_{L_j}(p^k)}\, \overline{X(p)}^{\,k}}{p^{ks}} \right)^{\!\ell_j} \right].$$
This observation essentially leads us to the following lemma.

Lemma 3.1. Let $\eta$ be the constant in assumption A1 and assume $K(k_1 + \cdots + k_m + l_1 + \cdots + l_m) = n \ge 3$. The Dirichlet series
$$f(s) := \sum_p A_{p,s}(k_1, l_1) \cdots A_{p,s}(k_m, l_m)$$
is absolutely convergent for $\operatorname{Re}(s) \ge \frac{5 + 2\eta}{12}$. Moreover, there exists a constant $C_3 = C_{3,J,d,\eta} > 0$ such that
$$|f(s)| \le C_3^n \quad \text{for } \operatorname{Re}(s) \ge \frac{5 + 2\eta}{12}$$
and
$$|f(\sigma_T) - f(\tfrac12)| \le \frac{C_3^n}{(\log T)^\theta}.$$

Proof. We first show that there is a constant $C_4 > 0$ such that $|f(s)| \le C_4^n$ for $\operatorname{Re}(s) \ge \frac{5 + 2\eta}{12}$. By (3.9) we find that
$$|A_{p,s}(k, l)| \le \left( \sum_{k=1}^\infty \frac{\max_{j \le J} |\beta_{L_j}(p^k)|}{p^{k \operatorname{Re}(s)}} \right)^{\!K(k+l)}.$$
Thus, we have
(3.10) $$|f(s)| \le \sum_p \left( \sum_{k=1}^\infty \frac{\max_{j \le J} |\beta_{L_j}(p^k)|}{p^{k \operatorname{Re}(s)}} \right)^{\!n} \le 2^n \sum_p \left( \frac{\max_{j \le J} |\beta_{L_j}(p)|}{p^{\operatorname{Re}(s)}} \right)^{\!n} + 2^n \sum_p \left( \sum_{k=2}^\infty \frac{\max_{j \le J} |\beta_{L_j}(p^k)|}{p^{k \operatorname{Re}(s)}} \right)^{\!n}.$$
The first sum on the right-hand side of (3.10) is
$$\sum_p \frac{\big( \max_{j \le J} |\beta_{L_j}(p)| \big)^n}{p^{n \operatorname{Re}(s)}} \le \sum_p \frac{(d p^\eta)^{n-2} \big( \max_{j \le J} d \sum_{i=1}^d |\alpha_{j,i}(p)|^2 \big)}{p^{n \operatorname{Re}(s)}} \le d^{n-1} \sum_p \frac{\sum_{j=1}^J \sum_{i=1}^d |\alpha_{j,i}(p)|^2}{p^{1 + \varepsilon}} \le C_5^n$$
for $\operatorname{Re}(s) \ge \frac{5 + 2\eta}{12}$ by (3.1) and (3.3), where $\varepsilon = \frac14 - \frac{\eta}{2} > 0$ and
$$C_5 := \max\left\{ d,\ \sum_p \frac{\sum_{j=1}^J \sum_{i=1}^d |\alpha_{j,i}(p)|^2}{p^{1 + \varepsilon}} \right\}.$$
Note that the last $p$-sum is convergent by assumption A3 and a partial summation.
The second sum on the right-hand side of (3.10) is
$$\sum_p \left( \sum_{k=2}^\infty \frac{\max_{j \le J} |\beta_{L_j}(p^k)|}{p^{k \operatorname{Re}(s)}} \right)^{\!n} \le \sum_p \left( \sum_{k=2}^\infty \frac{\max_{j \le J} \sum_{i=1}^d |\alpha_{j,i}(p)|^2}{k\, p^{k \operatorname{Re}(s) - (k-2)\eta}} \right)^{\!n} \le \sum_p \left( \frac{\max_{j \le J} \sum_{i=1}^d |\alpha_{j,i}(p)|^2}{p^{2 \operatorname{Re}(s)}} \cdot \frac12 \cdot \frac{1}{1 - \frac{1}{p^{\operatorname{Re}(s) - \eta}}} \right)^{\!n}$$
$$\le \left( \frac12 \cdot \frac{1}{1 - \frac{1}{2^{\frac{5}{12}(1 - 2\eta)}}} \right)^{\!n} \sum_p \frac{(d p^{2\eta})^{n-1}\, \max_{j \le J} \sum_{i=1}^d |\alpha_{j,i}(p)|^2}{p^{2n \operatorname{Re}(s)}} \le \left( \frac12 \cdot \frac{1}{1 - \frac{1}{2^{\frac{5}{12}(1 - 2\eta)}}} \right)^{\!n} d^{n-1} \sum_p \frac{\sum_{j=1}^J \sum_{i=1}^d |\alpha_{j,i}(p)|^2}{p^{1 + 6\varepsilon}} \le C_6^n$$
for $\operatorname{Re}(s) \ge \frac{5 + 2\eta}{12}$ by (3.2), where
$$C_6 := \frac12 \cdot \frac{1}{1 - \frac{1}{2^{\frac{5}{12}(1 - 2\eta)}}} \max\left\{ d,\ \sum_p \frac{\sum_{j=1}^J \sum_{i=1}^d |\alpha_{j,i}(p)|^2}{p^{1 + 6\varepsilon}} \right\}.$$
We choose $C_4 = 2(C_5 + C_6)$; then we have
(3.11) $$|f(s)| \le C_4^n$$
for $\operatorname{Re}(s) \ge \frac{5 + 2\eta}{12}$. One can easily see from the above estimates that $f(s)$ is absolutely convergent for $\operatorname{Re}(s) \ge \frac{5 + 2\eta}{12}$.

Let $\varepsilon_1 = \frac12 - \frac{5 + 2\eta}{12} > 0$. Since
$$f(\sigma_T) - f(\tfrac12) = \int_{1/2}^{\sigma_T} f'(u)\, du = \int_{1/2}^{\sigma_T} \frac{1}{2\pi i} \oint_{|z - u| = \varepsilon_1} \frac{f(z)}{(z - u)^2}\, dz\, du,$$
we obtain
(3.12) $$|f(\sigma_T) - f(\tfrac12)| \le \big( \sigma_T - \tfrac12 \big) \frac{1}{\varepsilon_1} \sup_{\operatorname{Re}(z) \ge \frac12 - \varepsilon_1} |f(z)| \le \frac{C_4^n}{\varepsilon_1 (\log T)^\theta}$$
by (3.11). Let $C_3 = C_4 / \varepsilon_1 > C_4$; then (3.11) and (3.12) imply both inequalities in the lemma. □

Therefore, by Lemma 3.1, (3.5) and Stirling's formula we have
$$|I_{n,\sigma}(z)| \le \|z\|^n (\pi C_3)^n \sum_{m \le n/2} \frac{1}{m} \sum_{K(k_1 + \cdots + k_m + l_1 + \cdots + l_m) = n} \frac{1}{k_1! \cdots k_m!\, l_1! \cdots l_m!} = \|z\|^n (\pi C_3)^n \sum_{m \le n/2} \frac{1}{m} \frac{(2mJ)^n}{n!} \le \|z\|^n (J \pi C_3)^n \frac{n^n}{n!} \le \|z\|^n (J \pi C_3 e)^n$$
for $\sigma \ge \frac{5 + 2\eta}{12}$ and $n > 2$. Similarly, we have
$$|I_{n,\sigma_T}(z) - I_{n,1/2}(z)| \le \frac{\|z\|^n (J \pi C_3 e)^n}{(\log T)^\theta}$$
for $n > 2$. Therefore, Lemma 2.4 holds with the constant
(3.13) $$C = J \pi C_3 e.$$

3.3. Proof of Lemma 2.5. We first consider $G(x, y)$ in (2.22) as a function of the complex variables $x_1, \ldots, x_J, y_1, \ldots, y_J$. We replace $x_j$ by $\frac{x_j}{2\pi i}$ and $y_j$ by $\frac{y_j}{2\pi i}$ for $j = 1, \ldots, J$ in (2.22); then we obtain
(3.14) $$\sum_{k,l} b_{k,l}\, x^k y^l = \sum_{r=0}^\infty \frac{1}{r!} \left( \sum_{n=2}^\infty I_n(z) (2\pi i)^{-n} \right)^{\!r}.$$

Now we consider $x_1, \ldots, x_J, y_1, \ldots, y_J$ as real variables.
By (3.5) and (3.7) we have
$$\overline{I_{n,\sigma}(z) (2\pi i)^{-n}} = I_{n,\sigma}(z) (2\pi i)^{-n},$$
which implies that $I_{n,\sigma}(z) (2\pi i)^{-n}$ is a polynomial in the real variables $x_1, \ldots, x_J, y_1, \ldots, y_J$ with real coefficients. Since $I_n(z) (2\pi i)^{-n}$ is also a homogeneous polynomial in $x_1, \ldots, x_J, y_1, \ldots, y_J$ of degree $n$ with real coefficients, we obtain by comparing coefficients in (3.14) that $b_{k,l} \in \mathbb{R}$, $b_{0,0} = 1$ and $b_{k,l} = 0$ for $K(k+l) = 1$.

It remains to prove the inequality (2.23). Again we consider $G(x, y)$ defined in (2.22) as an analytic function of the complex variables $x_1, \ldots, x_J, y_1, \ldots, y_J$. Assume that
$$\sup\{|x_1|, \ldots, |x_J|, |y_1|, \ldots, |y_J|\} \le \frac{\delta_2}{2\sqrt{J}}.$$
Then we see that
$$|I_2(z)| \le \sum_{j_1,j_2=1}^J |C_{j_1,j_2}|\, \frac{\delta_2^2}{4J} \le \frac{1}{16}$$
by (2.17) and (2.20). For $n \ge 3$ we have
$$|I_n(z)| \le \left( \frac{\delta_2 \pi C_3}{\sqrt{J}} \right)^{\!n} \sum_{m \le n/2} \frac{1}{m} \sum_{K(k_1 + \cdots + k_m + l_1 + \cdots + l_m) = n} \frac{1}{k_1! \cdots k_m!\, l_1! \cdots l_m!} \le (\delta_2 \sqrt{J} \pi C_3 e)^n \le (\delta_2 C)^n \le 2^{-n}$$
by (2.18), (2.20), (3.5), (3.13) and Lemma 3.1. Thus,
$$|G(x, y)| \le \sum_{r=0}^\infty \frac{1}{r!} \left( \sum_{n=2}^\infty |I_n(z)| \right)^{\!r} \le \sum_{r=0}^\infty \frac{1}{r!}\, 2^{-r} = \sqrt{e}.$$
Let $0 < \frac{\delta_3}{2\pi} = \delta_3' < \frac{\delta_2}{2\sqrt{J}}$. Since
$$b_{k,l} = \frac{1}{(2\pi i)^{K(k+l)+2J}} \oint_{|x_1| = \delta_3'} \cdots \oint_{|x_J| = \delta_3'} \oint_{|y_1| = \delta_3'} \cdots \oint_{|y_J| = \delta_3'} \frac{G(x, y)}{x^k y^l}\, \frac{dy_J}{y_J} \cdots \frac{dy_1}{y_1}\, \frac{dx_J}{x_J} \cdots \frac{dx_1}{x_1}$$
by Cauchy's integral formula, we obtain
$$|b_{k,l}| \le \frac{\sqrt{e}}{(2\pi \delta_3')^{K(k+l)}} = \frac{\sqrt{e}}{\delta_3^{K(k+l)}}.$$

Acknowledgements
This work has been supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (No. 2019R1F1A1050795).

References
[1] H. Cramér, Random variables and probability distributions, 3rd edition, Cambridge University Press, 1970.
[2] J. Ha and Y. Lee, The a-values of the Riemann zeta function near the critical line, J. Math. Anal. Appl. 464 (2018), 838–863.
[3] D. Hejhal, On Euler products and multi-variate Gaussians, C. R. Acad. Sci. Paris, Ser. I 337 (2003), 223–226.
[4] Y. Lamzouri and Y. Lee, The number of zeros of linear combinations of L-functions near the critical line, to appear in J. Anal. Math. Preprint available at arXiv:2010.10490.
[5] Y. Lee, An asymptotic expansion of Selberg's central limit theorem near the critical line, J. Number Theory 236 (2022), 323–333.
[6] M. Radziwiłł and K. Soundararajan, Selberg's central limit theorem for log |ζ(1/2 + it)|, Enseign. Math. 63 (2017), 1–19.
[7] A. Selberg, Old and new conjectures and results about a class of Dirichlet series, in: E. Bombieri et al. (eds.), Proceedings of the Amalfi conference on analytic number theory (Maiori, Amalfi, Italy, 25–29 September 1989), Università di Salerno, 367–385 (1992) = Collected Papers, vol. II, 47–63, Springer, 1991.
[8] K. M. Tsang, The distribution of the values of the Riemann zeta-function, Thesis (Ph.D.), Princeton University, ProQuest LLC, Ann Arbor, MI, 1984.

Department of Mathematics, Research Institute of Basic Sciences, Incheon National University, 119 Academy-ro, Yeonsu-gu, Incheon, 22012, Korea
Email address: leeyb@inu.ac.kr, leeyb131@gmail.com
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' We find an asymptotic expansion of a multi-dimensional version of Sel- berg’s central limit theorem for L-functions on σ = 1 2 + (log T )−θ and t ∈ [T, 2T ], where 0 < θ < 1 2 is a constant.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' Introduction Selberg’s central limit theorem says that the function log ζ(σ + it) � π � p 0 throughout the paper.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' See [8, Theorem 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content='1] for a proof and [6] for a simple proof for the real part.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' It also holds for other L-functions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' See [7, Theorem 2] for a general statement.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' When σ = σT and T ≤ t ≤ 2T, we have more precise estimations for the distribution of log ζ(σ + it) in [2] and [5] as follows.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' Theorem 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content='1.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' [5, Theorem 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content='2 and Lemma 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content='3] Let 0 < θ < 1 2, a < b and c < d be real numbers.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' There exist constants ǫ, κ > 0 and a sequence {dk,ℓ}k,ℓ≥0 of real numbers such that (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content='1) 1 T meas{t ∈ [T, 2T] : log ζ(σT + it) √πψT ∈ [a, b] × [c, d]} = � k+ℓ≤ǫψT dk,ℓ √ψT k+ℓ � b a e−πu2Hk(√πu)du � d c e−πv2Hℓ(√πv)dv + O � 1 (log T)κ � as T → ∞, where meas denotes the Lebesgue measure on R, ψT := � p � k≥1 1 k2p2kσT Date: January 6, 2023.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' 2010 Mathematics Subject Classification.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' 11M41.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' Key words and phrases.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' Central limit theorem, joint distribution of L-functions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' 1 2 YOONBOK LEE and Hn(x) is the n-th Hermite polynomial defined by (1.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content='2) Hn(x) := (−1)nex2 dn dxn(e−x2).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' Moreover, d0,0 = 1, dk,ℓ = 0 for k + ℓ = 1, 2 and dk,ℓ = O(δ−k−ℓ 0 ) for some δ0 > 0 and all k, ℓ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' The leading term of the expansion in (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content='1) is � b a e−πu2du � d c e−πv2dv, which is Gaussian, and the lower order terms may be evaluated using � b a e−πu2Hk(√πu)du = −1 √π � e−πb2Hk−1(√πb) − e−πa2Hk−1(√πa) � for k ≥ 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' Note that the sequence {dk,ℓ} is defined by the generating series (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content='19) in [5] and ψT = θ log log T + O(1) by the prime number theorem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' It might be interesting to compare the asymptotic expansion in (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content='1) with an Edgeworth expansion in the probability theory.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' See [1, Chapter 7] for more information.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' In this paper, we generalize Theorem 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content='1 to a multi-variate setting for the L- functions L1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' , LJ satisfying the following assumptions: A1: (Euler product) For j = 1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' , J and Re(s) > 1 we have Lj(s) = � p d � i=1 � 1 − αj,i(p) ps �−1 , where |αj,i(p)| ≤ pη for some fixed 0 ≤ η < 1 2 and for every i = 1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' , d.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' A2: (Functional equation) The functions L1, L2, .' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' , LJ satisfy the same functional equation Λj(s) = ωΛj(1 − ¯s), where Λj(s) := Lj(s)Qs k � ℓ=1 Γ(λℓs + µℓ), |ω| = 1, Q > 0, λℓ > 0 and µℓ ∈ C with Re(µℓ) ≥ 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' A3: (Ramanujan hypothesis on average) � p≤x d � i=1 |αj,i(p)|2 = O(x1+ǫ) holds for every ǫ > 0 and for every j = 1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' , J as x → ∞.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' CENTRAL LIMIT THEOREM OF L-FUNCTIONS 3 A4: (Zero density hypothesis) Let Nf(σ, T) be the number of zeros of f(s) in Re(s) ≥ σ and 0 ≤ Im(s) ≤ T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' Then there exist positive constants κ1, κ2 such that for every j = 1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' .' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' , J and all σ ≥ 1 2 we have NLj(σ, T) ≪ T 1−κ1(σ− 1 2 )(log T)κ2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' A5: (Selberg orthogonality conjecture) By assumption A1 we can write log Lj(s) = � p ∞ � k=1 βLj(pk) pks .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' Then for all 1 ≤ j, k ≤ J, there exist constants ξj > 0 and cj,k such that � p≤x βLj(p)βLk(p) p = δj,kξj log log x + cj,k + O � 1 log x � , where δj,k = 0 if j ̸= k and δj,k = 1 if j = k.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' The assumptions A1–A5 are standard and expected to hold for all L-functions arising from automorphic representation for GL(n).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' In particular, they are verified by GL(1) and GL(2) L-functions, which are the Riemann zeta function, Dirichlet L-functions, L-functions attached to Hecke holomorphic or Maass cusp forms.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' Assumption A4 is weaker than the Riemann hypothesis, but it is strong enough to find a short Dirichlet approximation to each log Lj(σT + it) for almost all t ∈ [T, 2T].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' For example, see [4, Lemma 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content='2] for a proof.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' Assumption A5 insures the statistical independence of the log Lj(σT + it) for j = 1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' , J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' Assuming assumptions A1–A5 for L1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' , LJ, we want to find an asymptotic ex- pansion for (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content='3) 1 T meas{t ∈ [T, 2T] : log Lj(σT + it) � πψj,T ∈ [aj, bj] × [cj, dj] for all j = 1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' , J}, where (1.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content='4) ψj,T := ξjθ log log T with the constants ξj in assumption A5 and aj, bj, cj, dj are real numbers for all j = 1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' , J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' Let L(s) := � log |L1(s)|, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' , log |LJ(s)|, arg L1(s), .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' , arg LJ(s) � and RT := J� j=1 [aj � πψj,T, bj � πψj,T] × J� j=1 [cj � πψj,T, dj � πψj,T], then (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content='3) equals to ΦT(RT ) := 1 T meas{t ∈ [T, 2T] : L(σT + it) ∈ RT }.' 
YOONBOK LEE

Theorem 1.2. Let $0 < \theta < \frac12$. Assume assumptions A1–A5 for $L_1, \ldots, L_J$. Then there exist constants $\epsilon, \kappa > 0$ and a sequence $\{b_{k,l}\}$ of real numbers such that

(1.5) $\displaystyle \Phi_T(R_T) = \sum_{K(k+l) \le \epsilon \log\log T} b_{k,l} \prod_{j=1}^{J} \frac{1}{\sqrt{\psi_{j,T}}^{\,k_j+\ell_j}} \times \prod_{j=1}^{J} \left( \int_{a_j}^{b_j} e^{-\pi u^2} H_{k_j}(\sqrt{\pi}\,u)\,du \int_{c_j}^{d_j} e^{-\pi v^2} H_{\ell_j}(\sqrt{\pi}\,v)\,dv \right) + O\!\left( \frac{1}{(\log T)^{\kappa}} \right),$

where $k = (k_1, \ldots, k_J)$ and $l = (\ell_1, \ldots, \ell_J)$ are vectors in $(\mathbb{Z}_{\ge 0})^J$ and $K(k) := k_1 + \cdots + k_J$. Moreover, $b_{0,0} = 1$, $b_{k,l} = 0$ if $K(k+l) = 1$, and $b_{k,l} = O(\delta_0^{-K(k+l)})$ for some $\delta_0 > 0$ and all $k, l$.

Theorem 1.2 will be proved at the beginning of Section 2. Theorem 1.2 is essentially the same as Theorem 2.1 in [3], but the expansion in Theorem 1.2 appears to be longer. Moreover, since the paper [3] contains only a sketched proof, our proof should be useful.
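The Hermite polynomials $H_k$ in (1.5) follow the convention fixed earlier in the paper (not shown in this excerpt); assuming they are the physicists' Hermite polynomials, the substitution $t = \sqrt{\pi}\,u$ turns the weight $e^{-\pi u^2}$ into the standard weight $e^{-t^2}$, so the $k_j = \ell_j = 0$ term recovers the plain Gaussian product of Corollary 1.3 and all higher-order integrals vanish over the whole real line. A quick numerical sketch of this, using NumPy's physicists' Hermite evaluator:

```python
import numpy as np
from numpy.polynomial.hermite import hermval

def hermite_gauss_integral(a, b, k):
    """Numerically evaluate the integral of e^{-pi u^2} H_k(sqrt(pi) u) over [a, b]
    (physicists' Hermite convention, as assumed in the lead-in)."""
    u = np.linspace(a, b, 200001)
    c = np.zeros(k + 1)
    c[k] = 1.0                         # coefficient vector selecting H_k
    f = np.exp(-np.pi * u**2) * hermval(np.sqrt(np.pi) * u, c)
    du = u[1] - u[0]
    return float(np.sum((f[:-1] + f[1:]) * du / 2))   # trapezoid rule

# k = 0: total Gaussian mass over (effectively) the real line
print(hermite_gauss_integral(-10, 10, 0))   # ~ 1.0
# k >= 1: orthogonality to H_0 under the substitution t = sqrt(pi) u
print(hermite_gauss_integral(-10, 10, 1))   # ~ 0.0
print(hermite_gauss_integral(-10, 10, 2))   # ~ 0.0
```

Over a finite box $[a_j, b_j]$ the $k \ge 1$ integrals do not vanish in general, which is why the full expansion (1.5) carries the Hermite corrections.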
Unlike $d_{k,\ell}$ in Theorem 1.1, $b_{k,l}$ in Theorem 1.2 may not be zero for $K(k+l) = 2$. One reason is that $\psi_T$ in Theorem 1.1 and $\psi_{j,T}$ in Theorem 1.2 differ up to a constant order, even though they are asymptotically the same. Moreover, when $J > 1$, there are additional terms coming essentially from the constants $c_{j,k}$ in assumption A5. Since the leading term in (1.5) is Gaussian and the other nonvanishing terms are $O\big(\frac{1}{\log\log T}\big)$, we obtain the following corollary.

Corollary 1.3. Let $0 < \theta < \frac12$. Assume assumptions A1–A5 for $L_1, \ldots, L_J$. Then we have

$$\Phi_T(R_T) = \prod_{j=1}^{J} \left( \int_{a_j}^{b_j} e^{-\pi u^2}\,du \int_{c_j}^{d_j} e^{-\pi v^2}\,dv \right) + O\!\left( \frac{1}{\log\log T} \right).$$

We will prove theorems and propositions in Section 2 and lemmas in Section 3. We conclude the introduction with a summary of notations:

$\sigma_T = \sigma_T(\theta) = \frac12 + \frac{1}{(\log T)^{\theta}}$ and $0 < \theta < \frac12$.
$k = (k_1, \ldots, k_J)$ and $l = (\ell_1, \ldots, \ell_J)$ are vectors in $(\mathbb{Z}_{\ge 0})^J$.
$u = (u_1, \ldots, u_J)$, $v = (v_1, \ldots, v_J)$, $x = (x_1, \ldots, x_J)$ and $y = (y_1, \ldots, y_J)$ are vectors in $\mathbb{R}^J$.
$z = (z_1, \ldots, z_J) = x + iy$ and $\bar{z} = (\bar{z}_1, \ldots, \bar{z}_J) = x - iy$ are vectors in $\mathbb{C}^J$.
$k! := k_1! \cdots k_J!$ and $K(k) := k_1 + \cdots + k_J$.
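These multi-index conventions can be made concrete in a few lines; the helper names below are ours, not the paper's:

```python
import math

def K(k):
    """K(k) = k_1 + ... + k_J, the total degree of a multi-index."""
    return sum(k)

def mfact(k):
    """k! = k_1! ... k_J!, the multi-index factorial."""
    return math.prod(math.factorial(kj) for kj in k)

def mpow(x, k):
    """x^k = x_1^{k_1} ... x_J^{k_J}, the multi-index power."""
    return math.prod(xj**kj for xj, kj in zip(x, k))

k = (2, 0, 1)
x = (2.0, 5.0, 3.0)
print(K(k))        # 3
print(mfact(k))    # 2
print(mpow(x, k))  # 2^2 * 5^0 * 3^1 = 12.0
```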
$x^k := x_1^{k_1} \cdots x_J^{k_J}$.
$x \cdot u = \sum_{j=1}^{J} x_j u_j$, $\|z\| = \sqrt{\sum_{j=1}^{J} |z_j|^2} = \sqrt{\sum_{j=1}^{J} (x_j^2 + y_j^2)}$.

CENTRAL LIMIT THEOREM OF L-FUNCTIONS

2. Estimates on the random model

We define the random vector

$$L(\sigma, X) = \big( \log|L_1(\sigma, X)|, \ldots, \log|L_J(\sigma, X)|, \arg L_1(\sigma, X), \ldots, \arg L_J(\sigma, X) \big)$$

for $\sigma > \frac12$, where each $L_j(\sigma, X)$ is defined by the product

(2.1) $\displaystyle L_j(\sigma, X) = \prod_p \prod_{i=1}^{d} \left( 1 - \frac{\alpha_{j,i}(p) X(p)}{p^{\sigma}} \right)^{-1}$

and $\{X(p)\}_p$ is a sequence of independent random variables, indexed by the prime numbers and uniformly distributed on the unit circle $\{z \in \mathbb{C} : |z| = 1\}$. The product converges almost surely for $\sigma > \frac12$ by Kolmogorov's three-series theorem. Define a probability measure

(2.2) $\Phi_T^{\mathrm{rand}}(B) := \mathbb{P}\big( L(\sigma_T, X) \in B \big)$

for a Borel set $B$ in $\mathbb{R}^{2J}$. By [4, Theorem 2.3] we have

$$\Phi_T(R_T) = \Phi_T^{\mathrm{rand}}(R_T) + O\big( (\log T)^{(\theta-1)/2} \log\log T \big)$$

for $0 < \theta < \frac12$. This means that the distribution of $L(\sigma_T + it)$ is well approximated by the distribution of its random model $L(\sigma_T, X)$ when $0 < \theta < \frac12$. Thus, Theorem 1.2 is an immediate consequence of the following theorem.
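As an illustration only (not part of the paper's argument), the random model (2.1) can be sampled numerically in the simplest toy case $d = 1$, $\alpha_{j,i}(p) \equiv 1$, truncating the product over primes up to some bound. Each factor $-\log(1 - X(p)p^{-\sigma})$ has mean zero by the rotational symmetry of $X(p)$, so the empirical mean of $\log L(\sigma, X)$ should be close to $0$:

```python
import numpy as np

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = np.ones(n + 1, dtype=bool)
    sieve[:2] = False
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = False
    return np.flatnonzero(sieve)

def sample_log_L(sigma, prime_bound, n_samples, rng):
    """Sample log L(sigma, X) = -sum_p log(1 - X(p) p^{-sigma}) with X(p)
    i.i.d. uniform on the unit circle (toy case d = 1, alpha = 1, truncated)."""
    ps = primes_up_to(prime_bound)
    theta = rng.uniform(0.0, 2 * np.pi, size=(n_samples, ps.size))
    X = np.exp(1j * theta)                       # X(p) on {|z| = 1}
    return -np.log(1.0 - X * ps.astype(float)**(-sigma)).sum(axis=1)

rng = np.random.default_rng(0)
logs = sample_log_L(0.75, 5000, 1000, rng)
print(abs(logs.mean()))   # small: each Euler factor has mean 0 by symmetry
```

For $\sigma > \frac12$ the truncated sums stabilize as the prime bound grows, consistent with the almost-sure convergence given by Kolmogorov's three-series theorem; for $\sigma$ close to $\frac12$ convergence is much slower, which is the regime the paper controls analytically.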
Theorem 2.1. Let $0 < \theta < \frac12$. Assume assumptions A1–A5 for $L_1, \ldots, L_J$. Then there exist constants $\epsilon, \kappa > 0$ and a sequence $\{b_{k,l}\}$ of real numbers such that

$$\Phi_T^{\mathrm{rand}}(R_T) = \sum_{K(k+l) \le \epsilon \log\log T} b_{k,l} \prod_{j=1}^{J} \frac{1}{\sqrt{\psi_{j,T}}^{\,k_j+\ell_j}} \times \prod_{j=1}^{J} \left( \int_{a_j}^{b_j} e^{-\pi u^2} H_{k_j}(\sqrt{\pi}\,u)\,du \int_{c_j}^{d_j} e^{-\pi v^2} H_{\ell_j}(\sqrt{\pi}\,v)\,dv \right) + O\!\left( \frac{1}{(\log T)^{\kappa}} \right).$$

Moreover, $b_{0,0} = 1$, $b_{k,l} = 0$ if $K(k+l) = 1$, and $b_{k,l} = O(\delta_0^{-K(k+l)})$ for some $\delta_0 > 0$ and all $k, l$.

In [4, Section 7] we find that the measure $\Phi_T^{\mathrm{rand}}$ is absolutely continuous and has a density function $H_T(u, v)$ such that

(2.3) $\displaystyle \Phi_T^{\mathrm{rand}}(R_T) = \iint_{R_T} H_T(u, v)\,du\,dv.$

Hence, Theorem 2.1 follows from (2.3) and the following proposition, which upgrades [4, Lemma 7.4].

Proposition 2.2. Let $0 < \theta < \frac12$. Assume assumptions A1–A5 for $L_1, \ldots, L_J$.
There exist constants $\epsilon, \kappa > 0$ and a sequence $\{b_{k,l}\}$ of real numbers such that

$$H_T(u, v) = \sum_{K(k+l) \le \epsilon \log\log T} b_{k,l} \prod_{j=1}^{J} \frac{1}{\pi \sqrt{\psi_{j,T}}^{\,k_j+\ell_j+2}}\, e^{-\frac{u_j^2+v_j^2}{\psi_{j,T}}} H_{k_j}\!\left( \frac{u_j}{\sqrt{\psi_{j,T}}} \right) H_{\ell_j}\!\left( \frac{v_j}{\sqrt{\psi_{j,T}}} \right) + O\!\left( \frac{1}{(\log T)^{\kappa}} \right).$$

Moreover, $b_{0,0} = 1$, $b_{k,l} = 0$ if $K(k+l) = 1$, and $b_{k,l} = O(\delta_0^{-K(k+l)})$ for some $\delta_0 > 0$ and all $k, l$.

To prove Proposition 2.2, we need to understand the Fourier transform

$$\widehat{\Phi}_T^{\mathrm{rand}}(x, y) := \int_{\mathbb{R}^{2J}} e^{2\pi i (x \cdot u + y \cdot v)}\,d\Phi_T^{\mathrm{rand}}(u, v)$$

for $x, y \in \mathbb{R}^J$. By the definition of $\Phi_T^{\mathrm{rand}}$ in (2.2), we have

$$\widehat{\Phi}_T^{\mathrm{rand}}(x, y) = \mathbb{E}\left[ \exp\left( 2\pi i \sum_{j=1}^{J} \big( x_j \log|L_j(\sigma_T, X)| + y_j \arg L_j(\sigma_T, X) \big) \right) \right].$$

By assumptions A1 and A5 we see that

(2.4) $\displaystyle \beta_{L_j}(p^k) = \frac{1}{k} \sum_{i=1}^{d} \alpha_{j,i}(p)^k.$

By (2.4) and (2.1) we have

$$\log L_j(\sigma, X) = \sum_p \sum_{k=1}^{\infty} \frac{\beta_{L_j}(p^k) X(p)^k}{p^{k\sigma}}.$$

Define

(2.5) $\displaystyle g_{j,p}(\sigma) := \sum_{k=1}^{\infty} \frac{\beta_{L_j}(p^k) X(p)^k}{p^{k\sigma}};$

then we have

(2.6) $\displaystyle \widehat{\Phi}_T^{\mathrm{rand}}(x, y) = \prod_p \varphi_{p,\sigma_T}(x, y),$

where

$$\varphi_{p,\sigma}(x, y) := \mathbb{E}\left[ \exp\left( 2\pi i \sum_{j=1}^{J} \big( x_j \operatorname{Re}(g_{j,p}(\sigma)) + y_j \operatorname{Im}(g_{j,p}(\sigma)) \big) \right) \right]$$

for each prime $p$. Let $z = (z_1, \ldots, z_J) = x + iy$; then we find that

$$\varphi_{p,\sigma}(x, y) = \mathbb{E}\left[ \prod_{j=1}^{J} e^{\pi i z_j g_{j,p}(\sigma)} e^{\pi i \bar{z}_j \overline{g_{j,p}(\sigma)}} \right].$$

By expanding the $2J$ exponential functions into power series we obtain

$$\varphi_{p,\sigma}(x, y) = \sum_{k,l \in (\mathbb{Z}_{\ge 0})^J} \frac{(\pi i)^{K(k+l)} z^k \bar{z}^l}{k!\,l!}\, \mathbb{E}\left[ \prod_{j=1}^{J} g_{j,p}(\sigma)^{k_j}\, \overline{g_{j,p}(\sigma)}^{\,\ell_j} \right]$$

with the notations for vectors at the end of Section 1. It is easy to see that the expectation

(2.7) $\displaystyle A_{p,\sigma}(k, l) := \mathbb{E}\left[ \prod_{j=1}^{J} g_{j,p}(\sigma)^{k_j}\, \overline{g_{j,p}(\sigma)}^{\,\ell_j} \right]$

satisfies $A_{p,\sigma}(0, 0) = 1$ and $A_{p,\sigma}(0, k) = A_{p,\sigma}(k, 0) = 0$ for $k \neq 0$. Thus, we obtain

(2.8) $\varphi_{p,\sigma}(x, y) = 1 + R_{p,\sigma}(z),$

where

(2.9) $\displaystyle R_{p,\sigma}(z) := \sum_{k \neq 0} \sum_{l \neq 0} \frac{(\pi i)^{K(k+l)} z^k \bar{z}^l}{k!\,l!}\, A_{p,\sigma}(k, l).$

Hence, by (2.6) and (2.8) we have

(2.10) $\displaystyle \widehat{\Phi}_T^{\mathrm{rand}}(x, y) = \prod_p \big( 1 + R_{p,\sigma_T}(z) \big).$

To compute the product in (2.10), we need the following lemma.

Lemma 2.3. There exists a constant $\delta_1 > 0$ such that $|R_{p,\sigma_T}(z)| \le \frac12$ for every prime $p$ and $\|z\| \le \delta_1$.

See Section 3.1 for a proof.

By Lemma 2.3 we have

(2.11) $\displaystyle \widehat{\Phi}_T^{\mathrm{rand}}(x, y) = \exp\left( \sum_p \log\big( 1 + R_{p,\sigma_T}(z) \big) \right) = \exp\left( \sum_p \sum_{m=1}^{\infty} \frac{(-1)^{m-1}}{m} R_{p,\sigma_T}(z)^m \right)$

for $\|z\| \le \delta_1$.
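The passage from the product (2.10) to the exponential in (2.11) uses the Mercator series $\log(1+w) = \sum_{m \ge 1} \frac{(-1)^{m-1}}{m} w^m$, which converges geometrically because Lemma 2.3 keeps $|R_{p,\sigma_T}(z)| \le \frac12$; taking the exponential also removes any branch ambiguity of the logarithm. A toy numerical check with arbitrary complex numbers of modulus at most $\frac12$:

```python
import cmath

def log1p_series(w, terms=60):
    """Mercator series for log(1 + w), valid for |w| < 1."""
    return sum((-1)**(m - 1) * w**m / m for m in range(1, terms + 1))

# arbitrary stand-ins for the factors 1 + R_{p,sigma_T}(z), all |w| <= 1/2
ws = [0.3 + 0.2j, -0.25 + 0.4j, 0.1 - 0.45j]

product = 1.0 + 0j
for w in ws:
    product *= 1 + w

via_exp = cmath.exp(sum(log1p_series(w) for w in ws))
print(abs(product - via_exp))   # negligible: series truncation error ~ (1/2)^61
```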
By (2.9) the sum $\sum_p \sum_{m=1}^{\infty} \frac{(-1)^{m-1}}{m} R_{p,\sigma}(z)^m$ has a power series representation in $z_1, \ldots, z_J, \bar{z}_1, \ldots, \bar{z}_J$, so let $B_\sigma(k, l)$ be the coefficients such that

(2.12) $\displaystyle \sum_{k \neq 0} \sum_{l \neq 0} B_\sigma(k, l)\, z^k \bar{z}^l = \sum_p \sum_{m=1}^{\infty} \frac{(-1)^{m-1}}{m} R_{p,\sigma}(z)^m.$

Define $I_{n,\sigma}(z)$ for each $n \ge 2$ by the sum of the degree $n$ terms in the above sum, i.e.,

(2.13) $\displaystyle I_{n,\sigma}(z) := \sum_{\substack{k,l \neq 0 \\ K(k+l) = n}} B_\sigma(k, l)\, z^k \bar{z}^l.$

We see that $I_{n,\sigma}(z)$ is a homogeneous polynomial in $x_1, \ldots, x_J, y_1, \ldots, y_J$ of degree $n$, and that

(2.14) $\displaystyle \widehat{\Phi}_T^{\mathrm{rand}}(x, y) = \exp\left( \sum_{n=2}^{\infty} I_{n,\sigma_T}(z) \right)$

for $\|z\| \le \delta_1$ by (2.11)–(2.13). We find an asymptotic formula for $I_{n,\sigma_T}(z)$ as $T \to \infty$ in the following lemma.
Lemma 2.4. There are complex numbers $C_{j_1,j_2}$ such that
$$I_{2,\sigma_T}(z) = -\pi^2\sum_{j=1}^{J}\psi_{j,T}|z_j|^2 + \sum_{j_1,j_2=1}^{J}C_{j_1,j_2}z_{j_1}\overline{z_{j_2}} + O\Big(\frac{\log\log T}{(\log T)^{\theta}}\Big) \tag{2.15}$$
for $\|z\|\le\delta_1$, where $\psi_{j,T}$ is defined in (1.4) and $C_{j_1,j_2} = \overline{C_{j_2,j_1}}$. For $n \ge 3$, there is a constant $C = C_{J,d,\eta} > 0$ such that
$$|I_{n,\sigma}(z)| \le C^n\|z\|^n \quad\text{for } \sigma \ge \tfrac12 \qquad\text{and}\qquad |I_{n,\sigma_T}(z) - I_{n,1/2}(z)| \le \frac{C^n\|z\|^n}{(\log T)^{\theta}}.$$
See Section 3.2 for a proof.

Define
$$Q_T(z) := -\pi^2\sum_{j=1}^{J}\psi_{j,T}|z_j|^2, \tag{2.16}$$
$$I_2(z) := \sum_{j_1,j_2=1}^{J}C_{j_1,j_2}z_{j_1}\overline{z_{j_2}} \tag{2.17}$$
and
$$I_n(z) := I_{n,1/2}(z) \quad\text{for } n > 2. \tag{2.18}$$
By (2.17) and the Cauchy–Schwarz inequality we obtain
$$|I_2(z)| \le J\big(\max_{j_1,j_2}|C_{j_1,j_2}|\big)\|z\|^2.$$
By this inequality, (2.18) and Lemma 2.4 we have
$$|I_n(z)| \le 2^{-n} \quad\text{for } n \ge 2 \text{ and } \|z\| \le \delta_2, \tag{2.19}$$
where
$$\delta_2 := \min\bigg\{\delta_1,\ \frac{1}{2C},\ \frac{1}{2\sqrt{J\max_{j_1,j_2}|C_{j_1,j_2}|}}\bigg\}. \tag{2.20}$$
It follows from (2.14), Lemma 2.4 and (2.16)–(2.19) that
$$\widetilde{\Phi}^{\mathrm{rand}}_T(x,y) = \exp\bigg(Q_T(z) + \sum_{n=2}^{\infty}I_n(z) + O\Big(\frac{\log\log T}{(\log T)^{\theta}}\Big)\bigg) = e^{Q_T(z)}\bigg(\sum_{r=0}^{\infty}\frac{1}{r!}\Big(\sum_{n=2}^{\infty}I_n(z)\Big)^r + O\Big(\frac{\log\log T}{(\log T)^{\theta}}\Big)\bigg) \tag{2.21}$$
for $\|z\| \le \delta_2$. Note that each $I_n(z)$ is a homogeneous polynomial in $x_1,\dots,x_J,y_1,\dots,y_J$ of degree $n$ and does not depend on $T$. Since the sum $\sum_{r=0}^{\infty}\frac{1}{r!}\big(\sum_{n=2}^{\infty}I_n(z)\big)^r$ is a power series in $x$ and $y$, we let $\{b_{k,l}\}$ be a sequence of complex numbers such that
$$G(x,y) := \sum_{k,l}(2\pi i)^{K(k+l)}b_{k,l}x^k y^l = \sum_{r=0}^{\infty}\frac{1}{r!}\Big(\sum_{n=2}^{\infty}I_n(z)\Big)^r. \tag{2.22}$$
Then the $b_{k,l}$ satisfy the following properties.

Lemma 2.5. Let $\delta_3$ be a constant satisfying $0 < \delta_3 < \frac{\pi}{\sqrt{J}}\delta_2$. Then $b_{k,l}$ is a real number and
$$|b_{k,l}| \le \frac{\sqrt{e}}{\delta_3^{K(k+l)}} \tag{2.23}$$
for every $k,l$. In particular, $b_{0,0} = 1$ and $b_{k,l} = 0$ if $K(k+l) = 1$. See Section 3.3 for a proof.
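The passage from (2.21) to (2.22) is a composition of power series: since the inner sum starts at $n = 2$, the composed series $\sum_r \frac{1}{r!}(\,\cdot\,)^r$ has constant term $1$ and no degree-one term, matching the claims $b_{0,0} = 1$ and $b_{k,l} = 0$ for $K(k+l) = 1$ in Lemma 2.5. A minimal one-variable sketch of this structure (the inner coefficients $3$ and $-1$ are arbitrary illustrative values, not from the paper):

```python
from fractions import Fraction

def exp_of_series(a, N):
    """Coefficients of sum_r (1/r!) * (sum_{n>=2} a[n] x^n)^r up to degree N."""
    inner = [Fraction(0)] * (N + 1)
    for n, c in a.items():
        inner[n] = Fraction(c)
    out = [Fraction(0)] * (N + 1)
    out[0] = Fraction(1)                          # r = 0 term
    power = [Fraction(1)] + [Fraction(0)] * N     # inner^0
    fact = 1
    for r in range(1, N + 1):
        # power <- power * inner, truncated at degree N
        new = [Fraction(0)] * (N + 1)
        for i in range(N + 1):
            if power[i]:
                for j in range(N + 1 - i):
                    new[i + j] += power[i] * inner[j]
        power = new
        fact *= r
        for i in range(N + 1):
            out[i] += power[i] / fact
    return out

coeffs = exp_of_series({2: 3, 3: -1}, 6)
assert coeffs[0] == 1   # constant term is 1
assert coeffs[1] == 0   # no linear term, since the inner series starts at n = 2
```

Because the inner series has no terms of degree below two, every $r \ge 1$ power contributes only from degree $2r$ upward, which is exactly why the low-order coefficients are forced.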
The infinite sum over $k,l$ in (2.22) can be approximated by its partial sum. We shall prove a quantitative version. Let $\epsilon > 0$. By (2.22) and (2.19) we have
$$\bigg|\sum_{K(k+l)>\epsilon\log\log T}(2\pi i)^{K(k+l)}b_{k,l}x^k y^l\bigg| \le \sum_{r=1}^{\infty}\frac{1}{r!}\sum_{\substack{n_1,\dots,n_r\ge 2\\ n_1+\cdots+n_r>\epsilon\log\log T}}\Big(\frac12\Big)^{n_1+\cdots+n_r} \le \sum_{r=1}^{\infty}\frac{1}{r!}\sum_{m>\epsilon\log\log T}\frac{1}{2^m}\sum_{\substack{n_1,\dots,n_r\ge 2\\ n_1+\cdots+n_r=m}}1$$
for $\|z\| \le \delta_2$. We substitute $n_j$ by $n'_j + 2$ for $j = 1,\dots,r$ in the last sum; then the last sum equals the number of nonnegative integers $n'_1,\dots,n'_r$ such that $n'_1+\cdots+n'_r = m-2r$, which equals $\binom{m-r-1}{r-1}$. Thus, the above sum is
$$\le \sum_{r=1}^{\infty}\frac{1}{r!}\sum_{m>\epsilon\log\log T}\frac{1}{2^m}\binom{m-r-1}{r-1} \le \sum_{r=1}^{\infty}\frac{1}{r!}\sum_{m>\epsilon\log\log T}\frac{1}{2^m}\frac{m^{r-1}}{(r-1)!} \le \sum_{m>\epsilon\log\log T}\frac{1}{2^m}\sum_{n=0}^{\infty}\frac{m^n}{(n!)^2}$$
$$\le \sum_{m>\epsilon\log\log T}\frac{1}{2^m}\bigg(\sum_{n=0}^{\infty}\frac{\sqrt{m}^{\,n}}{n!}\bigg)^2 = \sum_{m>\epsilon\log\log T}\frac{e^{2\sqrt{m}}}{2^m} \le \sum_{m>\epsilon\log\log T}\Big(\frac23\Big)^m \le 3\Big(\frac23\Big)^{\epsilon\log\log T} \ll \frac{1}{(\log T)^{\kappa}}$$
with a constant $\kappa \le \epsilon\log\frac32$. From these estimates, (2.21), (2.22) and Lemma 2.5 we obtain the following proposition.

Proposition 2.6.
Let $\delta_2$ be the constant defined in (2.20). Let $\kappa$ and $\epsilon$ be constants such that $0 < \kappa < \theta$ and $\kappa \le \epsilon\log\frac32$. Let $\{b_{k,l}\}$ be a sequence of real numbers defined by its generating series (2.22). Then
$$\widetilde{\Phi}^{\mathrm{rand}}_T(x,y) = e^{Q_T(z)}\bigg(\sum_{K(k+l)\le\epsilon\log\log T}(2\pi i)^{K(k+l)}b_{k,l}x^k y^l + O\Big(\frac{1}{(\log T)^{\kappa}}\Big)\bigg)$$
holds for $\|z\| \le \delta_2$.

We are ready to prove Proposition 2.2. The density function $H_T(u,v)$ of the measure $\Phi^{\mathrm{rand}}_T$ is the inverse Fourier transform of $\widetilde{\Phi}^{\mathrm{rand}}_T$, so that
$$H_T(u,v) = \int_{\mathbb{R}^J}\int_{\mathbb{R}^J}\widetilde{\Phi}^{\mathrm{rand}}_T(x,y)\,e^{-2\pi i(x\cdot u+y\cdot v)}\,dx\,dy.$$
Let $\delta_4$ be a constant such that $0 < \delta_4 \le \min\{\delta_2, \frac{\delta_3}{4\pi}\}$.
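The counting step in the proof of Proposition 2.6 (ordered tuples $n_1,\dots,n_r \ge 2$ with $n_1+\cdots+n_r = m$ being $\binom{m-r-1}{r-1}$ in number, via the shift $n_j = n'_j + 2$) can be checked directly by brute force; a small illustrative sketch:

```python
from itertools import product
from math import comb

def count_compositions(m, r):
    """Ordered r-tuples (n_1, ..., n_r) with each n_i >= 2 and sum m."""
    return sum(1 for t in product(range(2, m + 1), repeat=r) if sum(t) == m)

# After substituting n_j = n'_j + 2, the count equals the number of
# nonnegative solutions of n'_1 + ... + n'_r = m - 2r, i.e. C(m-r-1, r-1).
for m in range(2, 12):
    for r in range(1, m // 2 + 1):
        assert count_compositions(m, r) == comb(m - r - 1, r - 1)
```

The brute-force count agrees with the stars-and-bars value on every case tried, which is the identity the tail estimate relies on.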
By Lemma 7.1 and (7.14) in [4] we find that
$$H_T(u,v) = \iint_{\|z\|\le\delta_4}\widetilde{\Phi}^{\mathrm{rand}}_T(x,y)\,e^{-2\pi i(x\cdot u+y\cdot v)}\,dx\,dy + O\Big(\frac{1}{(\log T)^{\kappa}}\Big)$$
for some $\kappa > 0$; see the proof of [4, Lemma 7.4] for details. By Proposition 2.6 we have
$$H_T(u,v) = \sum_{K(k+l)\le\epsilon\log\log T}(2\pi i)^{K(k+l)}b_{k,l}\iint_{\|z\|\le\delta_4}e^{Q_T(z)-2\pi i(x\cdot u+y\cdot v)}x^k y^l\,dx\,dy + O\Big(\frac{1}{(\log T)^{\kappa}}\Big)$$
for some $\epsilon, \kappa > 0$. Let $\xi_{\min} = \min_{j\le J}\xi_j > 0$; then we have
$$\bigg|\iint_{\|z\|\ge\delta_4}e^{Q_T(z)-2\pi i(x\cdot u+y\cdot v)}x^k y^l\,dx\,dy\bigg| \le \iint_{\|z\|\ge\delta_4}e^{-\pi^2\xi_{\min}\theta\log\log T\,\|z\|^2}\,\|z\|^{K(k+l)}\,dx\,dy$$
$$\ll \int_{\delta_4}^{\infty}e^{-(\pi^2\xi_{\min}\theta\log\log T)r^2}r^{K(k+l)+2J-1}\,dr \ll \frac{1}{(\pi^2\xi_{\min}\theta\log\log T)^{\frac{K(k+l)}{2}+J}}\int_{\pi\delta_4\sqrt{\xi_{\min}\theta\log\log T}}^{\infty}e^{-r^2}r^{K(k+l)+2J-1}\,dr$$
by the change of variables to polar coordinates. By the Cauchy–Schwarz inequality we have
$$\int_X^{\infty}e^{-r^2}r^M\,dr \le \sqrt{\int_X^{\infty}e^{-r^2}r\,dr\int_0^{\infty}e^{-r^2}r^{2M-1}\,dr} = \frac{\sqrt{(M-1)!}}{2}\,e^{-\frac12 X^2}.$$
Hence, it follows from Lemma 2.5 and the above estimations that
$$H_T(u,v) = \sum_{K(k+l)\le\epsilon\log\log T}(2\pi i)^{K(k+l)}b_{k,l}\int_{\mathbb{R}^J}\int_{\mathbb{R}^J}e^{Q_T(z)-2\pi i(x\cdot u+y\cdot v)}x^k y^l\,dx\,dy$$
$$+ O\Bigg(\frac{1}{(\log T)^{\frac12\pi^2\delta_4^2\xi_{\min}\theta}}\sum_{K(k+l)\le\epsilon\log\log T}\Big(\frac{2\pi}{\delta_3}\Big)^{K(k+l)}\frac{\sqrt{(K(k+l)+2J-2)!}}{(\pi^2\xi_{\min}\theta\log\log T)^{\frac{K(k+l)}{2}+J}}\Bigg) + O\Big(\frac{1}{(\log T)^{\kappa}}\Big).$$
By Stirling's formula the $k,l$-sum in the above $O$-term is
$$\ll \sum_{K(k+l)\le\epsilon\log\log T}\Big(\frac{2\pi}{\delta_3}\Big)^{K(k+l)}\frac{1}{(\pi^2\xi_{\min}\theta\log\log T)^{\frac{K(k+l)}{2}+J}}\Big(\frac{2\epsilon\log\log T}{e}\Big)^{\frac{K(k+l)}{2}+J-\frac34} \ll \sum_{k,l}\Big(\frac{2\sqrt{2\epsilon}}{\delta_3\sqrt{\xi_{\min}\theta e}}\Big)^{K(k+l)} \le \sum_{k,l}\Big(\frac12\Big)^{K(k+l)} = 2^{2J},$$
provided that $0 < \epsilon \le \frac{1}{32}\delta_3^2\xi_{\min}\theta e$. With this choice of $\epsilon$, we have
$$H_T(u,v) = \sum_{K(k+l)\le\epsilon\log\log T}(2\pi i)^{K(k+l)}b_{k,l}\int_{\mathbb{R}^J}\int_{\mathbb{R}^J}e^{Q_T(z)-2\pi i(x\cdot u+y\cdot v)}x^k y^l\,dx\,dy + O\Big(\frac{1}{(\log T)^{\kappa}}\Big)$$
for some $\kappa > 0$. It remains to calculate the above integral. We first write it as repeated integrals
$$\int_{\mathbb{R}^J}\int_{\mathbb{R}^J}e^{Q_T(z)-2\pi i(x\cdot u+y\cdot v)}x^k y^l\,dx\,dy = \prod_{j=1}^{J}\int_{\mathbb{R}}\int_{\mathbb{R}}e^{-\psi_{j,T}\pi^2(x_j^2+y_j^2)-2\pi i(x_j u_j+y_j v_j)}x_j^{k_j}y_j^{\ell_j}\,dx_j\,dy_j = \prod_{j=1}^{J}\int_{\mathbb{R}}e^{-\psi_{j,T}\pi^2 x_j^2-2\pi i x_j u_j}x_j^{k_j}\,dx_j\int_{\mathbb{R}}e^{-\psi_{j,T}\pi^2 y_j^2-2\pi i y_j v_j}y_j^{\ell_j}\,dy_j.$$
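The Cauchy–Schwarz bound above, $\int_X^{\infty}e^{-r^2}r^M\,dr \le \frac{\sqrt{(M-1)!}}{2}e^{-X^2/2}$, can be sanity-checked numerically for small $M$; a rough sketch using plain trapezoidal integration (the truncation point $r = 12$ and step count are arbitrary choices, adequate for the small $M$ tried here):

```python
from math import exp, factorial, sqrt

def tail_integral(X, M, upper=12.0, steps=100_000):
    """Trapezoidal approximation of the integral of e^{-r^2} r^M over [X, upper];
    the integrand is negligible beyond r = 12 for small M."""
    h = (upper - X) / steps
    total = 0.5 * (exp(-X * X) * X**M + exp(-upper * upper) * upper**M)
    for i in range(1, steps):
        r = X + i * h
        total += exp(-r * r) * r**M
    return total * h

# Check the Cauchy-Schwarz bound from the proof on a few (M, X) pairs.
for M in (1, 2, 3, 5):
    for X in (0.5, 1.0, 2.0):
        assert tail_integral(X, M) <= sqrt(factorial(M - 1)) / 2 * exp(-X * X / 2)
```

For instance, at $M = 3$, $X = 1$ the integral equals $e^{-1} \approx 0.368$ exactly (substitute $t = r^2$), comfortably below the bound $\frac{\sqrt{2}}{2}e^{-1/2} \approx 0.429$.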
Each integral can be written in terms of the Hermite polynomials defined in (1.2). Since
$$\int_{\mathbb{R}}e^{-\psi\pi^2 x^2-2\pi i x u}x^k\,dx = \frac{1}{(-2\pi i)^k}\frac{d^k}{du^k}\int_{\mathbb{R}}e^{-\psi\pi^2 x^2-2\pi i x u}\,dx = \frac{1}{(-2\pi i)^k}\frac{d^k}{du^k}\frac{1}{\sqrt{\pi\psi}}e^{-\frac{u^2}{\psi}} = \frac{1}{(2\pi i)^k\sqrt{\pi}\sqrt{\psi}^{\,k+1}}e^{-\frac{u^2}{\psi}}H_k\Big(\frac{u}{\sqrt{\psi}}\Big),$$
we have
$$\int_{\mathbb{R}^J}\int_{\mathbb{R}^J}e^{Q_T(z)-2\pi i(x\cdot u+y\cdot v)}x^k y^l\,dx\,dy = \prod_{j=1}^{J}\frac{1}{\pi(2\pi i)^{k_j+\ell_j}\sqrt{\psi_{j,T}}^{\,k_j+\ell_j+2}}e^{-\frac{u_j^2+v_j^2}{\psi_{j,T}}}H_{k_j}\Big(\frac{u_j}{\sqrt{\psi_{j,T}}}\Big)H_{\ell_j}\Big(\frac{v_j}{\sqrt{\psi_{j,T}}}\Big).$$
Thus, we have
$$H_T(u,v) = \sum_{K(k+l)\le\epsilon\log\log T}b_{k,l}\prod_{j=1}^{J}\frac{1}{\pi\sqrt{\psi_{j,T}}^{\,k_j+\ell_j+2}}e^{-\frac{u_j^2+v_j^2}{\psi_{j,T}}}H_{k_j}\Big(\frac{u_j}{\sqrt{\psi_{j,T}}}\Big)H_{\ell_j}\Big(\frac{v_j}{\sqrt{\psi_{j,T}}}\Big) + O\Big(\frac{1}{(\log T)^{\kappa}}\Big)$$
for some $\epsilon, \kappa > 0$. This completes the proof of Proposition 2.2.

3. Proofs of lemmas

We prove Lemma 2.3 in Section 3.1, Lemma 2.4 in Section 3.2 and Lemma 2.5 in Section 3.3. In the proofs, we need the inequalities
$$|\beta_{L_j}(p^k)| \le \frac{d}{k}p^{k\eta} \quad\text{for } k \ge 1, \tag{3.1}$$
$$|\beta_{L_j}(p^k)| \le \frac{1}{k}\sum_{i=1}^{d}|\alpha_{j,i}(p)|^k \le \frac{p^{(k-2)\eta}}{k}\sum_{i=1}^{d}|\alpha_{j,i}(p)|^2 \quad\text{for } k \ge 2 \tag{3.2}$$
and
$$|\beta_{L_j}(p)|^2 \le \bigg(\sum_{i=1}^{d}|\alpha_{j,i}(p)|\bigg)^2 \le d\sum_{i=1}^{d}|\alpha_{j,i}(p)|^2, \tag{3.3}$$
which follow from (2.4) and assumption A1.

3.1. Proof of Lemma 2.3. By (2.5) and (3.1) there is a constant $C_1 := C_{1,d,\eta} > 0$ such that
$$|g_{j,p}(\sigma_T)| \le \sum_{k=1}^{\infty}\frac{d}{k}\frac{p^{k\eta}}{p^{\frac{k}{2}}} \le \frac{C_1}{p^{\frac12-\eta}} \tag{3.4}$$
for every prime $p$ and $j = 1,\dots,J$. By (2.7), (2.9) and (3.4) we obtain
$$|R_{p,\sigma_T}(z)| \le \sum_{k\ne 0}\sum_{l\ne 0}\frac{1}{k!\,l!}\bigg(\pi\|z\|\frac{C_1}{p^{\frac12-\eta}}\bigg)^{K(k+l)} = \bigg(\exp\Big(\frac{J\pi C_1\|z\|}{p^{\frac12-\eta}}\Big) - 1\bigg)^2.$$
Thus, there exists a constant $C_2 := C_{2,d,J,\eta} > 0$ such that
$$|R_{p,\sigma_T}(z)| \le \frac{C_2}{p^{1-2\eta}}\|z\|^2 \le \frac{C_2}{2^{1-2\eta}}\|z\|^2$$
for $\|z\| \le 1$ and every prime $p$. Therefore, there exists a constant $\delta_1 > 0$ such that $|R_{p,\sigma_T}(z)| \le \frac12$ for $\|z\| \le \delta_1$ and every prime $p$.

3.2. Proof of Lemma 2.4. We first find a useful expression
$$I_{n,\sigma}(z) = (\pi i)^n\sum_{1\le m\le n/2}\frac{(-1)^{m-1}}{m}\sum_{\substack{k_1,\dots,k_m,l_1,\dots,l_m\ne 0\\ K(k_1+\cdots+k_m+l_1+\cdots+l_m)=n}}\frac{z^{k_1+\cdots+k_m}\overline{z}^{\,l_1+\cdots+l_m}}{k_1!\cdots k_m!\,l_1!\cdots l_m!}\times\sum_p A_{p,\sigma}(k_1,l_1)\cdots A_{p,\sigma}(k_m,l_m) \tag{3.5}$$
by (2.9), (2.12) and (2.13). Here, the sum over $m$ is $1 \le m \le n/2$ because $n = K(k_1+\cdots+k_m+l_1+\cdots+l_m) \ge 2m$ for $k_1,\dots,k_m,l_1,\dots,l_m \ne 0$. The asymptotic (2.15) of $I_{2,\sigma_T}(z)$ is already known; see (7.16) of [4, Lemma 7.3]. We next prove
$$C_{j_1,j_2} = \overline{C_{j_2,j_1}}. \tag{3.6}$$
We have
$$A_{p,\sigma}(k,l) = \overline{A_{p,\sigma}(l,k)} \tag{3.7}$$
by (2.7). By (3.5) we also have
$$\overline{I_{2,\sigma}(z)} = I_{2,\sigma}(z). \tag{3.8}$$
So we obtain (3.6) by (2.15) and (3.8).

For the case $n > 2$, we observe that $A_{p,\sigma}(k,l)$ for a real $\sigma$ can be extended to an analytic function in a complex variable $s$ via
$$A_{p,s}(k,l) = \mathbb{E}\Bigg[\prod_{j=1}^{J}\bigg(\sum_{k=1}^{\infty}\frac{\beta_{L_j}(p^k)X(p)^k}{p^{ks}}\bigg)^{k_j}\bigg(\sum_{k=1}^{\infty}\frac{\overline{\beta_{L_j}(p^k)X(p)^k}}{p^{ks}}\bigg)^{\ell_j}\Bigg]. \tag{3.9}$$
This observation essentially leads us to prove the following lemma.

Lemma 3.1. Let $\eta$ be the constant in assumption A1 and assume $K(k_1+\cdots+k_m+l_1+\cdots+l_m) = n \ge 3$. The Dirichlet series
$$f(s) := \sum_p A_{p,s}(k_1,l_1)\cdots A_{p,s}(k_m,l_m)$$
is absolutely convergent for $\mathrm{Re}(s) \ge \frac{5+2\eta}{12}$. Moreover, there exists a constant $C_3 = C_{3,J,d,\eta} > 0$ such that $|f(s)| \le C_3^n$ for $\mathrm{Re}(s) \ge \frac{5+2\eta}{12}$ and
$$|f(\sigma_T) - f(\tfrac12)| \le \frac{C_3^n}{(\log T)^{\theta}}.$$

Proof. We first show that there is a constant $C_4 > 0$ such that $|f(s)| \le C_4^n$ for $\mathrm{Re}(s) \ge \frac{5+2\eta}{12}$. By (3.9) we find that
$$|A_{p,s}(k,l)| \le \Bigg(\sum_{k=1}^{\infty}\frac{\max_{j\le J}|\beta_{L_j}(p^k)|}{p^{k\,\mathrm{Re}(s)}}\Bigg)^{K(k+l)}.$$
Thus, we have
$$|f(s)| \le \sum_p\Bigg(\sum_{k=1}^{\infty}\frac{\max_{j\le J}|\beta_{L_j}(p^k)|}{p^{k\,\mathrm{Re}(s)}}\Bigg)^n \le 2^n\sum_p\bigg(\frac{\max_{j\le J}|\beta_{L_j}(p)|}{p^{\mathrm{Re}(s)}}\bigg)^n + 2^n\sum_p\Bigg(\sum_{k=2}^{\infty}\frac{\max_{j\le J}|\beta_{L_j}(p^k)|}{p^{k\,\mathrm{Re}(s)}}\Bigg)^n. \tag{3.10}$$
The first sum on the right hand side of (3.10) is
$$\sum_p\frac{\big(\max_{j\le J}|\beta_{L_j}(p)|\big)^n}{p^{n\,\mathrm{Re}(s)}} \le \sum_p\frac{(dp^{\eta})^{n-2}\max_{j\le J}d\sum_{i=1}^{d}|\alpha_{j,i}(p)|^2}{p^{n\,\mathrm{Re}(s)}} \le d^{n-1}\sum_p\frac{\sum_{j=1}^{J}\sum_{i=1}^{d}|\alpha_{j,i}(p)|^2}{p^{1+\varepsilon}} \le C_5^n$$
for $\mathrm{Re}(s) \ge \frac{5+2\eta}{12}$ by (3.1) and (3.3), where $\varepsilon = \frac14 - \frac{\eta}{2} > 0$ and
$$C_5 := \max\Bigg\{d,\ \sum_p\frac{\sum_{j=1}^{J}\sum_{i=1}^{d}|\alpha_{j,i}(p)|^2}{p^{1+\varepsilon}}\Bigg\}.$$
Note that the last $p$-sum is convergent by assumption A3 and a partial summation. The second sum on the right hand side of (3.10) is
$$\sum_p\Bigg(\sum_{k=2}^{\infty}\frac{\max_{j\le J}|\beta_{L_j}(p^k)|}{p^{k\,\mathrm{Re}(s)}}\Bigg)^n \le \sum_p\Bigg(\sum_{k=2}^{\infty}\frac{\max_{j\le J}\sum_{i=1}^{d}|\alpha_{j,i}(p)|^2}{k\,p^{k\,\mathrm{Re}(s)-(k-2)\eta}}\Bigg)^n \le \sum_p\Bigg(\frac{\max_{j\le J}\sum_{i=1}^{d}|\alpha_{j,i}(p)|^2}{p^{2\,\mathrm{Re}(s)}}\cdot\frac12\cdot\frac{1}{1-\frac{1}{p^{\mathrm{Re}(s)-\eta}}}\Bigg)^n$$
$$\le \Bigg(\frac12\cdot\frac{1}{1-\frac{1}{2^{\frac{5}{12}(1-2\eta)}}}\Bigg)^n\sum_p\frac{(dp^{2\eta})^{n-1}\max_{j\le J}\sum_{i=1}^{d}|\alpha_{j,i}(p)|^2}{p^{2n\,\mathrm{Re}(s)}} \le \Bigg(\frac12\cdot\frac{1}{1-\frac{1}{2^{\frac{5}{12}(1-2\eta)}}}\Bigg)^n d^{n-1}\sum_p\frac{\sum_{j=1}^{J}\sum_{i=1}^{d}|\alpha_{j,i}(p)|^2}{p^{1+6\varepsilon}} \le C_6^n$$
for $\mathrm{Re}(s) \ge \frac{5+2\eta}{12}$ by (3.2), where
$$C_6 := \frac12\cdot\frac{1}{1-\frac{1}{2^{\frac{5}{12}(1-2\eta)}}}\max\Bigg\{d,\ \sum_p\frac{\sum_{j=1}^{J}\sum_{i=1}^{d}|\alpha_{j,i}(p)|^2}{p^{1+6\varepsilon}}\Bigg\}.$$
We choose $C_4 = 2(C_5 + C_6)$; then we have
$$|f(s)| \le C_4^n \quad\text{for } \mathrm{Re}(s) \ge \frac{5+2\eta}{12}. \tag{3.11}$$
One can easily see in the above estimations that $f(s)$ is absolutely convergent for $\mathrm{Re}(s) \ge \frac{5+2\eta}{12}$. Let $\varepsilon_1 = \frac12 - \frac{5+2\eta}{12} > 0$. Since
$$f(\sigma_T) - f(\tfrac12) = \int_{1/2}^{\sigma_T}f'(u)\,du = \int_{1/2}^{\sigma_T}\frac{1}{2\pi i}\oint_{|z-u|=\varepsilon_1}\frac{f(z)}{(z-u)^2}\,dz\,du,$$
we obtain
$$|f(\sigma_T) - f(\tfrac12)| \le \Big(\sigma_T - \tfrac12\Big)\frac{1}{\varepsilon_1}\sup_{\mathrm{Re}(z)\ge\frac12-\varepsilon_1}|f(z)| \le \frac{C_4^n}{\varepsilon_1(\log T)^{\theta}} \tag{3.12}$$
by (3.11). Let $C_3 = C_4/\varepsilon_1 > C_4$; then (3.11) and (3.12) imply both inequalities in the lemma. □

Therefore, by Lemma 3.1, (3.5) and Stirling's formula we have
$$|I_{n,\sigma}(z)| \le \|z\|^n(\pi C_3)^n\sum_{m\le n/2}\frac{1}{m}\sum_{K(k_1+\cdots+k_m+l_1+\cdots+l_m)=n}\frac{1}{k_1!\cdots k_m!\,l_1!\cdots l_m!} = \|z\|^n(\pi C_3)^n\sum_{m\le n/2}\frac{1}{m}\frac{(2mJ)^n}{n!} \le \|z\|^n(J\pi C_3)^n\frac{n^n}{n!} \le \|z\|^n(J\pi C_3 e)^n$$
for $\sigma \ge \frac{5+2\eta}{12}$ and $n > 2$. Similarly, we have
$$|I_{n,\sigma_T}(z) - I_{n,1/2}(z)| \le \frac{\|z\|^n(J\pi C_3 e)^n}{(\log T)^{\theta}} \quad\text{for } n > 2.$$
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' Therefore, Lemma 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content='4 holds with a constant (3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content='13) C = JπC3e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content='3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' Proof of Lemma 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content='5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' We first consider G(x, y) in (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content='22) as a function in complex variables x1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' , xJ, y1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' .' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' , yJ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' We replace xj by xj 2πi and yj by yj 2πi for j = 1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' , J in (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content='22), then we obtain that (3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content='14) � k,l bk,lxkyl = ∞ � r=0 1 r!' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' � ∞ � n=2 In(z)(2πi)−n �r .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' 16 YOONBOK LEE Now we consider x1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' , xJ, y1, .' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' , yJ as real variables.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' By (3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content='5) and (3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content='7) we have In,σ(z)(2πi)−n = In,σ(z)(2πi)−n, which implies that In,σ(z)(2πi)−n is a polynomial in real variables x1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' , xJ, y1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' , yJ with real coefficients.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' Since In(z)(2πi)−n is also a homogeneous polynomial in x1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' , xJ, y1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' , yJ of degree n with real coefficients, we obtain by comparing coefficients in (3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content='14) that bk,l ∈ R, b0,0 = 1 and bk,l = 0 for K(k + l) = 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' It remains to prove the inequality (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content='23).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' Again we consider G(x, y) defined in (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content='22) as an analytic function in complex variables x1, .' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' , xJ, y1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' , yJ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' Assume that sup{|x1|, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' , |xJ|, |y1|, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' , |yJ|} ≤ δ2 2 √ J .' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' Then we see that |I2(z)| ≤ J � j1,j2=1 |Cj1,j2| δ2 2 4J ≤ 1 16 by (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content='17) and (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content='20).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' For n ≥ 3 we have |In(z)| ≤ �δ2πC3 √ J �n � m≤n/2 1 m � K(k1+···+km+l1+···+lm)=n 1 k1!' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' · · ·km!' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content='l1!' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' · · ·lm!' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' ≤ (δ2 √ JπC3e)n ≤ (δ2C)n ≤ 2−n by (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content='18), (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content='20), (3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content='5), (3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content='13) and Lemma 3.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' Thus, |G(x, y)| ≤ ∞ � r=0 1 r!' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' � ∞ � n=2 |In(z)| �r ≤ ∞ � r=0 1 r!' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content='2−r = √e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' Let 0 < δ3 2π = δ′ 3 < δ2 2 √ J .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' Since bk,l = 1 (2πi)K(k+l)+2J � |x1|=δ′ 3 · · � |xJ|=δ′ 3 � |y1|=δ′ 3 · · � |yJ|=δ′ 3 G(x, y) xkyl dyJ yJ · · dy1 y1 dxJ xJ · · dx1 x1 by Cauchy’s integral formula, we obtain |bk,l| ≤ √e (2πδ′ 3)K(k+l) = √e δK(k+l) 3 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' Acknowledgements This work has been supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (No.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' 2019R1F1A1050795).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' CENTRAL LIMIT THEOREM OF L-FUNCTIONS 17 References [1] H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' Cram´er, Random variables and probability distributions, 3rd edition, Cambridge University Press, 1970.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' [2] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' Ha and Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' Lee, The a-values of the Riemann zeta function near the critical line, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' Anal.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' Appl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' 464, (2018), 838–863.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' [3] D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' Hejhal, On Euler products and multi-variate Gaussians, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' Acad.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' Sci.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' Paris, Ser.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' I 337 (2003), 223–226.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' [4] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' Lamzouri and Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' Lee, The number of zeros of linear combinations of L-functions near the critical line, to appear J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' Anal.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' Preprint available at arXiv:2010.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content='10490.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' [5] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' Lee, An asymptotic expansion of Selberg’s central limit theorem near the critical line, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' Number Theory 236 (2022), 323–333.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' [6] M.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' Radziwi�l�l and K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' Soundararajan, Selberg’s central limit theorem for log |ζ( 1 2 + it)|, Enseign.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' 63 (2017), 1–19.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' [7] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' Selberg, Old and new conjectures and results about a class of Dirichlet series, Bombieri, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' (ed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=') et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=', Proceedings of the Amalfi conference on analytic number theory, held at Maiori, Amalfi, Italy, from 25 to 29 September, 1989.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' Salerno: Universit´a di Salerno, 367–385 (1992) = Collected Papers, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' II, 47–63, Springer, 1991.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' [8] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content='M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' Tsang, The distribution of the values of the Riemann zeta-function, ProQuest LLC, Ann Arbor, MI, 1984, Thesis (Ph.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content='D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=')-Princeton University.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content=' Department of Mathematics, Research Institute of Basic Sciences, Incheon Na- tional University, 119 Academy-ro, Yeonsu-gu, Incheon, 22012, Korea Email address: leeyb@inu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content='ac.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content='kr, leeyb131@gmail.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} +page_content='com' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NAzT4oBgHgl3EQf9v7Q/content/2301.01925v1.pdf'} diff --git a/5NE6T4oBgHgl3EQflhFb/content/tmp_files/2301.06150v1.pdf.txt b/5NE6T4oBgHgl3EQflhFb/content/tmp_files/2301.06150v1.pdf.txt new file mode 100644 index 0000000000000000000000000000000000000000..b9d08467bb71812807c035e52c729f16f8e38d1c --- /dev/null +++ b/5NE6T4oBgHgl3EQflhFb/content/tmp_files/2301.06150v1.pdf.txt @@ -0,0 +1,4251 @@ +1 +Minimizing Age of Incorrect Information over +a Channel with Random Delay +Yutao Chen and Anthony Ephremides +Department of Electrical and Computer Engineering, University of Maryland +Abstract +We investigate a transmitter-receiver pair in a slotted-time system. The transmitter observes a +dynamic source and sends updates to a remote receiver through a communication channel. We assume +that the channel is error-free but suffers a random delay. We consider two more practical cases to +facilitate the analysis. In the first case, the update is guaranteed to be delivered within a certain number +of time slots. In the second case, once the transmission time exceeds a predetermined value, the update +is immediately discarded, leaving the channel free for a new transmission on demand. The receiver +will maintain an estimate of the current state of the dynamic source using the received updates. In this +paper, we adopt the Age of Incorrect Information (AoII) as the performance metric and investigate the +problem of optimizing the transmitter’s action in each time slot to minimize AoII. We first characterize +the optimization problem using the Markov decision process and investigate the performance of the +threshold policy, under which the transmitter transmits updates only when the AoII exceeds the threshold +τ. 
By delving into the characteristics of the system evolution, we precisely compute the expected AoII achieved by the threshold policy using a Markov chain. Then, we prove that the optimal policy exists and provide a computable relative value iteration algorithm to estimate the optimal policy. Next, by leveraging the policy improvement theorem, we prove that, under an easy-to-verify condition, the optimal policy is the threshold policy with τ = 1. Finally, numerical results are laid out to highlight the performance of the optimal policy.

January 18, 2023. DRAFT. arXiv:2301.06150v1 [cs.IT] 15 Jan 2023

I. INTRODUCTION

Deep learning (DL) and classical machine learning have no place here; communication systems are used in all aspects of our lives and play an increasingly important role. Consequently, communication systems are being asked to play more roles than just disseminating words, sounds, and images. With the widespread deployment of communication systems and the continuous expansion of their purposes, we must demand higher performance from them. Meanwhile, we wonder whether traditional metrics such as throughput and latency can continue to meet such demands. One of the major drawbacks of such traditional metrics is that they treat every update equally, ignoring the fact that not every update provides the receiver with equally important information for the purpose of the communication. Because of this, researchers seek to reconsider existing communication paradigms and look for new ones, among which semantic communication is an important attempt. The semantics of information is formally defined in [1] as the significance of the messages relative to the purpose of the data exchange. Then, semantic communication is regarded as "the provisioning of the right piece of information to the right point of computation (or actuation) at the right point in time".
Different from the classical metrics in data communication, semantic metrics incorporate the freshness of information, which is becoming increasingly important as real-time monitoring systems become ubiquitous in modern society. Typically in such systems, a monitor observes one or more events simultaneously and transmits updates so that one or more receivers at a distance can maintain good knowledge of the events. Therefore, the timeliness of information is often one of the most important performance indicators. The Age of Information (AoI), first introduced in [1], is one of the most successful examples of capturing information freshness. AoI tracks the time elapsed since the generation of the last received update, which results in different treatments for different updates. For example, when an update is significantly fresher, it is more important and worth the extra resources to transmit. Let V(t) be the generation time of the last update received up to time t. Then, AoI at time t is defined by ∆AoI(t) = t − V(t). Since its introduction, AoI has attracted extensive attention [2]–[5]. However, AoI assumes that the age of each update always increases over time, ignoring the information content of the update. Such neglect is not always desirable. For example, in a remote monitoring system, an update that provides the remote monitor with accurate information about the source process of interest should be considered fresh, even if the update was generated earlier. This limitation leads to AoI's poor performance in remote estimation problems. Suppose, for example, that we want to estimate a rapidly changing event remotely. In this case, a small AoI does not necessarily mean that the receiver has accurate information about the event. Likewise, the receiver can make relatively accurate estimates without timely information when the event changes slowly.
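The definition ∆AoI(t) = t − V(t) can be traced slot by slot. The following sketch is purely illustrative (the delivery times used in the example are our own made-up values, not from the paper): between receptions the age grows by one per slot, and each reception resets it to the age of the delivered update.

```python
# Illustrative sketch (not from the paper): tracking AoI in a slotted system.
# `deliveries` maps a reception slot t to the generation slot V(t) of the
# update received at t. We assume the receiver starts synchronized (age 0).

def aoi_trace(horizon, deliveries, initial_age=0):
    """Return [AoI(0), ..., AoI(horizon - 1)] with AoI(t) = t - V(t)."""
    trace = []
    age = initial_age
    for t in range(horizon):
        if t in deliveries:          # an update arrives at slot t
            age = t - deliveries[t]  # reset to t - V(t)
        trace.append(age)
        age += 1                     # information ages by one slot
    return trace

# An update generated at slot 1 arrives at slot 3; another (generated at
# slot 5) arrives at slot 6.
print(aoi_trace(8, {3: 1, 6: 5}))  # → [0, 1, 2, 2, 3, 4, 1, 2]
```

Note the sawtooth shape of the trace: at slot 3 the age resets to 3 − 1 = 2 rather than to zero, since the delivered update was already two slots old on arrival.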
Inspired by the above limitation, the Age of Incorrect Information (AoII) was introduced in [6]; it combines the timeliness of updates with the information content they convey. More specifically, AoII combines the degree of information mismatch between the receiver and the source with the aging process of the mismatched information. According to the definition given in [6], AoII captures the aging process of conflicting information through a time penalty function, which quantifies the time elapsed since the last instant the receiver had perfect information about the source. The mismatch between the receiver's information and the source is captured by an information penalty function, which quantifies the degree of information mismatch between the two. Because of the flexibility of these penalty functions, AoII can be adapted to various systems and communication objectives by choosing different penalty functions.

Since the introduction of AoII, many works have sought to reveal its fundamental nature and its performance in various communication systems. AoII minimization under resource constraints was investigated first. In [6], the authors investigate the minimization of AoII when there is a limit on the average number of transmissions allowed. Then, the authors extend the results to the case of a generic time penalty function in [7]. However, in both papers, the measure of information mismatch is binary, either true or false. In [8], the authors investigate a similar system setting, but the AoII considers the quantified information mismatch between the source and the receiver. AoII in the context of scheduling is another critical problem. In scheduling problems, a base station observes multiple events and needs to select a subset of the users to update. Under these general settings, [9] investigates the problem of minimizing AoII when the channel state information is available and the time penalty function is generic.
The authors of [10] consider a similar system, but the base station cannot know the states of the events before the transmission decision is made. In real-life applications, we usually have no knowledge of the statistical model of the source process. Therefore, the authors of [11] investigate the problem of minimizing AoII for an unknown Markovian source. The relationship between the estimation error and AoII is studied in [12]. Moreover, a variant of AoII, the Age of Incorrect Estimates, is introduced and studied in [13]. In real-life applications, communication channels usually suffer random delays due to various influences. Under this system setup, the authors of [14] compare the performance of AoII, AoI, and real-time error through extensive numerical simulations. This paper considers a similar system setup, but we investigate the problem from a theoretical perspective. We calculate exactly the expected AoII achieved by several canonical policies, which enables us to solve the problem of minimizing AoII over a channel with random delay. Communication channels with random delay have also been studied in the context of remote estimation and AoI [15]–[18]. However, the problem considered in this paper is very different, as AoII combines the age-based and error-based metric frameworks.

The main contributions of this paper can be summarized as follows. 1) We investigate the AoII minimization problem in a system where the communication channel suffers a random delay and characterize the optimization problem using a Markov decision process. 2) We analyze the characteristics of the threshold policy, under which the transmitter initiates a transmission only when the AoII exceeds the threshold, and calculate exactly the expected AoII achieved by the threshold policy. 3) We prove the existence of the optimal policy and introduce a computable value iteration algorithm to estimate the optimal policy.
4) We theoretically find the optimal policy using the policy improvement theorem.

The remainder of this paper is organized as follows. We introduce the system model and the optimization problem in Section II. Section III characterizes the problem using a Markov decision process. In Section IV, we theoretically analyze and calculate the expected AoII achieved by the threshold policy. Then, we show the existence of the optimal policy, provide the value iteration algorithm to estimate the optimal policy, and theoretically find the optimal policy using the policy improvement theorem in Section V. Finally, Section VI concludes the paper with numerical results that highlight the performance of the optimal policy.

II. SYSTEM OVERVIEW

A. System Model

We consider a slotted-time system in which a transmitter observes a dynamic source and must decide when to send updates to a remote receiver so that the receiver can maintain good knowledge of the current state of the dynamic source. The dynamic source is modeled by a two-state symmetric Markov chain with state transition probability p. The transmitter receives an update from the dynamic source at the beginning of each time slot. The update at time slot k is denoted by Xk. The old update is discarded upon the arrival of a new one. The transmitter then decides whether to transmit the new update based on the current system status. When the channel is idle, the transmitter chooses between transmitting the new update and staying idle. When the channel is busy, the transmitter can do nothing but stay idle. Updates are transmitted through an error-free communication channel that suffers a random delay. In other words, an update is not corrupted during transmission, but each transmission takes a random amount of time T ∈ N∗. We denote by pt ≜ Pr(T = t) the probability mass function (PMF) of T and assume the transmission times are independent and identically distributed.
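The source and delay model just described can be sampled directly. A small sketch follows; the helper name and defaults are ours, and X_0 = 0 is assumed:

```python
import random

def sample_source_and_delays(p, delay_pmf, horizon, seed=0):
    """Sample X_0, ..., X_{horizon-1} for the two-state symmetric Markov
    source (the state flips with probability p each slot; X_0 = 0 assumed)
    together with i.i.d. transmission delays T drawn from delay_pmf, a
    dict mapping t -> Pr(T = t). Illustrative helper, not from the paper."""
    rng = random.Random(seed)
    xs = [0]
    for _ in range(horizon - 1):
        xs.append(1 - xs[-1] if rng.random() < p else xs[-1])
    ts, weights = zip(*sorted(delay_pmf.items()))
    delays = rng.choices(ts, weights=weights, k=horizon)
    return xs, delays
```

Setting p = 0 freezes the source, while p = 1 makes it flip deterministically every slot, which is a convenient sanity check.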
When a transmission finishes, the communication channel is immediately available for the subsequent transmission.

Fig. 1: An illustration of the system model, where Xk and X̂k are the state of the dynamic source and the receiver's estimate at time slot k, respectively.

The receiver maintains an estimate of the current state of the dynamic source and modifies its estimate every time a new update is received. We denote by X̂k the receiver's estimate at time slot k. According to [18], the best estimator when p ≤ 1/2 is the last received update. When p > 1/2, the optimal estimator depends on the realization of the transmission time. In this paper, we consider only the case of p ≤ 1/2. Hence, the receiver uses the last received update as its estimate. For the case of p > 1/2, the results can be extended using the corresponding best estimator. The receiver uses ACK/NACK packets to inform the transmitter of its reception of a new update. As assumed in [6], the transmitter receives the ACK/NACK packets reliably and instantaneously because these packets are generally very small compared to the size of the status updates. When an ACK is received, the transmitter knows that the receiver's estimate has changed to the last sent update. When a NACK is received, the transmitter knows that the receiver's estimate has not changed. In this way, the transmitter always knows the current estimate on the receiver side.

An illustration of the system model is shown in Fig. 1. At the beginning of time slot k, the transmitter receives the update Xk from the dynamic source. Then, the transmitter decides whether to transmit this update based on the system status. When the transmitter decides not to start a transmission, it stays idle.
Otherwise, the transmitter transmits the update through the communication channel, where the transmission takes a random amount of time. Thus, the update received by the receiver is delayed by several time slots (i.e., the receiver obtains Xk−T). The receiver then modifies its estimate X̂k based on the received update and sends an ACK packet to inform the transmitter of its reception of the update.

B. Age of Incorrect Information

The system adopts the Age of Incorrect Information (AoII) as the performance metric. We first define Uk as the last time slot up to time slot k in which the receiver's estimate is correct. Mathematically,

$$U_k \triangleq \max\{h : h \le k,\ X_h = \hat{X}_h\}.$$

Then, in a slotted-time system, the AoII at time slot k can be written as

$$\Delta_{AoII}(X_k, \hat{X}_k, k) = \sum_{h=U_k+1}^{k} g(X_h, \hat{X}_h)\,F(h - U_k), \qquad (1)$$

where g(Xk, X̂k) is the information penalty function and F(k) ≜ f(k) − f(k − 1), with f(k) the time penalty function. In this paper, we choose g(Xk, X̂k) = |Xk − X̂k| and f(k) = k. Hence, F(k) = 1 and g(Xk, X̂k) ∈ {0, 1}, as the dynamic source has two states. Then, equation (1) simplifies to

$$\Delta_{AoII}(X_k, \hat{X}_k, k) = k - U_k \triangleq \Delta_k.$$

We can easily conclude from the simplified expression that, under the chosen penalty functions, AoII increases at a rate of 1 per time slot when the receiver's estimate is incorrect; otherwise, AoII is 0. Next, we characterize the evolution of ∆k. To this end, we divide the evolution into the following cases.

• When X_{k+1} = X̂_{k+1}, we have U_{k+1} = k + 1. Then, by definition, ∆_{k+1} = 0.
• When X_{k+1} ≠ X̂_{k+1}, we have U_{k+1} = U_k. Then, by definition, ∆_{k+1} = k + 1 − U_k = ∆_k + 1.

Combining the two cases, we have

$$\Delta_{k+1} = \mathbb{1}\{X_{k+1} \neq \hat{X}_{k+1}\}(\Delta_k + 1), \qquad (2)$$

where 1{A} is the indicator function, whose value is one when event A occurs and zero otherwise. A sample path of ∆k is shown in Fig. 2. Now that the evolution of AoII has been clarified, we further discuss the system's evolution.

C.
System Dynamics

In this subsection, we characterize the system dynamics, which play a key role in later sections. We note that the system's status at the beginning of time slot k can be fully captured by the triplet sk ≜ (∆k, tk, ik), where tk ∈ N0 indicates the time the current transmission has been in progress; we define tk = 0 if there is no transmission in progress. ik ∈ {−1, 0, 1} indicates the state of the channel: ik = −1 when the channel is idle, ik = 0 when the channel is busy and the transmitted update is the same as the receiver's current estimate, and ik = 1 when the transmitted update differs from the receiver's current estimate.

Fig. 2: A sample path of ∆k, where Ti and Di are the transmission start time and the delivery time of the i-th update, respectively. At T1, the transmitted update is X3. Note that the transmission decisions in the plot are taken randomly.

Remark 1. According to the definitions of tk and ik, ik = −1 if and only if tk = 0. In this case, the channel is idle.

Then, characterizing the system dynamics is equivalent to characterizing the value of s_{k+1} from sk and the transmitter's action. We denote the transmitter's decision by ak ∈ {0, 1}, with ak = 0 when the transmitter decides not to initiate a transmission and ak = 1 otherwise. Hence, the system dynamics can be fully characterized by P_{sk,s_{k+1}}(ak), defined as the probability that action ak at state sk leads to s_{k+1}. We will revisit P_{sk,s_{k+1}}(ak) with an in-depth discussion in later sections.

D. Problem Formulation

We define a policy φ as one that specifies the transmitter's decision in each time slot.
This paper aims to find the policy that minimizes the expected AoII of the system. Mathematically, the problem can be formulated as the following optimization problem:

$$\arg\min_{\varphi \in \Phi}\ \lim_{K \to \infty} \frac{1}{K}\, \mathbb{E}_{\varphi}\left[\sum_{k=0}^{K-1} \Delta_k\right], \qquad (3)$$

where E_φ is the conditional expectation, given that policy φ is adopted, and Φ is the set of all admissible policies.

Definition 1 (Optimal policy). A policy is said to be optimal if it yields the minimal expected AoII.

In the next section, we characterize the problem reported in (3) using a Markov decision process (MDP).

III. MARKOV DECISION PROCESS CHARACTERIZATION

The minimization problem reported in (3) can be characterized by an infinite-horizon average-cost MDP M, which consists of the following components.

• The state space S. The state s = (∆, t, i) is the triplet defined in Section II-C without the time stamp. For the remainder of this paper, we use s and (∆, t, i) to represent the state interchangeably.
• The action space A. When i = −1, the feasible actions are a ∈ {0, 1}, where a = 0 if the transmitter decides not to initiate a new transmission and a = 1 otherwise. When i ≠ −1, the only feasible action is a = 0.
• The state transition probabilities P. The probability that action a at state s leads to state s′ is denoted by P_{s,s′}(a), whose value is discussed in the following subsection.
• The immediate cost C. The immediate cost for being at state s is C(s) = ∆.

Let V(s) be the value function of state s ∈ S. It is well known that the value function satisfies the Bellman equation [19]:

$$V(s) + \theta = \min_{a \in \mathcal{A}}\left\{C(s) + \sum_{s' \in \mathcal{S}} P_{s,s'}(a)\,V(s')\right\}, \quad s \in \mathcal{S}, \qquad (4)$$

where θ is the expected AoII achieved by the optimal policy. We will write V(s) as V(∆, t, i) in some parts of this paper to better distinguish between states. The state transition probabilities are essential for solving the Bellman equation. Hence, we delve into P_{s,s′}(a) in the following subsection.

A.
State Transition Probability

We recall that P_{s,s′}(a) is the probability that action a at state s leads to state s′. To make the derivation easier to follow, we first characterize separately the transitions of the three elements that make up the state s.

• ∆′ can be 0 or ∆ + 1, depending on whether the receiver's estimate at state s′ is correct. The specific evolution is given by (2).
• t′ can be t + 1 or 0, depending on whether there is a transmission in progress at state s′.
• i′ = −1 if and only if t′ = 0. Otherwise, i′ can be 0 or 1, depending on whether the transmitted update is the same as the receiver's estimate at state s′.

With the individual transitions in hand, we proceed to their combined transitions and the corresponding probabilities. To this end, we define Pr(T > t + 1 | t) as the probability that the current transmission takes more than t + 1 time slots, given that it has been in progress for t time slots. Hence,

$$\Pr(T > t+1 \mid t) = \frac{1 - \Pr(T \le t+1)}{\Pr(T > t)} = \frac{1 - P_{t+1}}{1 - P_t},$$

where $P_t \triangleq \sum_{k=1}^{t} p_k$. Leveraging the individual transitions and Pr(T > t + 1 | t), P_{s,s′}(a) can be obtained easily. For the sake of space, the complete state transition probabilities are provided in Appendix A.

We note that we have so far imposed no restrictions on the update transmission time, which would make the theoretical analysis very difficult and could also lead to long channel occupancy by a single update. Therefore, to ease the theoretical analysis and to be closer to practice, we consider the following two independent assumptions.¹

• Assumption 1: We assume that the update is always delivered and that the transmission lasts at most t_max time slots. More precisely, we assume 1 ≤ T ≤ t_max and

$$\sum_{t=1}^{t_{max}} p_t = 1, \qquad p_t \ge 0,\ 1 \le t \le t_{max}.$$

In practice, we can make the probability of the transmission time exceeding t_max negligible by choosing a sufficiently large t_max.
• Assumption 2: We assume the transmission can last at most t_max time slots. At the end of the t_max-th time slot, the update is discarded if not delivered, and the channel immediately becomes available for a new transmission. We define $p_{t^+} \triangleq \sum_{t=t_{max}+1}^{\infty} p_t$ as the probability that the update is discarded. In practice, similar techniques, such as time-to-live (TTL) [20], are used to prevent an update from occupying the channel for too long.

¹The results presented in this paper apply to both assumptions unless stated otherwise.

Remark 2. t_max is a predetermined system parameter and is not a parameter to be optimized. When t_max = 1, the system reduces to the one considered in [6], in which the optimal policy is to transmit a new update whenever possible. Therefore, in the rest of this paper, we focus on the case of t_max > 1.

Under both assumptions, the transmission lasts at most t_max time slots, and the channel is immediately available for a new transmission when the current transmission finishes. Hence, the state space S is reduced, as t is now bounded by 1 ≤ t ≤ t_max − 1. Moreover, the state transition probabilities in Appendix A are adjusted as follows.

• Under Assumption 1, updates are bound to be delivered after t_max time slots. Hence, Pr(T > t + 1 | t) = 0 for t ≥ t_max − 1.
• Under Assumption 2, updates are discarded at the end of the t_max-th time slot if not delivered. Hence, s′ = (∆′, t_max, i′) is replaced by s′ = (∆′, 0, −1).

Having clarified the state transition probabilities, we evaluate a canonical policy in terms of its achieved expected AoII in the next section.

IV. POLICY PERFORMANCE ANALYSIS

As proved in [6]–[8], the AoII-optimal policy often has a threshold structure. Hence, we consider the threshold policy.

Definition 2 (Threshold policy).
Under threshold policy τ, the transmitter initiates a transmission only when the current AoII is no less than the threshold τ ∈ N0 and the channel is idle. Let aτ(s) be the action at state s suggested by threshold policy τ. Then,

$$a_\tau(s) = \mathbb{1}\{\Delta \ge \tau \text{ and } i = -1\}.$$

Remark 3. We define τ ≜ ∞ as the policy under which the transmitter never initiates any transmission.

We note that the system dynamics under a threshold policy can be characterized by a discrete-time Markov chain (DTMC). Without loss of generality, we assume the DTMC starts at state (0, 0, −1). Then, the state space S^{MC} of the Markov chain consists of all states accessible from state (0, 0, −1). Since state (0, 0, −1) is positive recurrent and communicates with each state s ∈ S^{MC}, the stationary distribution exists. Let π_s be the steady-state probability of state s. Then, π_s satisfies the following balance equation:

$$\pi_s = \sum_{s' \in \mathcal{S}^{MC}} P_{s',s}(a)\,\pi_{s'}, \qquad s \in \mathcal{S}^{MC},$$

where P_{s′,s}(a) is the single-step state transition probability as defined in Section III, and the action a depends on the threshold policy. Then, the first step in calculating the expected AoII achieved by the threshold policy is to calculate the stationary distribution of the induced DTMC. However, a difficulty arises because the state space S^{MC} is infinite and intertwined. To simplify the state transitions, we recall that the transmitter can only stay idle (i.e., a = 0) when the channel is busy. Let S^{MC}_{−1} = {s = (∆, t, i) : i ≠ −1} be the set of states where the channel is busy. Then, for s′ ∈ S^{MC}_{−1}, P_{s′,s}(a) = P_{s′,s}(0) and is independent of the threshold policy. Hence, for any threshold policy and each s ∈ S \ S^{MC}_{−1}, we can repeatedly replace π_{s′}, where s′ ∈ S^{MC}_{−1}, with the corresponding balance equation until we obtain the following equation.
$$\pi_s = \sum_{s' \in \mathcal{S} \setminus \mathcal{S}^{MC}_{-1}} P_{\Delta',\Delta}(a)\,\pi_{s'}, \qquad s \in \mathcal{S} \setminus \mathcal{S}^{MC}_{-1}, \qquad (5)$$

where P_{∆′,∆}(a) is the multi-step state transition probability from state s′ = (∆′, 0, −1) to state s = (∆, 0, −1) under action a. For simplicity, we write (5) as

$$\pi_\Delta = \sum_{\Delta' \ge 0} P_{\Delta',\Delta}(a)\,\pi_{\Delta'}, \qquad \Delta \ge 0. \qquad (6)$$

As we will see in the following subsections, π_∆ is sufficient to calculate the expected AoII obtained by any threshold policy. In the next subsection, we derive the expression of P_{∆,∆′}(a).

A. Multi-step State Transition Probability

We start with the case of a = 0. In this case, no update is transmitted, and P_{∆,∆′}(0) is independent of the transmission delay. Then, according to Appendix A,

$$P_{0,\Delta'}(0) = \begin{cases} 1 - p & \Delta' = 0, \\ p & \Delta' = 1, \end{cases}$$

and, for ∆ > 0,

$$P_{\Delta,\Delta'}(0) = \begin{cases} p & \Delta' = 0, \\ 1 - p & \Delta' = \Delta + 1. \end{cases}$$

In the sequel, we focus on the case of a = 1. We define P^t_{∆,∆′}(a) as the probability that action a at state s = (∆, 0, −1) leads to state s′ = (∆′, 0, −1), given that the transmission takes t time slots. Then, under Assumption 1,

$$P_{\Delta,\Delta'}(1) = \sum_{t=1}^{t_{max}} p_t\, P^t_{\Delta,\Delta'}(1).$$

Hence, it suffices to obtain the expressions of P^t_{∆,∆′}(1). To this end, we define p(t) as the probability that the dynamic source remains in the same state after t time slots. Since the Markov chain is symmetric, p(t) is independent of the state and can be calculated by

$$p(t) = \left[\begin{pmatrix} 1-p & p \\ p & 1-p \end{pmatrix}^{t}\right]_{11},$$

where the subscripts indicate the row and column of the target entry. For consistency of notation, we define p(0) ≜ 1. Then, we have the following lemma.

Lemma 1.
Under Assumption 1,

$$P_{\Delta,\Delta'}(1) = \sum_{t=1}^{t_{max}} p_t\, P^t_{\Delta,\Delta'}(1), \qquad (7)$$

where

$$P^t_{0,\Delta'}(1) = \begin{cases} p(t) & \Delta' = 0, \\ p(t-k)\,p\,(1-p)^{k-1} & 1 \le \Delta' = k \le t, \\ 0 & \text{otherwise}, \end{cases}$$

and, for ∆ > 0,

$$P^t_{\Delta,\Delta'}(1) = \begin{cases} p(t) & \Delta' = 0, \\ (1 - p(t-1))(1-p) & \Delta' = 1, \\ (1 - p(t-k))\,p^2\,(1-p)^{k-2} & 2 \le \Delta' = k \le t-1, \\ p(1-p)^{t-1} & \Delta' = \Delta + t, \\ 0 & \text{otherwise}. \end{cases}$$

Under Assumption 1, equation (7) can be written equivalently as

$$P_{\Delta,\Delta'}(1) = \begin{cases} \sum_{t=\Delta'}^{t_{max}} p_t\, P^t_{\Delta,\Delta'}(1) & 0 \le \Delta' \le t_{max}-1,\ \Delta \ge \Delta', \\ \sum_{t=\Delta'}^{t_{max}} p_t\, P^t_{\Delta,\Delta'}(1) + p_{t'}\, P^{t'}_{\Delta,\Delta'}(1) & 0 \le \Delta' \le t_{max}-1,\ \Delta < \Delta', \\ p_{t'}\, P^{t'}_{\Delta,\Delta'}(1) & \Delta' \ge t_{max}, \end{cases}$$

where t′ ≜ ∆′ − ∆ and P^{t′}_{∆,∆′}(1) ≜ 0 when t′ ≤ 0 or t′ > t_max. Meanwhile, P_{∆,∆′}(1) possesses the following properties.

1) P_{∆,∆′}(1) is independent of ∆ when 0 ≤ ∆′ ≤ t_max − 1 and ∆ ≥ ∆′.
2) P_{∆,∆′}(1) = P_{∆+δ,∆′+δ}(1) when ∆′ ≥ t_max and ∆ ≥ 0 for any δ ≥ 1.
3) P_{∆,∆′}(1) = 0 when ∆′ > ∆ + t_max or when t_max − 1 < ∆′ < ∆ + 1.

Proof. The expression of P^t_{∆,∆′}(1) is obtained by analyzing the system dynamics. The complete proof can be found in Appendix B.

The state transition probabilities under Assumption 2 can be obtained similarly. To this end, we define P^{t+}_{∆,∆′}(a) as the probability that action a at state s = (∆, 0, −1) results in state s′ = (∆′, 0, −1), given that the transmission is terminated. Then, we have the following lemma.

Lemma 2.
Under Assumption 2,

$$P_{\Delta,\Delta'}(1) = \sum_{t=1}^{t_{max}} p_t\, P^t_{\Delta,\Delta'}(1) + p_{t^+}\, P^{t^+}_{\Delta,\Delta'}(1), \qquad (8)$$

where

$$P^t_{0,\Delta'}(1) = \begin{cases} p(t) & \Delta' = 0, \\ p(t-k)\,p\,(1-p)^{k-1} & 1 \le \Delta' = k \le t, \\ 0 & \text{otherwise}, \end{cases} \qquad P^{t^+}_{0,\Delta'}(1) = P^{t_{max}}_{0,\Delta'}(1),$$

and, for ∆ > 0,

$$P^t_{\Delta,\Delta'}(1) = \begin{cases} p(t) & \Delta' = 0, \\ (1 - p(t-1))(1-p) & \Delta' = 1, \\ (1 - p(t-k))\,p^2\,(1-p)^{k-2} & 2 \le \Delta' = k \le t-1, \\ p(1-p)^{t-1} & \Delta' = \Delta + t, \\ 0 & \text{otherwise}, \end{cases}$$

$$P^{t^+}_{\Delta,\Delta'}(1) = \begin{cases} 1 - p(t_{max}) & \Delta' = 0, \\ (1 - p(t_{max}-k))\,p\,(1-p)^{k-1} & 1 \le \Delta' = k \le t_{max}-1, \\ (1-p)^{t_{max}} & \Delta' = \Delta + t_{max}, \\ 0 & \text{otherwise}. \end{cases}$$

Under Assumption 2, equation (8) can be written equivalently as

$$P_{\Delta,\Delta'}(1) = \begin{cases} \sum_{t=\Delta'}^{t_{max}} p_t\, P^t_{\Delta,\Delta'}(1) + p_{t^+}\, P^{t^+}_{\Delta,\Delta'}(1) & 0 \le \Delta' \le t_{max}-1,\ \Delta \ge \Delta', \\ \sum_{t=\Delta'}^{t_{max}} p_t\, P^t_{\Delta,\Delta'}(1) + p_{t'}\, P^{t'}_{\Delta,\Delta'}(1) + p_{t^+}\, P^{t^+}_{\Delta,\Delta'}(1) & 0 \le \Delta' \le t_{max}-1,\ \Delta < \Delta', \\ p_{t'}\, P^{t'}_{\Delta,\Delta'}(1) + p_{t^+}\, P^{t^+}_{\Delta,\Delta'}(1) & \Delta' \ge t_{max}, \\ 0 & \text{otherwise}. \end{cases}$$

Meanwhile, P_{∆,∆′}(1) possesses the following properties.

1) P_{∆,∆′}(1) is independent of ∆ when 0 ≤ ∆′ ≤ t_max − 1 and ∆ ≥ max{1, ∆′}.
2) P_{∆,∆′}(1) = P_{∆+δ,∆′+δ}(1) when ∆′ ≥ t_max and ∆ > 0 for any δ ≥ 1.
3) P_{∆,∆′}(1) = 0 when ∆′ > ∆ + t_max or when t_max − 1 < ∆′ < ∆ + 1.

Proof. The proof follows steps similar to those in the proof of Lemma 1. The complete proof can be found in Appendix C.

With the expressions and properties of P_{∆,∆′}(a) under both assumptions clarified, we solve for π_∆ in the next subsection.

B. Stationary Distribution

Let ET be the expected transmission time of an update. Since the channel remains idle when no transmission is initiated and the expected transmission time of an update is ET, π_∆ satisfies the following equation.
$$\sum_{\Delta=0}^{\tau-1} \pi_\Delta + ET \sum_{\Delta=\tau}^{\infty} \pi_\Delta = 1, \qquad (9)$$

where $ET = \sum_{t=1}^{t_{max}} t\,p_t$ under Assumption 1 and $ET = \sum_{t=1}^{t_{max}} t\,p_t + t_{max}\,p_{t^+}$ under Assumption 2. We note that there are still infinitely many π_∆ to calculate. To overcome this, we recall that, under the threshold policy, the suggested action is a = 1 for every state (∆, 0, −1) with ∆ ≥ τ. Hence, we define $\Pi \triangleq \sum_{\Delta=\omega}^{\infty} \pi_\Delta$, where ω ≜ t_max + τ + 1. As we will see in the following subsections, Π and π_∆ for 0 ≤ ∆ < ω − 1 are sufficient for calculating the expected AoII achieved by the threshold policy. With Π in mind, we have the following theorem.

Theorem 1. For 0 < τ < ∞, Π and π_∆ for 0 ≤ ∆ < ω − 1 are the solution to the following system of linear equations:

$$\pi_0 = (1-p)\pi_0 + p\sum_{i=1}^{\tau-1}\pi_i + P_{1,0}(1)\left(\sum_{i=\tau}^{\omega-1}\pi_i + \Pi\right),$$

$$\pi_1 = p\,\pi_0 + P_{1,1}(1)\left(\sum_{i=\tau}^{\omega-1}\pi_i + \Pi\right),$$

$$\Pi = \sum_{i=\tau+1}^{\omega-1}\left(\sum_{k=\tau+1}^{i} P_{i,t_{max}+k}(1)\right)\pi_i + \left(\sum_{i=1}^{t_{max}} P_{\omega,\omega+i}(1)\right)\Pi,$$

$$\sum_{i=0}^{\tau-1}\pi_i + ET\left(\sum_{i=\tau}^{\omega-1}\pi_i + \Pi\right) = 1.$$

For each 2 ≤ ∆ ≤ t_max − 1,

$$\pi_\Delta = \begin{cases} (1-p)\pi_{\Delta-1} + P_{\tau,\Delta}(1)\left(\sum_{i=\tau}^{\omega-1}\pi_i + \Pi\right) & \Delta - 1 < \tau, \\ \sum_{i=\tau}^{\Delta-1} P_{i,\Delta}(1)\pi_i + P_{\Delta,\Delta}(1)\left(\sum_{i=\Delta}^{\omega-1}\pi_i + \Pi\right) & \Delta - 1 \ge \tau. \end{cases}$$

For each t_max ≤ ∆ ≤ ω − 1,

$$\pi_\Delta = \begin{cases} (1-p)\pi_{\Delta-1} & \Delta - 1 < \tau, \\ \sum_{i=\tau}^{\Delta-1} P_{i,\Delta}(1)\pi_i & \Delta - 1 \ge \tau. \end{cases}$$

Proof. We delve into the definition of Π. By leveraging the structural property of the threshold policy and the properties of P_{∆,∆′}(a), we obtain the above system of linear equations. The complete proof can be found in Appendix D.

Remark 4. The size of the system of linear equations detailed in Theorem 1 is ω + 1.

Corollary 1. When τ = 0,

$$\pi_0 = \frac{P_{1,0}(1)}{ET\left[1 - P_{0,0}(1) + P_{1,0}(1)\right]},$$

$$\pi_\Delta = \sum_{i=0}^{\Delta-1} P_{i,\Delta}(1)\pi_i + P_{\Delta,\Delta}(1)\left(\frac{1}{ET} - \sum_{i=0}^{\Delta-1}\pi_i\right), \qquad 1 \le \Delta \le t_{max},$$

$$\Pi = \frac{\sum_{i=1}^{t_{max}}\left(\sum_{k=1}^{i} P_{i,t_{max}+k}(1)\right)\pi_i}{1 - \sum_{i=1}^{t_{max}} P_{t_{max}+1,t_{max}+1+i}(1)}.$$
When τ = 1,

$$\pi_0 = \frac{P_{1,0}(1)}{p\,ET + P_{1,0}(1)}, \qquad \pi_1 = \frac{p\,P_{1,0}(1) + p\,P_{1,1}(1)}{p\,ET + P_{1,0}(1)},$$

$$\pi_\Delta = \sum_{i=1}^{\Delta-1} P_{i,\Delta}(1)\pi_i + P_{\Delta,\Delta}(1)\left(\frac{1 - \pi_0}{ET} - \sum_{i=1}^{\Delta-1}\pi_i\right), \qquad 2 \le \Delta \le t_{max}+1,$$

$$\Pi = \frac{\sum_{i=2}^{t_{max}+1}\left(\sum_{k=2}^{i} P_{i,t_{max}+k}(1)\right)\pi_i}{1 - \sum_{i=1}^{t_{max}} P_{t_{max}+2,t_{max}+2+i}(1)}.$$

Proof. The calculations follow similar steps as detailed in the proof of Theorem 1. The complete proof can be found in Appendix E.

We will calculate the expected AoII in the next subsection based on the above results.

C. Expected AoII

Let $\bar{\Delta}_\tau$ be the expected AoII achieved by threshold policy τ. Then,

$$\bar{\Delta}_\tau = \sum_{\Delta=0}^{\tau-1} C(\Delta,0)\pi_\Delta + \sum_{\Delta=\tau}^{\infty} C(\Delta,1)\pi_\Delta, \qquad (10)$$

where C(∆, a) is the expected sum of AoII during the transmission of the update caused by taking action a at state (∆, 0, −1). Note that C(∆, a) includes the AoII for being at state (∆, 0, −1).

Remark 5. In order to have a more intuitive understanding of the definition of C(∆, a), we use η to denote a possible path of the state during the transmission of the update and let H be the set of all possible paths. Moreover, we denote by C_η and P_η the sum of AoII and the probability associated with path η, respectively. Then,

$$C(\Delta, a) = \sum_{\eta \in H} P_\eta\, C_\eta.$$

For example, consider the case of p_2 = 1, where the transmission takes 2 time slots to be delivered, and suppose action a = 1 is taken at state (2, 0, −1). Then, a sample path η of the state during the transmission can be the following:

$$(2, 0, -1) \to (3, 1, 1) \to (4, 0, -1).$$

By our definition, C_η = 2 + 3 = 5 and P_η = Pr[(3, 1, 1) | (2, 0, −1), a = 1] · Pr[(4, 0, −1) | (3, 1, 1), a = 1] for the above sample path.

In the following, we calculate C(∆, a). Similar to Section IV-A, we define C_t(∆, a) as the expected sum of AoII during the transmission of the update caused by action a at state (∆, 0, −1), given that the transmission takes t time slots.
Then, under Assumption 1,

$$C(\Delta, a) = \begin{cases} \Delta & a = 0, \\ \sum_{t=1}^{t_{max}} p_t\, C_t(\Delta, 1) & a = 1, \end{cases} \qquad (11)$$

and, under Assumption 2,

$$C(\Delta, a) = \begin{cases} \Delta & a = 0, \\ \sum_{t=1}^{t_{max}} p_t\, C_t(\Delta, 1) + p_{t^+}\, C_{t_{max}}(\Delta, 1) & a = 1. \end{cases} \qquad (12)$$

Hence, it suffices to obtain the expressions of C_t(∆, 1). To this end, we define C_k(∆) as the expected AoII k time slots after the transmission starts at state (∆, 0, −1), given that the transmission is still in progress. Then, we have the following lemma.

Lemma 3. C_t(∆, 1) is given by

$$C_t(\Delta, 1) = \sum_{k=0}^{t-1} C_k(\Delta),$$

where

$$C_k(\Delta) = \begin{cases} \sum_{h=1}^{k} h\,p(k-h)\,p\,(1-p)^{h-1} & \Delta = 0, \\ \sum_{h=1}^{k-1} h\,(1 - p(k-h))\,p\,(1-p)^{h-1} + (\Delta + k)(1-p)^k & \Delta > 0. \end{cases}$$

Proof. The expression of C_k(∆) is obtained by analyzing the system dynamics. The complete proof can be found in Appendix F.

Next, we calculate the expected AoII achieved by the threshold policy. We start with the case of τ = ∞.

Theorem 2. The expected AoII achieved by the threshold policy with τ = ∞ is

$$\bar{\Delta}_\infty = \frac{1}{2p}.$$

Proof. In this case, the transmitter never initiates any transmission. Hence, the state transitions are straightforward. The complete proof can be found in Appendix G.

In the following, we focus on the case where τ is finite. We recall that the expected AoII is given by (10). The problem arises because of the infinite sum. To overcome this, we adopt an approach similar to that proposed in Section IV-B. More precisely, we leverage the structural property of the threshold policy and define $\Sigma \triangleq \sum_{\Delta=\omega}^{\infty} C(\Delta, 1)\pi_\Delta$. Then, equation (10) can be written as

$$\bar{\Delta}_\tau = \sum_{i=0}^{\tau-1} C(i, 0)\pi_i + \sum_{i=\tau}^{\omega-1} C(i, 1)\pi_i + \Sigma.$$

As we have obtained the expressions of π_∆ and C(∆, a) in the previous subsections, it suffices to obtain the expression of Σ.

Theorem 3.
Under Assumption 1 and for 0 ≤ τ < ∞,

$$\Sigma = \frac{\sum_{t=1}^{t_{max}}\left[p_t\, P^t_{1,1+t}(1)\sum_{i=\omega-t}^{\omega-1} C(i,1)\pi_i + \Delta'_t\,\Pi_t\right]}{1 - \sum_{t=1}^{t_{max}} p_t\, P^t_{1,1+t}(1)},$$

where

$$\Pi_t = p_t\, P^t_{1,1+t}(1)\left(\sum_{i=\omega-t}^{\omega-1}\pi_i + \Pi\right), \qquad \Delta'_t = \sum_{i=1}^{t_{max}} p_i\,\frac{t - t(1-p)^i}{p}.$$

Under Assumption 2 and for 0 ≤ τ < ∞,

$$\Sigma = \frac{\sum_{t=1}^{t_{max}}\left[\sum_{i=\omega-t}^{\omega-1} \Upsilon(i+t, t)\, C(i,1)\pi_i + \Delta'_t\,\Pi_t\right]}{1 - \sum_{t=1}^{t_{max}} \Upsilon(\omega+t, t)},$$

where

$$\Upsilon(\Delta, t) = p_t\, P^t_{\Delta-t,\Delta}(1) + p_{t^+}\, P^{t^+}_{\Delta-t,\Delta}(1),$$

$$\Pi_t = \sum_{i=\omega-t}^{\omega-1} \Upsilon(i+t, t)\pi_i + \Upsilon(\omega+t, t)\Pi, \qquad \Delta'_t = \sum_{i=1}^{t_{max}} p_i\,\frac{t - t(1-p)^i}{p} + p_{t^+}\,\frac{t - t(1-p)^{t_{max}}}{p}.$$

Proof. We delve into the definition of Σ and repeatedly use the properties of C(∆, a) and P_{∆,∆′}(a). The complete proof can be found in Appendix H.

V. OPTIMAL POLICY

In this section, we find the optimal policy for M. To this end, we first prove that the optimal policy exists.

A. Existence of the Optimal Policy

We first introduce the infinite-horizon γ-discounted cost of M, where 0 < γ < 1 is a discount factor. The expected γ-discounted cost under policy φ is

$$V_{\varphi,\gamma}(s) = \mathbb{E}_{\varphi}\left[\sum_{t=0}^{\infty} \gamma^t C(s_t) \,\middle|\, s\right], \qquad (13)$$

where s_t is the state of M at time slot t. We define V_γ(s) ≜ inf_φ V_{φ,γ}(s) as the best that can be achieved. Equivalently, V_γ(s) is the value function associated with the γ-discounted version of M. Hence, V_γ(s) satisfies the corresponding Bellman equation:

$$V_\gamma(s) = \min_{a \in \mathcal{A}}\left\{C(s) + \gamma\sum_{s' \in \mathcal{S}} P_{s,s'}(a)\,V_\gamma(s')\right\}.$$

The value iteration algorithm is a canonical algorithm for calculating V_γ(s). Let V_{γ,ν}(s) be the estimated value function at iteration ν. Then, the estimated value function is updated as follows:

$$V_{\gamma,\nu+1}(s) = \min_{a \in \mathcal{A}}\left\{C(s) + \gamma\sum_{s' \in \mathcal{S}} P_{s,s'}(a)\,V_{\gamma,\nu}(s')\right\}. \qquad (14)$$

Lemma 4. The estimated value function converges to the value function as ν → ∞. More precisely, lim_{ν→∞} V_{γ,ν}(s) = V_γ(s).

Proof.
According to [21, Propositions 1 and 3], it is sufficient to show that V_γ(s) is finite. To this end, we consider the policy φ that never initiates any transmission. According to (13), we have

$$V_{\varphi,\gamma}(s) = \mathbb{E}_{\varphi}\left[\sum_{t=0}^{\infty} \gamma^t C(s_t) \,\middle|\, s\right] \le \sum_{t=0}^{\infty} \gamma^t(\Delta + t) = \frac{\Delta}{1-\gamma} + \frac{\gamma}{(1-\gamma)^2} < \infty.$$

Then, by definition, we have V_γ(s) ≤ V_{φ,γ}(s) < ∞. Hence, we can conclude that the value iteration reported in (14) converges to the value function.

Leveraging the convergence of the value iteration algorithm, we can prove the following structural property of V_γ(s).

Lemma 5. V_γ(s) is increasing in ∆ when ∆ > 0.

Proof. We recall that V_γ(s) can be calculated using the value iteration algorithm. Hence, the monotonicity of V_γ(s) can be proved by mathematical induction. The complete proof can be found in Appendix I.

We now proceed to show the existence of the optimal policy. To this end, we first define the stationary policy.

Definition 3 (Stationary policy). A stationary policy specifies a single action for each state, independent of the time slot.

Theorem 4. There exists a stationary policy that is optimal for M. Moreover, the minimum expected AoII is independent of the initial state.

Proof. We show that M satisfies the two conditions given in [21]. Then, the results in the theorem are guaranteed by [21, Theorem]. The complete proof can be found in Appendix J.

We denote by φ∗ the optimal policy for M. The next problem is how to find φ∗. To solve an MDP, the value iteration algorithm and the policy iteration algorithm are two of the best-known algorithms. In the value iteration algorithm, the value function V(s) is computed iteratively until convergence. However, since the state space S is infinite, it is not feasible to compute the value function for all states. To make the calculation feasible, in Section V-B, an approximation algorithm is applied to obtain an approximated optimal policy φ̂∗, and φ̂∗ is proved to converge to φ∗.
However, the choice of approximation parameters can significantly affect the algorithm's complexity and may even lead to a non-optimal policy. To avoid this problem, in Section V-C we introduce the policy iteration algorithm and find φ∗ theoretically using the policy improvement theorem. We start with the value iteration algorithm in the following subsection.

B. Value Iteration Algorithm

In this subsection, we present the relative value iteration (RVI) algorithm that approximates φ∗. A direct application of RVI is impractical because the state space S is infinite. Hence, we use the approximating sequence method (ASM) [22]. To this end, we construct another MDP M(m) = (S(m), A, P(m), C) by truncating the value of ∆. More precisely, we impose
$$S^{(m)}: \quad \Delta \in \{0, 1, \ldots, m\}, \quad i \in \{-1, 0, 1\}, \quad t \in \{0, 1, \ldots, t_{\max}-1\},$$
where m is the predetermined maximal value of ∆. The transition probabilities from s ∈ S(m) to z ∈ S \ S(m) are redistributed to the states s′ ∈ S(m) in the following way:
$$P^{(m)}_{s,s'}(a) = \begin{cases} P_{s,s'}(a) & s' = (\Delta', t', i') \text{ where } \Delta' < m, \\ P_{s,s'}(a) + \sum_{G(z,s')} P_{s,z}(a) & s' = (\Delta', t', i') \text{ where } \Delta' = m, \end{cases}$$
where G(z, s′) = {z = (∆, t, i) : ∆ > m, t = t′, i = i′}. The action space A and the instantaneous cost C are the same as defined in M.

Theorem 5. The sequence of optimal policies for M(m) converges to the optimal policy for M as m → ∞.

Proof. The proof follows the same steps as the proof of [8, Theorem 1]. The complete proof can be found in Appendix K.

Then, we can apply RVI to M(m) and treat the resulting policy as an approximation of φ∗. The pseudocode of RVI is given in Algorithm 1. However, the choice of the approximation parameter m is crucial: a large m adds unnecessary computational complexity, while a small m may lead to a non-optimal policy.
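On the truncated MDP M(m), the RVI recursion of Algorithm 1 can be sketched as follows. This is a minimal sketch under our own conventions: the transition tensor `P` (the redistributed probabilities P(m)) and cost vector `c` are assumed precomputed, and states are assumed to be flattened to integer indices.

```python
import numpy as np

def relative_value_iteration(P, c, s_ref=0, eps=1e-9, max_iter=100000):
    """RVI on a finite (truncated) average-cost MDP.

    P : (n_actions, n_states, n_states) transition probabilities P^(m).
    c : (n_states,) instantaneous cost C(s).
    Returns the greedy policy and the estimated average cost theta.
    """
    V = np.zeros(c.shape[0])
    for _ in range(max_iter):
        H = c[None, :] + np.einsum('ast,t->as', P, V)  # H[a, s]
        Q = H.min(axis=0)
        V_new = Q - Q[s_ref]        # subtract the reference state's value
        if np.max(np.abs(V_new - V)) <= eps:
            V = V_new
            break
        V = V_new
    return H.argmin(axis=0), Q[s_ref]
```

Subtracting the reference value at each iteration keeps the iterates bounded; at convergence, Q(s_ref) estimates the minimum expected AoII θ on the truncated model.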
Therefore, in the following subsections, we use the policy iteration algorithm and the policy improvement theorem to find φ∗ theoretically. We start by introducing the policy iteration algorithm.

Algorithm 1 Relative Value Iteration
1: procedure RVI(M(m), ϵ)
2:   V0(s) ← 0 for s ∈ S(m); ν ← 0
3:   Choose sref ∈ S(m) arbitrarily
4:   repeat
5:     for s ∈ S(m) do
6:       for a ∈ A do
7:         Hs,a ← C(s) + Σs′ P(m)s,s′(a) Vν(s′)
8:       Qν+1(s) ← mina{Hs,a}
9:     for s ∈ S(m) do
10:      Vν+1(s) ← Qν+1(s) − Qν+1(sref)
11:     ν ← ν + 1
12:   until maxs{|Vν(s) − Vν−1(s)|} ≤ ϵ
13:   return ˆφ∗ ← argmina{Hs,a}

C. Policy Iteration Algorithm

The policy iteration algorithm iterates between the following two steps until convergence, which happens when two consecutive iterations produce equivalent policies.

1) The first step is policy evaluation. In this step, we calculate the value function V φ(·) and the expected AoII θφ resulting from the adoption of some policy φ. More precisely, the value function and the expected AoII are obtained by solving the following system of linear equations:
$$V^\phi(s) + \theta^\phi = C(s) + \sum_{s' \in S} P^\phi_{s,s'} V^\phi(s'), \qquad s \in S, \qquad (15)$$
where P φ s,s′ is the state transition probability from s to s′ when policy φ is adopted. Note that (15) forms an underdetermined system. Hence, we can select any state s as a reference state and set the corresponding value function to 0; in this way, we obtain a unique solution.

2) The second step is policy improvement. In this step, we obtain a new policy φ′ by applying the V φ(·) obtained in the first step to the Bellman equation. More precisely, the action suggested by φ′ at state s is determined by
$$\phi'(s) = \arg\min_{a \in A}\left\{ C(s) + \sum_{s' \in S} P_{s,s'}(a) V^\phi(s') \right\}.$$

The pseudocode for the policy iteration algorithm is given in Algorithm 2.
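The two steps above can be sketched on a finite truncation. This is a minimal sketch, not the paper's implementation: the policy's transition matrix `P_phi`, the full transition tensor `P`, and the cost vector `c` are assumed precomputed (all names are ours). Pinning the reference state's value to zero resolves the underdeterminacy of (15).

```python
import numpy as np

def policy_evaluation(P_phi, c, s_ref=0):
    """Solve V(s) + theta = c(s) + sum_s' P_phi[s,s'] V(s') with V(s_ref) = 0."""
    n = c.shape[0]
    # Unknowns x = (V(0), ..., V(n-1), theta); add the row V(s_ref) = 0.
    A = np.hstack([np.eye(n) - P_phi, np.ones((n, 1))])
    ref_row = np.zeros((1, n + 1))
    ref_row[0, s_ref] = 1.0
    A = np.vstack([A, ref_row])
    b = np.concatenate([c, [0.0]])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:n], x[n]  # V, theta

def policy_improvement(P, c, V):
    """Greedy policy w.r.t. V: argmin_a { c(s) + sum_s' P[a][s,s'] V(s') }."""
    H = c[None, :] + np.einsum('ast,t->as', P, V)
    return H.argmin(axis=0)
```

Alternating these two functions until the policy stops changing is exactly the loop of Algorithm 2.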
Algorithm 2 Policy Iteration
1: procedure PI(M)
2:   Choose φ′(s) ∈ A arbitrarily for all s ∈ S
3:   repeat
4:     φ(s) ← φ′(s) for all s ∈ S
5:     (V φ(s), θφ) ← POLICYEVALUATION(M, φ(s))
6:     φ′(s) ← POLICYIMPROVEMENT(M, V φ(s))
7:   until φ′(s) = φ(s) for all s ∈ S
8:   return (φ∗, θ) ← (φ(s), θφ)

With the policy iteration algorithm in mind, we can proceed with presenting the policy improvement theorem.

Theorem 6 (Policy improvement theorem). Suppose that we have obtained the value function resulting from the operation of a policy A and that the policy improvement step has produced a policy B.
• If B is different from A, then θA ≥ θB.
• When the policy improvement step converges (i.e., A and B are equivalent), the converged policy is optimal.

Proof. The proof follows the steps presented in [23, pp. 42-43]. The complete proof can be found in Appendix L.

Before finding φ∗, we first simplify the Bellman equation shown in (4) in the next subsection to make the process of finding φ∗ more concise and straightforward.

D. Simplifying the Bellman Equation

We note that the state transitions are complex and intertwined. Consequently, a direct analysis of the Bellman equation (4) is complicated. In the following, we simplify the Bellman equation by leveraging the fact that the action space depends on the state. More specifically, when the channel is busy (i.e., i ≠ −1), the only feasible action is a = 0. Hence, the transmitter's actions at these states are fixed, and for these states the minimum operators in (4) disappear. Let S−1 ≜ {s = (∆, t, i) : i = −1} be the set of states at which the channel is idle. Then,
$$V(s) + \theta = \min_{a \in A}\left\{ C(s) + \sum_{s' \in S} P_{s,s'}(a) V(s') \right\} = C(s) + \sum_{s' \in S} P_{s,s'}(0) V(s'), \qquad s \in S \setminus S_{-1}.$$
Then, for each s ∈ S−1, by repeatedly replacing V(s), s ∈ S \ S−1, with its corresponding Bellman equation, we obtain a Bellman equation that involves only V(s) with s ∈ S−1. Since s = (∆, 0, −1) for every s ∈ S−1, we abbreviate V(∆, 0, −1) as V(∆) and obtain the following Bellman equation:
$$V(\Delta) + \theta = \min_{a \in \{0,1\}}\left\{ C(\Delta, a) - \theta(a) + \sum_{\Delta' \ge 0} P_{\Delta,\Delta'}(a) V(\Delta') \right\}, \qquad \Delta \ge 0, \qquad (16)$$
where
$$\theta(a) = \begin{cases} 0 & a = 0, \\ (E_T - 1)\theta & a = 1. \end{cases}$$
Note that ET, P∆,∆′(a), and C(∆, a) are those defined and discussed in Section IV. Hence, it is sufficient to use (16) instead of (4) to determine the optimal action at state (∆, 0, −1). Although (16) may seem complicated at first glance, its advantages will be fully demonstrated in the following subsection.

E. Optimal Policy via Policy Iteration Algorithm

In this subsection, we find φ∗ theoretically. To this end, we first introduce two conditions that are essential to the analysis later on.

Condition 1. The condition is the following:
$$\bar{\Delta}_1 \le \min\left\{ \bar{\Delta}_0,\ 1 + \frac{(1-p)\sigma}{2} \right\},$$
where, for Assumption 1,
$$\sigma = \frac{\sum_{t=1}^{t_{\max}} p_t \left( \frac{1 - (1-p)^t}{p} \right)}{1 - \sum_{t=1}^{t_{\max}} p\, p_t (1-p)^{t-1}},$$
and for Assumption 2,
$$\sigma = \frac{\sum_{t=1}^{t_{\max}} p_t \left( \frac{1 - (1-p)^t}{p} \right) + p_{t^+}\left( \frac{1 - (1-p)^{t_{\max}}}{p} \right)}{1 - \left( \sum_{t=1}^{t_{\max}} p\, p_t (1-p)^{t-1} + p_{t^+}(1-p)^{t_{\max}} \right)}.$$
Here, ¯∆0 and ¯∆1 are the expected AoII resulting from the adoption of the threshold policy with τ = 0 and τ = 1, respectively.

Theorem 7. Under Condition 1, the optimal policy for M is the threshold policy with τ = 1.

Proof. The value iteration algorithm detailed in Section V-B provides us with a good guess of the optimal policy; we then prove its optimality using the policy improvement theorem. The general procedure of the optimality proof can be summarized as follows.
1) Policy evaluation: we calculate the value function resulting from the adoption of the threshold policy with τ = 1.
2) Policy improvement: we apply the value function obtained in the previous step to the Bellman equation and verify that the resulting policy remains the same.

Then, the policy improvement theorem tells us that the resulting policy is optimal. The complete proof can be found in Appendix M.

Remark 6. Note that Condition 1 is a sufficient condition for the threshold policy with τ = 1 to be optimal, but not a necessary one.

Remark 7. When the system fails to satisfy Condition 1, we can use the value iteration algorithm introduced in Section V-B to obtain a good estimate of φ∗.

VI. NUMERICAL RESULTS

In this section, we numerically verify Condition 1 and analyze the performance of the optimal policy.

A. Verification of Condition 1

As the closed-form expressions of ¯∆0 and ¯∆1 are given in Section IV, the inequality in Condition 1 is easy to verify. We numerically verify Condition 1 for the following systems.
• The system adopts Assumption 1/Assumption 2 and a Geometric transmission delay with success probability ps; more precisely, pt = (1 − ps)^(t−1) ps.
• The system adopts Assumption 1 and a transmission delay following the Zipf distribution with constant a; more precisely, pt = t^(−a) / Σ_{i=1}^{tmax} i^(−a), 1 ≤ t ≤ tmax.
• The system adopts Assumption 1 and pt = (1/2)(1{t = 1} + 1{t = tmax}).

For each of the above systems, the parameters take the following values.
• 0.05 ≤ p ≤ 0.45 with step size 0.05.
• 2 ≤ tmax ≤ 15 with step size 1.
• 0 ≤ ps ≤ 0.95 with step size 0.05.
• 0 ≤ a ≤ 5 with step size 0.25.

The numerical results show that all the systems mentioned above satisfy Condition 1. We can therefore conclude that the corresponding optimal policy is the threshold policy with τ = 1, whose performance is presented in the next subsection.

Remark 8.
The Zipf distribution reduces to the Uniform distribution when a = 0, and the Geometric transmission delay reduces to a deterministic transmission delay when ps = 0. We ignore the case of p = 0 because the dynamic source does not change state in this case. Similarly, we are not interested in the case of p = 0.5 because the state of the dynamic source is then independent of its previous state. We also exclude the case of ps = 1 because, in this case, the transmission time is deterministic and equal to one time slot.

B. Optimal Policy Performance

In this subsection, we analyze the performance of the optimal policy. To this end, we consider the system where the transmission delay follows a Geometric distribution with success probability ps. Moreover, we compare the performance of the optimal policy with that of the threshold policies with τ = 0 and τ = ∞. All the results are calculated using Section IV.

a) The effect of p: In this case, we fix tmax = 5 and ps = 0.7. Then, we vary p and plot the corresponding results in Fig. 3. In the figure, to better show the performance of the optimal policy, we only show parts of the results for the threshold policy with τ = ∞.

[Fig. 3: Illustrations of the expected AoII as a function of p and τ; (a) performance under Assumption 1, (b) performance under Assumption 2. We set the upper limit on the transmission time tmax = 5 and the success probability in the Geometric distribution ps = 0.7.]

We notice that, as p increases, the expected AoIIs achieved by the threshold policies with τ = 0 and τ = 1 increase. This is because, when p is large, the dynamic source is inclined to switch between states.
Therefore, the state of the dynamic source is more unpredictable, leading to an increase in the achieved expected AoII. Meanwhile, the expected AoII achieved by the threshold policy with τ = ∞ decreases as p increases. To explain this, we first recall that, under the threshold policy with τ = ∞, the receiver's estimate never changes. Also, when p is large, the dynamic source switches states frequently. Therefore, the probability of a situation where the receiver's estimate is always incorrect is small, which makes the resulting AoII small. We also notice that Assumption 1 and Assumption 2 lead to almost the same performance. To explain this, we first note that the only difference between Assumption 1 and Assumption 2 is whether the update is delivered or discarded when the transmission lasts until the tmaxth time slot after the transmission starts. However, under our choices of ps and tmax, the transmission time of an update rarely reaches tmax time slots. Even if it does, delivery or discard does not significantly impact the performance, as the receiver's estimate can be correct or incorrect regardless of whether the update is delivered. Therefore, Assumption 1 and Assumption 2 yield almost the same performance.

b) The effect of ps: In this case, we fix tmax = 5 and p = 0.35. Then, we vary ps and plot the corresponding results in Fig. 4.

[Fig. 4: Illustrations of the expected AoII as a function of ps and τ; (a) performance under Assumption 1, (b) performance under Assumption 2. We set the upper limit on the transmission time tmax = 5 and the source dynamic p = 0.35.]

The figure shows that the expected AoIIs achieved by the threshold policies with τ = 0 and τ = 1 decrease as ps increases. The reason behind this is as follows. As ps increases, the expected transmission time of an update decreases, meaning that updates are more likely to be delivered within the first few time slots. As a result, the receiver receives fresher information, and thus the expected AoII decreases. Moreover, the performance gap between the threshold policies with τ = 1 and τ = 0 is small when ps is large. To explain this, we notice that the gap exists because updates transmitted when the AoII is zero do not provide new information to the receiver, while the transmission occupies the channel for a few time slots. Such an action therefore deprives the transmitter of the ability to send new updates for the next few time slots without providing the receiver with any new information. When ps is large, the expected transmission time of an update is small; consequently, transmitting when the AoII is zero becomes less costly, and the gap narrows.

c) The effect of tmax: In this case, we fix ps = 0.7 and p = 0.35. Then, we vary tmax and plot the corresponding results in Fig. 5. From the figure, we can see that the effect of tmax on the performance of the policies is only noticeable when tmax is small. This is because, under our choice of ps, most updates are delivered within the first few time slots. Therefore, increasing tmax does not significantly affect the performance.

[Fig. 5: Illustrations of the expected AoII as a function of tmax and τ; (a) performance under Assumption 1, (b) performance under Assumption 2.]
We set the success probability in the Geometric distribution ps = 0.7 and the source dynamic p = 0.35.

VII. CONCLUSION

In this paper, we investigate the problem of minimizing the Age of Incorrect Information over a channel with random delay. We study a slotted-time system where a transmitter observes a dynamic source and sends updates to a remote receiver through a channel with random delay. To facilitate the analysis, we consider two cases. The first case assumes that the transmission time has an upper bound and that the update is always delivered. The second case assumes that the system automatically discards updates when transmission lasts too long. We aim to find when the transmitter should initiate a transmission to minimize the AoII. To this end, we first characterize the optimization problem using the Markov decision process and calculate the expected AoII achieved by the threshold policy precisely using the Markov chain. Next, we prove that the optimal policy exists, and the relative value iteration algorithm is provided to estimate the optimal policy. Then, with the help of the policy improvement theorem, we prove that, under Condition 1, the optimal policy is the threshold policy with τ = 1. Finally, we numerically verify Condition 1 for various system parameters and analyze the performance of the optimal policy.

REFERENCES

[1] E. Uysal, O. Kaya, A. Ephremides, J. Gross, M. Codreanu, P. Popovski, M. Assaad, G. Liva, A. Munari, B. Soret, T. Soleymani, and K. H. Johansson, "Semantic communications in networked systems: A data significance perspective," IEEE Network, vol. 36, no. 4, pp. 233–240, 2022.
[2] R. D. Yates, Y. Sun, D. R. Brown, S. K. Kaul, E. Modiano, and S. Ulukus, "Age of information: An introduction and survey," IEEE Journal on Selected Areas in Communications, vol. 39, no. 5, pp. 1183–1210, 2021.
[3] Y. Sun, I. Kadota, R. Talak, and E. Modiano, "Age of information: A new metric for information freshness," Synthesis Lectures on Communication Networks, vol. 12, no. 2, pp. 1–224, 2019.
[4] A. Kosta, N. Pappas, V. Angelakis et al., "Age of information: A new concept, metric, and tool," Foundations and Trends in Networking, vol. 12, no. 3, pp. 162–259, 2017.
[5] N. Pappas, M. A. Abd-Elmagid, B. Zhou, W. Saad, and H. S. Dhillon, Age of Information: Foundations and Applications. Cambridge University Press, 2023.
[6] A. Maatouk, S. Kriouile, M. Assaad, and A. Ephremides, "The age of incorrect information: A new performance metric for status updates," IEEE/ACM Transactions on Networking, vol. 28, no. 5, pp. 2215–2228, 2020.
[7] A. Maatouk, M. Assaad, and A. Ephremides, "The age of incorrect information: An enabler of semantics-empowered communication," IEEE Transactions on Wireless Communications, pp. 1–1, 2022.
[8] Y. Chen and A. Ephremides, "Minimizing age of incorrect information for unreliable channel with power constraint," in 2021 IEEE Global Communications Conference (GLOBECOM). IEEE, 2021, pp. 1–6.
[9] ——, "Scheduling to minimize age of incorrect information with imperfect channel state information," Entropy, vol. 23, no. 12, p. 1572, 2021.
[10] S. Kriouile and M. Assaad, "Minimizing the age of incorrect information for real-time tracking of Markov remote sources," in 2021 IEEE International Symposium on Information Theory (ISIT). IEEE, 2021, pp. 2978–2983.
[11] ——, "Minimizing the age of incorrect information for unknown Markovian source," arXiv preprint arXiv:2210.09681, 2022.
[12] S. Saha, H. Singh Makkar, V. Bala Sukumaran, and C. R. Murthy, "On the relationship between mean absolute error and age of incorrect information in the estimation of a piecewise linear signal over noisy channels," IEEE Communications Letters, vol. 26, no. 11, pp. 2576–2580, 2022.
[13] B. Joshi, R. V. Bhat, B. Bharath, and R. Vaze, "Minimization of age of incorrect estimates of autoregressive Markov processes," in 2021 19th International Symposium on Modeling and Optimization in Mobile, Ad hoc, and Wireless Networks (WiOpt). IEEE, 2021, pp. 1–8.
[14] C. Kam, S. Kompella, and A. Ephremides, "Age of incorrect information for remote estimation of a binary Markov source," in IEEE INFOCOM 2020 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS). IEEE, 2020, pp. 1–6.
[15] Y. Sun, Y. Polyanskiy, and E. Uysal-Biyikoglu, "Remote estimation of the Wiener process over a channel with random delay," in 2017 IEEE International Symposium on Information Theory (ISIT). IEEE, 2017, pp. 321–325.
[16] Y. Sun, Y. Polyanskiy, and E. Uysal, "Sampling of the Wiener process for remote estimation over a channel with random delay," IEEE Transactions on Information Theory, vol. 66, no. 2, pp. 1118–1135, 2019.
[17] T. Z. Ornee and Y. Sun, "Sampling for remote estimation through queues: Age of information and beyond," in 2019 International Symposium on Modeling and Optimization in Mobile, Ad Hoc, and Wireless Networks (WiOpt). IEEE, 2019, pp. 1–8.
[18] C. Kam, S. Kompella, G. D. Nguyen, J. E. Wieselthier, and A. Ephremides, "Towards an effective age of information: Remote estimation of a Markov source," in IEEE INFOCOM 2018 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS). IEEE, 2018, pp. 367–372.
[19] R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction. MIT Press, 2018.
[20] J. Postel, "Internet protocol," Tech. Rep., 1981.
[21] L. I. Sennott, "Average cost optimal stationary policies in infinite state Markov decision processes with unbounded costs," Operations Research, vol. 37, no. 4, pp. 626–633, 1989.
[22] ——, "On computing average cost optimal policies with application to routing to parallel queues," Mathematical Methods of Operations Research, vol. 45, no. 1, pp. 45–62, 1997.
[23] R. A. Howard, Dynamic Programming and Markov Processes. MIT Press, 1960.

APPENDIX A
DETAILS OF STATE TRANSITION PROBABILITY

We first elaborate on the individual transition of ∆ by dividing it into the following cases.
• ∆ = 0 and the receiver's estimates are the same at states s and s′. In this case, ∆′ = 0 when the dynamic source remains in the same state; otherwise, ∆′ = 1:
$$\Delta' = \begin{cases} 0 & \text{w.p. } 1-p, \\ 1 & \text{w.p. } p. \end{cases}$$
• ∆ = 0 and the receiver's estimates are different at states s and s′. In this case, ∆′ = 0 when the dynamic source flips its state; otherwise, ∆′ = 1:
$$\Delta' = \begin{cases} 0 & \text{w.p. } p, \\ 1 & \text{w.p. } 1-p. \end{cases}$$
• ∆ > 0 and the receiver's estimates are the same at states s and s′. In this case, ∆′ = ∆ + 1 when the dynamic source remains in the same state; otherwise, ∆′ = 0:
$$\Delta' = \begin{cases} 0 & \text{w.p. } p, \\ \Delta + 1 & \text{w.p. } 1-p. \end{cases}$$
• ∆ > 0 and the receiver's estimates are different at states s and s′. In this case, ∆′ = ∆ + 1 when the dynamic source flips its state; otherwise, ∆′ = 0:
$$\Delta' = \begin{cases} 0 & \text{w.p. } 1-p, \\ \Delta + 1 & \text{w.p. } p. \end{cases}$$

Hence, in the following, we only state whether the receiver's estimates are the same at states s and s′ and omit the rest of the discussion on the transition of ∆. To make the notation clearer, we write Ps,s′(a) as Pr[(∆′, t′, i′) | (∆, t, i), a] in this proof. Then, we distinguish between the following cases.
• s = (0, 0, −1). In this case, the channel is idle. Hence, the feasible actions are a ∈ {0, 1}. When the transmitter decides not to initiate a new transmission (i.e., a = 0), t′ = 0 and i′ = −1. Moreover, the receiver's estimate remains the same. Hence,
Pr[(0, 0, −1) | (0, 0, −1), a = 0] = 1 − p,
Pr[(1, 0, −1) | (0, 0, −1), a = 0] = p.
When the transmitter decides to initiate a new transmission (i.e., a = 1), the update will be delivered after a random amount of time T.
When T > 1, which happens with probability Pr(T > 1 | 0), the channel will be busy at the next time slot, and t′ = 1 as the transmission starts. Since the transmission happens when ∆ = 0, we know that i′ = 0. Moreover, the receiver's estimate remains the same since no new update is delivered. Hence,
Pr[(0, 1, 0) | (0, 0, −1), a = 1] = Pr(T > 1 | 0)(1 − p),
Pr[(1, 1, 0) | (0, 0, −1), a = 1] = Pr(T > 1 | 0)p.
When T = 1, which happens with probability 1 − Pr(T > 1 | 0), the update is delivered at the next time slot. Hence, the channel will be available for a new transmission at the next time slot, which means that t′ = 0 and i′ = −1. Since the transmission started when ∆ = 0, the newly arrived update brings no new information to the receiver, and the receiver's estimate remains the same. Hence,
Pr[(0, 0, −1) | (0, 0, −1), a = 1] = (1 − Pr(T > 1 | 0))(1 − p),
Pr[(1, 0, −1) | (0, 0, −1), a = 1] = (1 − Pr(T > 1 | 0))p.
• s = (0, t, 0). In this case, the channel is busy. Hence, the only feasible action is a = 0. When the update will not arrive at the next time slot, which happens with probability Pr(T > t + 1 | t), i′ = i since both the transmitted update and the receiver's estimate remain the same, and t′ = t + 1 as the transmission continues. Moreover, the receiver's estimate remains the same. Hence,
Pr[(0, t + 1, 0) | (0, t, 0)] = Pr(T > t + 1 | t)(1 − p),
Pr[(1, t + 1, 0) | (0, t, 0)] = Pr(T > t + 1 | t)p.
When the update arrives at the next time slot, which happens with probability 1 − Pr(T > t + 1 | t), t′ = 0 and i′ = −1 by definition. Since i = 0, the newly arrived update brings no new information to the receiver, and the receiver's estimate remains the same. Hence,
Pr[(0, 0, −1) | (0, t, 0)] = (1 − Pr(T > t + 1 | t))(1 − p),
Pr[(1, 0, −1) | (0, t, 0)] = (1 − Pr(T > t + 1 | t))p.
• s = (0, t, 1).
The analysis is very similar to the case of s = (0, t, 0), except that when the update arrives, the receiver's estimate flips. Hence,
Pr[(0, t + 1, 1) | (0, t, 1)] = Pr(T > t + 1 | t)(1 − p),
Pr[(1, t + 1, 1) | (0, t, 1)] = Pr(T > t + 1 | t)p,
Pr[(0, 0, −1) | (0, t, 1)] = (1 − Pr(T > t + 1 | t))p,
Pr[(1, 0, −1) | (0, t, 1)] = (1 − Pr(T > t + 1 | t))(1 − p).
• s = (∆, 0, −1), where ∆ > 0. The analysis is very similar to the case of s = (0, 0, −1), except that the receiver's estimate is incorrect at state s and, if the decision is made to transmit, the transmitted update is different from the receiver's estimate. The details are therefore omitted here. Hence,
Pr[(∆ + 1, 0, −1) | (∆, 0, −1), a = 0] = 1 − p,
Pr[(0, 0, −1) | (∆, 0, −1), a = 0] = p,
Pr[(∆ + 1, 1, 1) | (∆, 0, −1), a = 1] = Pr(T > 1 | 0)(1 − p),
Pr[(0, 1, 1) | (∆, 0, −1), a = 1] = Pr(T > 1 | 0)p,
Pr[(∆ + 1, 0, −1) | (∆, 0, −1), a = 1] = (1 − Pr(T > 1 | 0))p,
Pr[(0, 0, −1) | (∆, 0, −1), a = 1] = (1 − Pr(T > 1 | 0))(1 − p).
• s = (∆, t, 0), where ∆ > 0. The analysis is very similar to the case of s = (0, t, 0), except that the receiver's estimate is incorrect at state s. Hence,
Pr[(∆ + 1, t + 1, 0) | (∆, t, 0)] = Pr(T > t + 1 | t)(1 − p),
Pr[(0, t + 1, 0) | (∆, t, 0)] = Pr(T > t + 1 | t)p,
Pr[(∆ + 1, 0, −1) | (∆, t, 0)] = (1 − Pr(T > t + 1 | t))(1 − p),
Pr[(0, 0, −1) | (∆, t, 0)] = (1 − Pr(T > t + 1 | t))p.
• s = (∆, t, 1), where ∆ > 0. The analysis is very similar to the case of s = (∆, t, 0), except that the transmitted update differs from the receiver's estimate. Hence,
Pr[(∆ + 1, t + 1, 1) | (∆, t, 1)] = Pr(T > t + 1 | t)(1 − p),
Pr[(0, t + 1, 1) | (∆, t, 1)] = Pr(T > t + 1 | t)p,
Pr[(∆ + 1, 0, −1) | (∆, t, 1)] = (1 − Pr(T > t + 1 | t))p,
Pr[(0, 0, −1) | (∆, t, 1)] = (1 − Pr(T > t + 1 | t))(1 − p).
Combining the above cases, we have fully characterized the state transition probabilities.

Remark 9.
Note that the transitions not discussed above happen with probability zero.

APPENDIX B
PROOF OF LEMMA 1

We recall that P t ∆,∆′(1) is the probability that action a = 1 at state s = (∆, 0, −1) leads to state s′ = (∆′, 0, −1), given that the transmission takes t time slots. With this in mind, we first distinguish between different values of ∆.
• When ∆ = 0, the transmitted update is the same as the receiver's estimate. Hence, the receiver's estimate will not change upon receiving the transmitted update. Moreover, we recall that the AoII either increases by one or drops to zero. Hence, ∆′ ∈ {0, 1, ..., t}. We further distinguish between the following cases.
– ∆′ = 0 happens when the receiver's estimate is correct as a result of receiving the update. Hence, the probability of this happening is p(t).
– ∆′ = k ∈ {1, ..., t} happens when the receiver's estimate is correct at the (t − k)th time slot after the transmission, which happens with probability p(t − k). Then, the estimate remains incorrect for the remainder of the transmission time. This happens when the source first changes state and then remains in the same state throughout the rest of the transmission, which has probability p(1 − p)^(k−1). Combining the two, ∆′ = k happens with probability p(t − k)p(1 − p)^(k−1).
Combining the above, we have
$$P^t_{0,\Delta'}(1) = \begin{cases} p(t) & \Delta' = 0, \\ p(t-k)\,p(1-p)^{k-1} & 1 \le \Delta' = k \le t, \\ 0 & \text{otherwise.} \end{cases}$$
• When ∆ > 0, the transmitted update is different from the receiver's estimate. Hence, the receiver's estimate flips upon receiving the transmitted update. Moreover, we know that ∆′ ∈ {0, 1, ..., t − 1, ∆ + t}. We further distinguish between the following cases.
– ∆′ = 0 happens in the same case as discussed for ∆ = 0. Hence, the estimate is correct with probability p(t).
– ∆′ = 1 happens when the estimate is correct at the (t − 1)th time slot after the transmission, which happens with probability 1 − p(t − 1). Then, the estimate becomes incorrect as a result of receiving the update. Since the estimate flips upon the arrival of the transmitted update, this happens when the source remains in the same state, which has probability 1 − p. Combining the two, ∆′ = 1 happens with probability (1 − p(t − 1))(1 − p).
– ∆′ = k ∈ {2, ..., t − 1} happens when the estimate is correct at the (t − k)th time slot after the transmission, which happens with probability 1 − p(t − k). Then, the estimate remains incorrect for the remainder of the transmission time. This happens when the dynamic source behaves in the following way during the remaining transmission time: it first changes state, then remains in the same state, and finally changes state again when the update arrives. This has probability p²(1 − p)^(k−2). Hence, ∆′ = k happens with probability (1 − p(t − k))p²(1 − p)^(k−2).
– ∆′ = ∆ + t happens when the estimate is incorrect throughout the transmission. Since the estimate flips when the update is received, this happens when the source stays in the same state until the update arrives. Hence, ∆′ = ∆ + t happens with probability p(1 − p)^(t−1).
Combining the above, for ∆ > 0 we have
$$P^t_{\Delta,\Delta'}(1) = \begin{cases} p(t) & \Delta' = 0, \\ (1 - p(t-1))(1-p) & \Delta' = 1, \\ (1 - p(t-k))\,p^2(1-p)^{k-2} & 2 \le \Delta' = k \le t-1, \\ p(1-p)^{t-1} & \Delta' = \Delta + t, \\ 0 & \text{otherwise.} \end{cases}$$
By analyzing the above expressions, we can easily conclude that P t ∆,∆′(1) possesses the following properties.
• P t ∆,0(1) and P t ∆,∆+t(1) are both independent of ∆.
• P t ∆,∆′(1) is independent of ∆ when ∆ > 0 and 0 ≤ ∆′ ≤ t − 1.
• P t ∆,∆′(1) = 0 when ∆′ > ∆ + t or when t − 1 < ∆′ < ∆ + t.
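The case analysis for P t ∆,∆′(1) under Assumption 1 can be collected into a small helper. This is a sketch under our own conventions: the quantity p(·), whose closed form is defined earlier in the paper, is passed in as a function `p_same`; for the symmetric binary source, p_same(t) = (1 + (1 − 2p)^t) / 2 is the natural choice (an assumption on our part).

```python
def P_t(delta, delta_p, t, p, p_same):
    """P^t_{delta, delta'}(1): probability that AoII delta becomes delta'
    when a transmission started at (delta, 0, -1) takes t slots.

    p_same(t): probability the source occupies the same state t slots later.
    """
    if delta == 0:
        if delta_p == 0:
            return p_same(t)
        if 1 <= delta_p <= t:
            k = delta_p
            return p_same(t - k) * p * (1 - p) ** (k - 1)
        return 0.0
    # delta > 0: the estimate flips when the update is delivered.
    if delta_p == 0:
        return p_same(t)
    if delta_p == 1:
        return (1 - p_same(t - 1)) * (1 - p)
    if 2 <= delta_p <= t - 1:
        k = delta_p
        return (1 - p_same(t - k)) * p ** 2 * (1 - p) ** (k - 2)
    if delta_p == delta + t:
        return p * (1 - p) ** (t - 1)
    return 0.0
```

A quick sanity check is that, for fixed ∆ and t, the probabilities over the reachable ∆′ must sum to one.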
Leveraging the above properties, we can prove the second part of the lemma. The equivalent expression can be obtained easily, so the details are omitted. In the following, we focus on proving the properties of P∆,∆′(a).
• Property 1: When ∆′ = 0, P∆,0(1) = Σ_{t=1}^{tmax} pt P t ∆,0(1) for any ∆ ≥ 0. Since P t ∆,0(1) is independent of ∆, property 1 holds in this case. Then, we consider the case of 1 ≤ ∆′ ≤ tmax − 1 and ∆ ≥ ∆′. In this case,
$$P_{\Delta,\Delta'}(1) = \sum_{t=\Delta'}^{t_{\max}} p_t P^t_{\Delta,\Delta'}(1),$$
where P t ∆,∆′(1) is independent of ∆. Hence, P∆,∆′(1) is independent of ∆. Combining the two cases, property 1 holds.
• Property 2: We notice that, when ∆′ ≥ tmax,
$$P_{\Delta,\Delta'}(1) = p_{t'} P^{t'}_{\Delta,\Delta'}(1) = p_{t'} P^{t'}_{\Delta,\Delta+t'}(1).$$
We recall that P t′ ∆,∆+t′(1) is independent of ∆. Then, we can conclude that P∆,∆′(1) depends only on t′. Thus, property 2 holds.
• Property 3: The equivalent expression in the corollary indicates that the property holds when ∆′ > ∆ + tmax. In the case of tmax − 1 < ∆′ < ∆ + 1, we have P∆,∆′(1) = pt′ P t′ ∆,∆′(1), where t′ ≤ 0. By definition, P∆,∆′(1) = 0. Hence, property 3 holds.

APPENDIX C
PROOF OF LEMMA 2

The proof is similar to that of Lemma 1. We first derive the expressions of P t ∆,∆′(1) and P t+ ∆,∆′(1). To this end, we start with the case of ∆ = 0, in which the transmitted update is the same as the receiver's estimate. With this in mind, we distinguish between different values of t.
• When 1 ≤ t < tmax, the update is delivered after t time slots. Hence, ∆′ ∈ {0, 1, ..., t}. We further distinguish between different values of ∆′.
– ∆′ = 0 in the case where the receiver's estimate is correct when the update is delivered. Hence, ∆′ = 0 happens with probability p(t).
– ∆′ = k ∈ {1, 2, ..., t} when the receiver's estimate is correct at the (t − k)th time slot after the transmission occurs. Then, the source flips its state and remains in the same state for the remainder of the transmission.
Hence, ∆′ = k ∈ {1, 2, ..., t} happens with +probability p(t−k)p(1 − p)k−1. +• When t = tmax, the update either arrives or be discarded. In this case, ∆′ ∈ {0, 1, ..., tmax}. +We recall that the update is the same as the receiver’s estimate. Hence, the receiver’s estimate +will not change in both cases. Consequently, P tmax +0,∆′ (1) = P t+ +0,∆′(1), which can be obtained +by setting the t in the above case to tmax. +Combining together, for each 1 ≤ t ≤ tmax, +P t +0,∆′(1) = +� +� +� +� +� +� +� +� +� +� +� +p(t) +∆′ = 0, +p(t−k)p(1 − p)k−1 +1 ≤ ∆′ = k ≤ t, +0 +otherwise. +P t+ +0,∆′(1) = P tmax +0,∆′ (1). +Then, we consider the case of ∆ > 0. We notice that, in this case, the receiver’s estimate will +flip upon receiving the update. Then, we distinguish between different values of t. +• When 1 ≤ t < tmax, the update is delivered after t time slots, and the receiver’s estimate +will flip. Hence, ∆′ ∈ {0, 1, ..., t−1, ∆+t}. Then, we further distinguish between different +values of ∆′. +– ∆′ = 0 in the case where the receiver’s estimate is correct when the update is received. +Hence, ∆′ = 0 happens with probability p(t). +January 18, 2023 +DRAFT + +38 +– ∆′ = 1 when the receiver’s estimate is correct at (t−1)th time slot after the transmission +starts and becomes incorrect when the update arrives. Hence, ∆′ = 1 happens with +probability (1 − p(t−1))(1 − p). +– ∆′ = k ∈ {2, 3, ..., t − 1} when the receiver’s estimate is correct at (t − k)th time slot +after the transmission starts. Then, the source changes state and remains in the same +state. Finally, at the time slot when the update arrives, the source flips state again. +Hence, ∆′ = k ∈ {2, 3, ..., t − 1} happens with probability (1 − p(t−k))p2(1 − p)k−2. +– ∆′ = ∆ + t when the estimate is incorrect throughout the transmission. We recall that +the receiver’s estimate will flip when the update arrives. 
Hence, ∆′ = ∆ + t when the +source remains in the same state until the update arrives, which happens with probability +p(1 − p)t−1. +• When t = tmax and the transmitted update is delivered, the receiver’s estimate flips. In this +case, ∆′ ∈ {0, 1, ..., tmax − 1, ∆ + tmax}. Hence, P tmax +∆,∆′(1) can be obtained by setting the t +in the above case to tmax. +• When t = tmax and the transmitted update is discarded, the receiver’s estimate remains the +same. In this case, ∆′ ∈ {0, 1, ..., tmax−1, ∆+tmax}. Then, we further divide our discussion +into the following cases. +– ∆′ = 0 when the receiver’s estimate is correct at the tmaxthe time slot after the +transmission starts, which happens when the state of the source at the time slot the +update is discarded is different from that when the transmission started. Hence, ∆′ = 0 +happens with probability 1 − p(tmax). +– ∆′ = k ∈ {1, 2, ..., tmax − 1} when the receiver’s estimate is correct at (tmax − k)th +time slot after the transmission starts. Then, the source changes state and remains in the +same state for the remainder of the transmission. Hence, ∆′ = k ∈ {1, 2, ..., tmax − 1} +happens with probability (1 − p(tmax−k))p(1 − p)k−1. +– ∆′ = ∆+tmax when the source remains in the same state throughout the transmission. +Combining with the source dynamic, we can conclude that ∆′ = ∆ + tmax happens +with probability (1 − p)tmax. +January 18, 2023 +DRAFT + +39 +Combining together, for ∆ > 0 and each 1 ≤ t ≤ tmax, +P t +∆,∆′(1) = +� +� +� +� +� +� +� +� +� +� +� +� +� +� +� +� +� +� +� +� +� +� +� +� +� +p(t) +∆′ = 0, +(1 − p(t−1))(1 − p) +∆′ = 1, +(1 − p(t−k))p2(1 − p)k−2 +2 ≤ ∆′ = k ≤ t − 1, +p(1 − p)t−1 +∆′ = ∆ + t, +0 +otherwise. +P t+ +∆,∆′(1) = +� +� +� +� +� +� +� +� +� +� +� +� +� +� +� +� +� +� +� +1 − p(tmax) +∆′ = 0, +(1 − p(tmax−k))p(1 − p)k−1 +1 ≤ ∆′ = k ≤ tmax − 1, +(1 − p)tmax +∆′ = ∆ + tmax, +0 +otherwise. 
+By analyzing the above expressions, we can easily conclude that P t +∆,∆′(1) and P t+ +∆,∆′(1) possess +the following properties. +• P t +∆,∆+t(1) and P t+ +∆,∆+tmax(1) are independent of ∆ when ∆ > 0. +• P t +∆,∆′(1) is independent of ∆ when ∆ > 0 and 0 ≤ ∆′ ≤ t − 1. +• P t +∆,∆′(1) = 0 when ∆ > 0 and t − 1 < ∆′ < ∆ + t. +• P t+ +∆,∆′(1) is independent of ∆ when ∆ > 0 and 0 ≤ ∆′ ≤ tmax − 1. +• P t+ +∆,∆′(1) = 0 when ∆ > 0 and tmax − 1 < ∆′ < ∆ + tmax. +Leveraging the properties above, we proceed with proving the second part of the lemma. The +equivalent expression can be obtained easily by analyzing (8). Hence, the details are omitted. In +the following, we focus on proving the presented properties. +• property 1: We notice that, when 0 ≤ ∆′ ≤ tmax − 1 and ∆ ≥ max{1, ∆′}, +P∆,∆′(1) = +tmax +� +t=∆′ +ptP t +∆,∆′(1) + pt+P t+ +∆,∆′(1). +Then, we divide the discussion into the following two cases. +– ∆ ≥ max{1, ∆′} indicates that ∆ > 0 and ∆′ < ∆ + tmax. Hence, P t+ +∆,∆′(1) is +independent of ∆. +– ∆ ≥ max{1, ∆′} indicates that ∆ > 0 and ∆′ < ∆+t. Hence, P t +∆,∆′(1) is independent +of ∆ for any feasible t. +Combining together, we can conclude that property 1 holds. +January 18, 2023 +DRAFT + +40 +• property 2: We notice that, when ∆′ ≥ tmax, +P∆,∆′(1) = pt′P t′ +∆,∆′(1) + pt+P t+ +∆,∆′(1). +Then, we divide the discussion into the following two cases. +– Since t′ = ∆′−∆, P t′ +∆,∆′(1) = P t′ +∆,∆+t′(1). Then, we know that P t′ +∆,∆′(1) is independent +of ∆ > 0 when t′ > 0 and P t′ +∆,∆′(1) = 0 when t′ ≤ 0 by definition. Hence, P t′ +∆,∆′(1) +depends on t′. +– When ∆′ ≥ tmax and ∆′ ̸= ∆ + tmax, P t+ +∆,∆′(1) = 0 for ∆ > 0. Also, P t+ +∆,∆′(1) is +independent of ∆ > 0 when ∆′ = ∆ + tmax. Hence, P t+ +∆,∆′(1) depends only on t′. +Combining together, property 2 holds. +• property 3: When ∆′ > ∆ + tmax, the property holds apparently. When tmax − 1 < ∆′ < +∆ + 1, +P∆,∆′(1) = pt′P t′ +∆,∆′(1) + pt+P t+ +∆,∆′(1), +where t′ ≤ 0. Then, by definition, P t′ +∆,∆′(1) = 0. 
Moreover, we recall that tmax > 1, which +indicates that P t+ +∆,∆′(1) = 0. Hence, property 3 holds. +APPENDIX D +PROOF OF THEOREM 1 +We recall that π∆ satisfies (6) and (9). Then, plugging in the probabilities yields the following +system of linear equations. +π0 =(1 − p)π0 + p +τ−1 +� +i=1 +πi + +∞ +� +i=τ +Pi,0(1)πi +=(1 − p)π0 + p +τ−1 +� +i=1 +πi + P1,0(1) +∞ +� +i=τ +πi. +(17) +π1 = pπ0 + +∞ +� +i=τ +Pi,1(1)πi = pπ0 + P1,1(1) +∞ +� +i=τ +πi. +(18) +For each 2 ≤ ∆ ≤ tmax − 1, +π∆ = +� +� +� +� +� +� +� +� +� +� +� +(1 − p)π∆−1 + Pτ,∆(1) +∞ +� +i=τ +πi +∆ − 1 < τ, +∆−1 +� +i=τ +Pi,∆(1)πi + P∆,∆(1) +∞ +� +i=∆ +πi +∆ − 1 ≥ τ. +(19) +January 18, 2023 +DRAFT + +41 +For each tmax ≤ ∆ ≤ ω − 1, +π∆ = +� +� +� +� +� +� +� +� +� +(1 − p)π∆−1 +∆ − 1 < τ, +∆−1 +� +i=τ +Pi,∆(1)πi +∆ − 1 ≥ τ. +For each ∆ ≥ ω, +π∆ = +∆−1 +� +i=∆−tmax +Pi,∆(1)πi. +(20) +τ−1 +� +i=0 +πi + ET +∞ +� +i=τ +πi = 1. +Note that we can pull the state transition probabilities in (17), (18), and (19) out of the summation +due to property 1 in Lemma 1 and Lemma 2. Then, we sum (20) over ∆ from ω to ∞. +∞ +� +i=ω +πi = +∞ +� +i=ω +i−1 +� +k=i−tmax +Pk,i(1)πk. +(21) +We delve deep into the right hand side (RHS) of (21). To this end, we expand the first summation, +which yields +RHS = +ω−1 +� +k=τ+1 +Pk,ω(1)πk + +ω +� +k=τ+2 +Pk,ω+1(1)πk + · · · + +ω+tmax−2 +� +k=ω−1 +Pk,ω+tmax−1(1)πk+ +ω+tmax−1 +� +k=ω +Pk,ω+tmax(1)πk + · · · +Then, we rearrange the summation. +RHS =Pτ+1,ω(1)πτ+1 + +2 +� +k=1 +Pτ+2,ω+k−1(1)πτ+2 + · · · + +tmax +� +k=1 +Pω−1,ω+k−1(1)πω−1+ +tmax +� +k=1 +Pω,ω+k(1)πω + +tmax +� +k=1 +Pω+1,ω+k+1(1)πω+1 + · · · +Leveraging property 2 in Lemma 1 and Lemma 2, we have +RHS = +ω−1 +� +i=τ+1 +� +i +� +k=τ+1 +Pi,tmax+k(1) +� +πi + +tmax +� +i=1 +� +Pω,ω+i(1) +� � ∞ +� +k=ω +πk +� +. +We define Π ≜ �∞ +i=ω πi. Then, equation (21) becomes the following. +Π = +ω−1 +� +i=τ+1 +� +i +� +k=τ+1 +Pi,tmax+k(1) +� +πi + +tmax +� +i=1 +� +Pω,ω+i(1) +� +Π. 
+(22) +Finally, replacing (20) with (22) and applying the definition of Π yield a system of linear +equations with finite size as presented in the theorem. +January 18, 2023 +DRAFT + +42 +APPENDIX E +PROOF OF COROLLARY 1 +We start with τ = 0. In this case, ω = tmax + 1 and the system of linear equations becomes +to the following. +π∆ = +∞ +� +i=0 +Pi,∆(1)πi = +� +� +� +� +� +� +� +� +� +� +� +P0,0(1)π0 + P1,0(1) +∞ +� +i=1 +πi +∆ = 0, +∆−1 +� +i=0 +Pi,∆(1)πi + P∆,∆(1) +∞ +� +i=∆ +πi +1 ≤ ∆ ≤ tmax. +(23) +Π = +tmax +� +i=1 +� +i +� +k=1 +Pi,tmax+k(1) +� +πi + +tmax +� +i=1 +Ptmax+1,tmax+1+i(1)Π. +(24) +ET +∞ +� +i=0 +πi = 1. +(25) +We first combine (23) and (25), which yields +π∆ = +� +� +� +� +� +� +� +� +� +� +� +P0,0(1)π0 + P1,0(1) +� 1 +ET − π0 +� +∆ = 0, +∆−1 +� +i=0 +Pi,∆(1)πi + P∆,∆(1) +� +1 +ET − +∆−1 +� +i=0 +πi +� +1 ≤ ∆ ≤ tmax. +Then, we have +π0 = +P1,0(1) +ET[1 − P0,0(1) + P1,0(1)]. +According to (24), we obtain +Π = +tmax +� +i=1 +� +i +� +k=1 +Pi,tmax+k(1) +� +πi +1 − +tmax +� +i=1 +Ptmax+1,tmax+1+i(1) +. +Then, we consider the case of τ = 1. In this case, ω = tmax + 2 and the system of linear +equations reduces to the following. +π0 = (1 − p)π0 + P1,0(1) +∞ +� +i=1 +πi. +(26) +π1 = pπ0 + P1,1(1) +∞ +� +i=1 +πi. +π∆ = +∆−1 +� +i=1 +Pi,∆(1)πi + P∆,∆(1) +∞ +� +i=∆ +πi, +2 ≤ ∆ ≤ tmax − 1. +January 18, 2023 +DRAFT + +43 +π∆ = +∆−1 +� +i=1 +Pi,∆(1)πi, +tmax ≤ ∆ ≤ tmax + 1. +(27) +Π = +tmax+1 +� +i=2 +� +i +� +k=2 +Pi,tmax+k(1) +� +πi + +tmax +� +i=1 +Ptmax+2,tmax+2+i(1)Π. +(28) +π0 + ET +∞ +� +i=1 +πi = 1. +(29) +We first combine (26) and (29), which yields +π0 = (1 − p)π0 + P1,0(1) +�1 − π0 +ET +� +. +Hence, we have +π0 = +P1,0(1) +pET + P1,0(1). +Similarly, +π1 = pP1,0(1) + pP1,1(1) +pET + P1,0(1) +. +For each 2 ≤ ∆ ≤ tmax − 1, +π∆ = +∆−1 +� +i=1 +Pi,∆(1)πi + P∆,∆(1) +� +1 − π0 +ET +− +∆−1 +� +i=1 +πi +� +. +(30) +According to the property 3 in Lemma 1 and Lemma 2, we know that P∆,∆(1) = 0 when +tmax ≤ ∆ ≤ tmax + 1. 
Hence, we can combine (27) and (30), which yields +π∆ = +∆−1 +� +i=1 +Pi,∆(1)πi + P∆,∆(1) +� +1 − π0 +ET +− +∆−1 +� +i=1 +πi +� +, +2 ≤ ∆ ≤ tmax + 1. +Finally, according to (28), we obtain +Π = +tmax+1 +� +i=2 +� +i +� +k=2 +Pi,tmax+k(1) +� +πi +1 − +tmax +� +i=1 +Ptmax+2,tmax+2+i(1) +. +January 18, 2023 +DRAFT + +44 +APPENDIX F +PROOF OF LEMMA 3 +We recall that Ck(∆) is defined as the expected AoII k time slots after the transmission +starts at state (∆, 0, −1), given that the transmission is still in progress. With this in mind, we +start with the case of ∆ = 0. As AoII either increases by one or decreases to zero, we know +Ck(0) ∈ {0, ..., k}. Then, we distinguish between the following cases. +• Ck(0) = 0 when the receiver’s estimate is correct k time slots after the transmission starts. +Since ∆ = 0, we can easily conclude that Ck(0) = 0 happens with probability p(k). +• Ck(0) = h, where 1 ≤ h ≤ k, happens when the receiver’s estimate is correct at the +(k − h)th time slot after the transmission starts, then, the source flips the state and stays in +the same state for the remaining h − 1 time slots. Hence, Ck(0) = h, where 1 ≤ h ≤ k, +happens with probability p(k−h)p(1 − p)h−1. +Combining together, we obtain +Ck(0) = +k +� +h=1 +hp(k−h)p(1 − p)h−1. +Then, we consider the case of ∆ > 0. In this case, the transmission starts when the receiver’s +estimate is incorrect and Ck(∆) ∈ {0, 1, ...k − 1, ∆ + k}. Then, we distinguish between the +following cases. +• Ck(∆) = 0 when the receiver’s estimate is correct at the kth time slot after the transmission +starts, which happens with probability (1 − p(k)). +• Ck(∆) = h, where h ∈ {1, 2, ..., k − 1}, happens when the receiver’s estimate is correct at +the (k−h)th slot after the transmission starts. Then, the source flips the state and stays in the +same state for the remaining h−1 time slots. Hence, Ck(∆) = h, where h ∈ {1, 2, ..., k−1}, +happens with probability (1 − p(k−h))p(1 − p)h−1. 
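The closed form for Ck(0) derived above can be cross-checked by an exact dynamic program over the AoII distribution during the transmission (estimate fixed, source flipping with probability p per slot), again under the assumption p(t) = (1 + (1 − 2p)^t)/2 for the symmetric binary source.

```python
# Check C_k(0) = sum_{h=1}^{k} h * p(k-h) * p * (1-p)^(h-1) against an exact
# dynamic program over the AoII distribution during a transmission starting at
# Delta = 0. Assumption: p(t) = (1 + (1 - 2p)^t) / 2.

def p_t(p, t):
    return (1 + (1 - 2 * p) ** t) / 2

def ck0_closed_form(p, k):
    return sum(h * p_t(p, k - h) * p * (1 - p) ** (h - 1) for h in range(1, k + 1))

def ck0_dp(p, k):
    """Exact expected AoII after k slots, starting from Delta = 0."""
    dist = {0: 1.0}                      # distribution of Delta
    for _ in range(k):
        new = {0: 0.0}
        for d, q in dist.items():
            if d == 0:
                new[0] += q * (1 - p)    # source still matches the estimate
                new[1] = new.get(1, 0.0) + q * p
            else:
                new[0] += q * p          # source flips back -> estimate correct
                new[d + 1] = new.get(d + 1, 0.0) + q * (1 - p)
        dist = new
    return sum(d * q for d, q in dist.items())

for p in (0.1, 0.3):
    for k in (1, 2, 5, 8):
        assert abs(ck0_dp(p, k) - ck0_closed_form(p, k)) < 1e-12
```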
+• Ck(∆) = ∆ + k when the estimate at the receiver side is wrong throughout the k time slots
+after the transmission starts. Since ∆ > 0 and the receiver's estimate does not change,
+Ck(∆) = ∆ + k happens with probability (1 − p)^k.
+Combining together, we obtain
+Ck(∆) = Σ_{h=1}^{k−1} h(1 − p(k−h))p(1 − p)^{h−1} + (∆ + k)(1 − p)^k,  ∆ > 0.
+APPENDIX G
+PROOF OF THEOREM 2
+We recall that when τ = ∞, the transmitter never initiates any transmission. Hence,
+the receiver's estimate never changes. Without loss of generality, we assume the receiver's
+estimate X̂k = 0 for all k. The first step in calculating the expected AoII achieved by the
+threshold policy with τ = ∞ is to calculate the stationary distribution of the induced DTMC.
+To this end, π∆ satisfies the following equations.
+π0 = (1 − p)π0 + p Σ_{i=1}^∞ πi.  (31)
+π1 = pπ0.
+π∆ = (1 − p)π∆−1,  ∆ ≥ 2.
+Σ_{i=0}^∞ πi = 1.  (32)
+Combining (31) and (32) yields
+π0 = (1 − p)π0 + p(1 − π0).
+Hence, π0 = 1/2. Then, we get
+π1 = p/2,  π∆ = (1 − p)^{∆−1}π1 = p(1 − p)^{∆−1}/2,  ∆ ≥ 2.
+Combining together, we have
+π0 = 1/2,  π∆ = p(1 − p)^{∆−1}/2,  ∆ ≥ 1.
+Since the transmitter never makes any transmission attempts, the cost for being at state
+(∆, 0, −1) is nothing but ∆ itself. Hence, the expected AoII is
+¯∆∞ = Σ_{∆=1}^∞ ∆ p(1 − p)^{∆−1}/2 = 1/(2p).
+APPENDIX H
+PROOF OF THEOREM 3
+We recall that, for ∆ ≥ ω, π∆ satisfies
+π∆ = Σ_{i=∆−tmax}^{∆−1} P_{i,∆}(1)πi = Σ_{i=1}^{tmax} P_{i−tmax+∆−1,∆}(1)π_{i−tmax+∆−1},  ∆ ≥ ω.
+We first focus on the system under Assumption 1. We know from Lemma 1 that P_{∆,∆′}(1) =
+p_{t′}P^{t′}_{∆,∆′}(1), where t′ = ∆′ − ∆, when ∆′ ≥ ω. Hence,
+π∆ = Σ_{i=1}^{tmax} p_{tmax+1−i}P^{tmax+1−i}_{i−tmax+∆−1,∆}(1)π_{i−tmax+∆−1},  ∆ ≥ ω.
+Renaming the variables yields
+π∆ = Σ_{t=1}^{tmax} p_t P^t_{∆−t,∆}(1)π_{∆−t},  ∆ ≥ ω.
+To proceed, we define, for each 1 ≤ t ≤ tmax,
+π_{∆,t} ≜ p_t P^t_{∆−t,∆}(1)π_{∆−t},  ∆ ≥ ω.  (33)
+Note that Σ_{t=1}^{tmax} π_{∆,t} = π_∆.
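The Appendix G result admits a direct numerical check: the claimed π∆ satisfies the balance equations (31)–(32) of the idle chain, and the truncated mean recovers ¯∆∞ = 1/(2p).

```python
# Numerical check of Appendix G: with tau = infinity the AoII evolves as
# 0 -> 1 w.p. p, and, from Delta >= 1, Delta -> 0 w.p. p and Delta -> Delta + 1
# w.p. 1 - p. Claimed stationary distribution: pi_0 = 1/2,
# pi_Delta = p (1-p)^(Delta-1) / 2, and expected AoII 1/(2p).

def stationary(p, n):
    """pi_0 .. pi_n of the claimed stationary distribution."""
    return [0.5] + [p * (1 - p) ** (d - 1) / 2 for d in range(1, n + 1)]

p, n = 0.3, 2000
pi = stationary(p, n)

# Balance equations: pi_0 = (1-p) pi_0 + p (1 - pi_0); pi_1 = p pi_0;
# pi_Delta = (1-p) pi_{Delta-1} for Delta >= 2.
assert abs(pi[0] - ((1 - p) * pi[0] + p * (1 - pi[0]))) < 1e-12
assert abs(pi[1] - p * pi[0]) < 1e-12
assert all(abs(pi[d] - (1 - p) * pi[d - 1]) < 1e-12 for d in range(2, n + 1))

# Normalization and expected AoII (truncation error is O((1-p)^n)).
assert abs(sum(pi) - 1.0) < 1e-9
expected_aoii = sum(d * pi[d] for d in range(n + 1))
assert abs(expected_aoii - 1 / (2 * p)) < 1e-9
```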
Then, for a given 1 ≤ t ≤ tmax, we multiple both side of (33) by +C(∆ − t, 1) and sum over ∆ from ω to ∞. Hence, we have +∞ +� +i=ω +C(i − t, 1)πi,t = +∞ +� +i=ω +C(i − t, 1)ptP t +i−t,i(1)πi−t. +(34) +We define ∆′ +t ≜ C(∆, 1) − C(∆ − t, 1) where ∆ > t. Then, according to (11), we have +∆′ +t = +tmax +� +i=1 +pi +� +Ci(∆, 1) − Ci(∆ − t, 1) +� +. +According to Lemma 3, we have +Ci(∆ − t, 1) = ∆ − t + +i−1 +� +h=1 +�h−1 +� +k=1 +k(1 − p(h−k))p(1 − p)k−1 + (∆ − t + h)(1 − p)h +� +. +Ci(∆, 1) = ∆ + +i−1 +� +h=1 +�h−1 +� +k=1 +k(1 − p(h−k))p(1 − p)k−1 + (∆ + h)(1 − p)h +� +. +Subtracting the two equations yields +Ci(∆, 1) − Ci(∆ − t, 1) = t + +i−1 +� +h=1 +� +t(1 − p)h +� += t − t(1 − p)i +p +. +January 18, 2023 +DRAFT + +47 +Then, we have +∆′ +t = +tmax +� +i=1 +pi +�t − t(1 − p)i +p +� +. +We notice that ∆′ +t is independent of ∆ when ∆ > t. Hence, (34) can be rewritten as +∞ +� +i=ω +� +C(i, 1) − ∆′ +t +� +πi,t = +∞ +� +i=ω−t +C(i, 1)ptP t +i,i+t(1)πi. +Then, we define Πt ≜ �∞ +i=ω πi,t and Σt ≜ �∞ +i=ω C(i, 1)πi,t. We notice that P t +∆,∆+t(1) is +independent of ∆ when ∆ > 0. Hence, we obtain +∞ +� +i=ω +C(i, 1)πi,t − ∆′ +t +∞ +� +i=ω +πi,t = ptP t +1,1+t(1) +∞ +� +i=ω−t +C(i, 1)πi. +Plugging in the definitions yields +Σt − ∆′ +tΠt = ptP t +1,1+t(1) +� ω−1 +� +i=ω−t +C(i, 1)πi + Σ +� +. +Summing the above equation over t from 1 to tmax yields +tmax +� +t=1 +� +Σt − ∆′ +tΠt +� += +tmax +� +t=1 +� +ptP t +1,1+t(1) +� ω−1 +� +i=ω−t +C(i, 1)πi + Σ +�� +. +Rearranging the above equation yields +Σ − +tmax +� +t=1 +∆′ +tΠt = +tmax +� +t=1 +� +ptP t +1,1+t(1) +� ω−1 +� +i=ω−t +C(i, 1)πi +�� ++ +tmax +� +t=1 +� +ptP t +1,1+t(1) +� +Σ. +(35) +Hence, the closed-form expression of Σ is +Σ = +tmax +� +t=1 +� +ptP t +1,1+t(1) +� ω−1 +� +i=ω−t +C(i, 1)πi +� ++ ∆′ +tΠt +� +1 − +tmax +� +t=1 +� +ptP t +1,1+t(1) +� +. +In the following, we calculate Πt. 
Combining the definition of Πt with (33), we have +Πt ≜ +∞ +� +i=ω +πi,t = +∞ +� +i=ω +� +ptP t +i−t,i(1)πi−t +� += +∞ +� +i=ω−t +� +ptP t +i,i+t(1)πi +� +. +Since P t +∆,∆+t(1) is independent of ∆ when ∆ > 0, we have +Πt = ptP t +1,1+t(1) +� ω−1 +� +i=ω−t +πi + Π +� +. +Combining together, we recover the results for Assumptio 1 as presented in the first part of the +theorem. +January 18, 2023 +DRAFT + +48 +In the sequel, we focus on Assumption 2. To this end, we follow similar steps as detailed +above. We recall from Lemma 2, P∆,∆′(1) = pt′P t′ +∆,∆′(1)+pt+P t+ +∆,∆′(1) where t′ = ∆′ −∆ when +∆′ ≥ ω. Then, +π∆ = +tmax +� +i=1 +� +ptmax+1−iP tmax+1−i +∆−tmax+i−1,∆(1) + pt+P t+ +∆−tmax+i−1,∆(1) +� +π∆−tmax−1+i, +∆ ≥ ω. +Renaming the variables yields +π∆ = +tmax +� +t=1 +� +ptP t +∆−t,∆(1) + pt+P t+ +∆−t,∆(1) +� +π∆−t += +tmax +� +t=1 +Υ(∆, t)π∆−t, +∆ ≥ ω, +where Υ(∆, t) ≜ ptP t +∆−t,∆(1)+pt+P t+ +∆−t,∆(1). We notice that Υ(∆, t) is independent of ∆ when +∆ ≥ ω. To proceed, we define, for each 1 ≤ t ≤ tmax, +π∆,t ≜ Υ(∆, t)π∆−t, +∆ ≥ ω. +Note that �tmax +t=1 π∆,t = π∆. Then, for a given 1 ≤ t ≤ tmax, we have +∞ +� +i=ω +C(i − t, 1)πi,t = +∞ +� +i=ω +C(i − t, 1)Υ(i, t)πi−t. +(36) +We define ∆′ +t ≜ C(∆, 1) − C(∆ − t, 1) where ∆ > t. Then, according to (12), we have +∆′ +t = +tmax +� +i=1 +pi +� +Ci(∆, 1) − Ci(∆ − t, 1) +� ++ pt+ +� +Ctmax(∆, 1) − Ctmax(∆ − t, 1) +� +. +By Lemma 3, we have +Ci(∆ − t, 1) = ∆ − t + +i−1 +� +h=1 +�h−1 +� +k=1 +k(1 − p(h−k))p(1 − p)k−1 + (∆ − t + h)(1 − p)h +� +. +Ci(∆, 1) = ∆ + +i−1 +� +h=1 +�h−1 +� +k=1 +k(1 − p(h−k))p(1 − p)k−1 + (∆ + h)(1 − p)h +� +. +Subtracting the two equations yields +Ci(∆, 1) − Ci(∆ − t, 1) = t + +i−1 +� +h=1 +� +t(1 − p)h +� += t − t(1 − p)i +p +. +Then, we have +∆′ +t = +tmax +� +i=1 +pi +�t − t(1 − p)i +p +� ++ pt+ +�t − t(1 − p)tmax +p +� +, +1 ≤ t ≤ tmax. +January 18, 2023 +DRAFT + +49 +We notice that ∆′ +t = C(∆, 1) − C(∆ − t, 1) is independent of ∆ when ∆ > t. 
Hence, equation +(36) can be written as +∞ +� +i=ω +� +C(i, 1) − ∆′ +t +� +πi,t = +∞ +� +i=ω−t +C(i, 1)Υ(i + t, t)πi. +Then, we define Πt ≜ �∞ +i=ω πi,t and Σt ≜ �∞ +i=ω C(i, 1)πi,t. We recall that Υ(∆, t) is independent +of ∆ when ∆ ≥ ω. Hence, plugging in the definitions yields +Σt − ∆′ +tΠt = +ω−1 +� +i=ω−t +Υ(i + t, t)C(i, 1)πi + Υ(ω + t, t)Σ. +Summing the above equation over t from 1 to tmax yields +tmax +� +t=1 +� +Σt − ∆′ +tΠt +� += +tmax +� +t=1 +� ω−1 +� +i=ω−t +Υ(i + t, t)C(i, 1)πi + Υ(ω + t, t)Σ +� +. +Rearranging the above equation yields +Σ − +tmax +� +t=1 +∆′ +tΠt = +tmax +� +t=1 +� ω−1 +� +i=ω−t +Υ(i + t, t)C(i, 1)πi +� ++ +tmax +� +t=1 +Υ(ω + t, t)Σ. +Then, the closed-form expression of Σ is +Σ = +tmax +� +t=1 +�� ω−1 +� +i=ω−t +Υ(i + t, t)C(i, 1)πi +� ++ ∆′ +tΠt +� +1 − +tmax +� +t=1 +Υ(ω + t, t) +. +In the following, we calculate Πt. To this end, we have +Πt ≜ +∞ +� +i=ω +πi,t = +∞ +� +i=ω +Υ(i, t)πi−t = +∞ +� +i=ω−t +Υ(i + t, t)πi. +Since Υ(∆, t) is independent of ∆ if ∆ ≥ ω, we have +Πt = +ω−1 +� +i=ω−t +Υ(i + t, t)πi + Υ(ω + t, t)Π, +1 ≤ t ≤ tmax. +Combining together, we recover the results for the system under Assumption 2 as presented in +the second half of the theorem. +January 18, 2023 +DRAFT + +50 +APPENDIX I +PROOF OF LEMMA 5 +Leveraging Lemma 4, the result can be proved by mathematical induction. To start with, we +initialize Vγ,0(s) = 0 for all s. Hence, the base case (i.e., ν = 0) is true. Then, we assume the +monotonicity holds at iteration ν, and check whether the monotonicity still holds at iteration +ν + 1. We recall that the estimated value function Vγ,ν+1(s) is updated using (14). Hence, the +structural properties are embedded in the state transition probability Ps,s′(a). 
Using the state +transition probabilities in Appendix A, equation (14) for the state with ∆ > 0 can be written as +Vγ,ν+1(∆, t, i) = min +a∈{0,1} +� +∆ + γ +� +∆′,t′,i′ +Pr[(∆′, t′, i′) | (∆, t, i), a]Vγ,ν(∆′, t′, i′) +� += min +a∈{0,1} +� +∆ + γ +� +t′,i′ +� +Pr[(∆ + 1, t′, i′) | (∆, t, i), a]Vγ,ν(∆ + 1, t′, i′)+ +Pr[(0, t′, i′) | (∆, t, i), a]Vγ,ν(0, t′, i′) +�� +. +Moreover, for any ∆1 > 0 and ∆2 > 0, we have +Pr[(∆1 + 1, t′, i′) | (∆1, t, i), a] = Pr[(∆2 + 1, t′, i′) | (∆2, t, i), a]. +Pr[(0, t′, i′) | (∆1, t, i), a] = Pr[(0, t′, i′) | (∆2, t, i), a]. +Let V a +γ,ν+1(∆, t, i) be the resulting Vγ,ν+1(∆, t, i) when action a is chosen. Then, we have +V a +γ,ν+1(∆ + 1, t, i) − V a +γ,ν+1(∆, t, i) = +1 + γ +� +t′,i′ +� +Pr[(∆ + 1, t′, i′) | (∆, t, i), a] +� +Vγ,ν(∆ + 2, t′, i′) − Vγ,ν(∆ + 1, t′, i′) +�� +. +Combining with the assumption for iteration ν, we can easily conclude that V a +γ,ν+1(∆+1, t, i) ≥ +V a +γ,ν+1(∆, t, i) when ∆ > 0. Then, by mathematical induction, we can conclude that Lemma 5 +is true. +APPENDIX J +PROOF OF THEOREM 4 +We first define hγ(s) ≜ Vγ(s)−Vγ(sref) as the relative value function and choose the reference +state sref = (0, 0, −1). For simplicity, we abbreviate the reference state sref as 0 for the remainder +of this proof. Then, we show that M verifies the two conditions given in [21]. As a result, the +existence of the optimal policy is guaranteed. +January 18, 2023 +DRAFT + +51 +1) There exists a non-negative N such that −N ≤ hγ(s) for all s and γ: Leveraging Lemma 5, +we can easily conclude that hγ(s) is also increasing in ∆ when ∆ > 0. In the following, +we consider the policy φ being the threshold policy with τ = 0 as defined in Section IV. +Then, we know that policy φ induces an irreducible ergodic Markov chain and the expected +cost is finite. Let cs,s′(φ) be the expected cost of a first passage from s ∈ S to s′ ∈ S +when policy φ is adopted. Then, by [21, Proposition 4], we know that cs,0(φ) is finite. 
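The monotonicity established in Lemma 5 (Appendix I) can be illustrated on a simplified, hypothetical version of the chain: states ∆ = 0, …, N with instantaneous cost ∆, an "idle" action that resets the AoII with probability p, and a "transmit" action that resets it with a made-up higher probability q. This toy model is not the exact system dynamics, but it shares the key structural property used in the proof (the transition probabilities out of ∆ do not depend on ∆), and discounted value iteration on it keeps every iterate nondecreasing in ∆.

```python
# Toy illustration of Lemma 5: discounted value iteration on a simplified
# AoII chain (states Delta = 0..N, instantaneous cost Delta). Hypothetical
# actions: idle resets w.p. p, transmit resets w.p. q > p; otherwise Delta grows.

def value_iteration(p, q, gamma, n_states, n_iters):
    V = [0.0] * n_states
    for _ in range(n_iters):
        newV = []
        for d in range(n_states):
            up = min(d + 1, n_states - 1)        # truncate growth at the boundary
            idle = p * V[0] + (1 - p) * V[up]
            send = q * V[0] + (1 - q) * V[up]
            newV.append(d + gamma * min(idle, send))
        V = newV
    return V

V = value_iteration(p=0.3, q=0.8, gamma=0.9, n_states=60, n_iters=500)
# The value function is nondecreasing in Delta, mirroring Lemma 5.
assert all(V[d] <= V[d + 1] for d in range(len(V) - 1))
```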
+Meanwhile, hγ(s) ≤ cs,0(φ) as is given in the proof of [21, Proposition 5]. Hence, we +have Vγ(0) − Vγ(s) ≤ c0,s(φ) and Vγ(0) − Vγ(s) = −hγ(s). Hence, we have hγ(s) ≥ +−c0,s(φ). Combining with the monotonicity proved in Lemma 5, we can choose N = +maxs∈G{c0,s(φ)}, where G = {s = (∆, t, i) : ∆ ∈ {0, 1}}. This condition indicates +that [21, Assumption 2] holds. +2) M has a stationary policy φ inducing an irreducible, ergodic Markov chain. Moreover, +the resulting expected cost is finite: We consider the policy φ being the threshold policy +with τ = 0. Then, according to Section IV, it induces an irreducible, ergodic Markov chain +and the resulting expected cost is finite. Then, according to [21, Proposition 5], we can +conclude that [21, Assumptions 1 and 3] hold. +As the two conditions are verified, the existence of the optimal policy is guaranteed by [21, +Theorem]. Moreover, the minimum expected cost is independent of the initial state. +APPENDIX K +PROOF OF THEOREM 5 +We inherit the definitions and notations introduced in Section V-A. We further define vγ,n(·) +as the minimum expected γ-discounted cost for operating the system from time 0 to time n − 1. +It is known that limn→∞ vγ,n(s) = Vγ(s), for all s ∈ S. We also define the expected cost under +policy φ as +Jφ(s) = lim sup +K→∞ +1 +K Eφ +�K−1 +� +k=0 +C(st) | s +� +, +and J(s) ≜ infφ Jφ(s) is the best that can be achieved. V (m) +φ,γ (s), V (m) +γ +(s), v(m) +γ,n (s), J(m) +φ +(s), +J(m)(s), and h(m) +γ +(s) are defined analogously for M(m). With the above definitions in mind, we +show that our system verifies the two assumptions given in [22]. +• Assumption 1: There exists a non-negative (finite) constant L, a non-negative (finite) function +M(·) on S, and constants m0 and γ0 ∈ [0, 1), such that −L ≤ h(m) +γ +(s) ≤ M(s), for s ∈ S(m), +January 18, 2023 +DRAFT + +52 +m ≥ m0, and γ ∈ (γ0, 1): L can be chosen in the same way as presented in the proof of +Theorem 5. 
More precisely, L = maxs∈G{h(m) +γ +(s)}, where G = {s = (∆, t, i) : ∆ ∈ {0, 1}}. +Let cs,0(φ) be the expected cost of a first passage from s ∈ S to the reference state 0 when +policy φ is adopted and c(m) +x,0 (φ) is defined analogously for M(m). In the following, we +consider the policy φ being the threshold policy with τ = ∞. We recall from Section V +that the policy φ induces an irreducible ergodic Markov chain, and the expected cost is finite. +Hence, h(m) +γ +(s) ≤ c(m) +s,0 (φ) by [21, Proposition 5] and cx,0(φ) is finite by [21, Proposition +4]. We also know from the proof of [22, Corollary 4.3] that cs,0(φ) satisfies the following +equation. +cs,0(φ) = C(s) + +� +s′∈S−{0} +P φ +ss′cs′,0(φ), +(37) +where P φ +ss′ is the state transition probability from state s to s′ under policy φ for M. P (m),φ +ss′ +is defined analogously for M(m). We can verify in a similar way to the proof of Lemma 5 +that cs,0(φ) is increasing in ∆ > 0. The proof is omitted here for the sake of space. Then, +� +y∈S(m) +−1 +P (m),φ +sy +cy,0(φ) = +� +y∈S(m) +−1 +P φ +sycy,0(φ) + +� +y∈S(m) +−1 +� +� +� +z∈S\S(m) +P φ +szqz(y) +� +� cy,0(φ) += +� +y∈S(m) +−1 +P φ +sycy,0(φ) + +� +z∈S\S(m) +P φ +sz +� +� +� +� +y∈S(m) +−1 +qz(y)cy,0(φ) +� +� +� +≤ +� +y∈S(m) +−1 +P φ +sycy,0(φ) + +� +z∈S\S(m) +P φ +szcz,0(φ) += +� +y∈S−{0} +P φ +sycy,0(φ), +(38) +where S(m) +−1 = S(m) − {0} and qs′(s) = 1{t′ = t; i′ = i}, which is an indicator function +with value 1 when the transitions to state s′ are redirected to state s. Otherwise, qs′(s) = 0. +Moreover, � +s∈S(m) +−1 qs′(s) = 1. Applying (38) to (37) yields +cs,0(φ) ≥ C(s) + +� +y∈S(m)−{0} +P (m),φ +sy +cy,0(φ). +Bearing in mind that c(m) +s,0 (φ) satisfies the following. +c(m) +s,0 (φ) = C(s) + +� +y∈S(m)−{0} +P (m),φ +sy +c(m) +y,0 (φ). +Hence, we can conclude that c(m) +s,0 (φ) ≤ cs,0(φ). Then, we can choose M(s) = cs,0(φ) < ∞. 
+January 18, 2023 +DRAFT + +53 +• Assumption 2: lim supm→∞ J(m) ≜ J∗ < ∞ and J∗ ≤ J(s) for all s ∈ S: We first show +that [22, Proposition 5.1] is true. Since we redistribute the transitions in a way such that, +for each s′ ∈ S − S(m), +� +y∈S(m) +qs′(y)vγ,n(y) = vγ,n(s), +where s = (m, t′, i′). Hence, We only need to verify that, for each s′ ∈ S − S(m) and +s = (m, t′, i′), +vγ,n(s) ≤ vγ,n(s′). +(39) +To this end, we notice that vγ,n(s) satisfies the following inductive form [22]. +vγ,n+1(s) = min +a +� +C(s) + γ +� +s′∈S +Ps,s′(a)vγ,n(s′) +� +. +By following similar steps to those in the proof of Lemma 5, we can prove the monotonicity +of vγ,n(s) for ∆ > 0 and n ≥ 0. The proof is omitted for the sake of space. Hence, (39) +is true since ∆′ > m > 0. Apparently, J(s) is finite for s ∈ S. Then, according to [22, +Corollary 5.2], assumption 2 is true. +Consequently, by [22, Theorem 2.2], we know +• There exists an average cost optimal stationary policy for M(m). +• Any limit point of the sequence of optimal policies for M(m) is optimal for M. +APPENDIX L +PROOF OF THEOREM 6 +The proof is based on the results presented in [23, pp. 42-43]. To this end, we consider a +generic MDP M = (S, A, P, C). Let C(s, A) be the instant cost for being at state s ∈ S under +policy A. We also define P A +s,s′ as the probability that applying policy A at state s will lead to +state s′. Finally, V A(s) is defined as the value function resulting from the operation of policy +A. Since B is chosen over A, we have +C(s, B) + +� +s′∈S +P B +s,s′V A(s′) ≤ C(s, A) + +� +s′∈S +P A +s,s′V A(s′), +s ∈ S. +Then, we define +γs ≜ C(s, B) + +� +s′∈S +P B +s,s′V A(s′) − C(s, A) − +� +s′∈S +P A +s,s′V A(s′) ≤ 0, +s ∈ S. +January 18, 2023 +DRAFT + +54 +Meanwhile, both policies satisfy their own Bellman equation. 
+V A(s) + θA = C(s, A) + +� +s′∈S +P A +s,s′V A(s′), +s ∈ S, +V B(s) + θB = C(s, B) + +� +s′∈S +P B +s,s′V B(s′), +s ∈ S, +where θA and θB are the expected costs resulting from the operation of policy A and policy B, +respectively. Then, subtracting the two expressions and bringing in the expression for γs yield +V B(s) − V A(s) + θB − θA = γs + +� +s′∈S +P B +s,s′(V B(s′) − V A(s′)), +s ∈ S. +Let V ∆(s) ≜ V B(s) − V A(s) and θ∆ ≜ θB − θA. Then, we have +V ∆(s) + θ∆ = γs + +� +s′∈S +P B +s,s′V ∆(s′), +s ∈ S. +We know that +θ∆ = +� +s∈S +πB +s γs, +where πB +s is the steady-state probability of state s under policy B. Since πB +s is non-negative and +γs is non-positive, we can conclude that θ∆ ≤ 0. Consequently, θB ≤ θA. +Then, we prove that the resulting policy is optimal when the policy improvement step con- +verges. To this end, we prove this by contradiction. We assume there exists two policies A and +B such that θB < θA. Meanwhile, the policy improvement step has converged to policy A. Since +the policy has converged, we know γs ≥ 0 for all s ∈ S. Hence, θ∆ ≥ 0. Then, according to the +definition of θ∆, we have θB ≥ θA, which contradicts the assumption. Hence, superior policies +cannot go undiscovered. Then, we can conclude that the resulting policy is optimal when the +policy improvement step converges. +APPENDIX M +PROOF OF THEOREM 7 +The general procedure for the optimality proof can be summarized as follows. +1) Policy Evaluation: We calculate the value function resulting from the adoption of the +threshold policy with τ = 1. +2) Policy Improvement: We apply the value functions obtained in the previous step to Bellman +equation and verify that the resulting policy remains the same. +In the following, we elaborate on these two steps. +January 18, 2023 +DRAFT + +55 +a) Policy Evaluation: We first calculate the value function and the expected AoII under the +threshold policy with τ = 1. For simplicity of notation, we denote the policy as φ. 
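The policy improvement guarantee proved in Appendix L (Theorem 6) can be exercised on a small, entirely hypothetical two-state MDP: evaluate a baseline policy, improve greedily with respect to its relative value function, and confirm that the average cost does not increase. All numbers below are illustrative and not taken from the system model.

```python
# Toy check of the Theorem 6 argument on a hypothetical 2-state MDP:
# states {0 (good, cost 0), 1 (bad, cost 1)}; the action only matters in
# state 1, where "repair" (a=1) escapes faster than "idle" (a=0).

ESC = {0: 0.1, 1: 0.8}        # escape probability from state 1 under each action
P01 = 0.3                     # probability of leaving state 0
COST = {0: 0.0, 1: 1.0}

def evaluate(a):
    """Average cost and relative value h(1) (with h(0) = 0) of 'always a in state 1'."""
    stay = 1 - ESC[a]
    # Solve h(1) + theta = COST[1] + stay * h(1)  and  theta = COST[0] + P01 * h(1).
    theta = (COST[0] + P01 * COST[1] / (1 - stay)) / (1 + P01 / (1 - stay))
    h1 = (COST[1] - theta) / (1 - stay)
    return theta, h1

theta_A, h1_A = evaluate(0)                       # baseline policy A: always idle
# Policy improvement at state 1 using policy A's relative values:
q_values = {a: COST[1] + (1 - ESC[a]) * h1_A for a in (0, 1)}
b = min(q_values, key=q_values.get)               # greedy action -> policy B
theta_B, _ = evaluate(b)
assert b == 1                                     # repair is chosen
assert theta_B <= theta_A                         # improvement never hurts
```

The two asserts mirror the two halves of the proof: the greedy step picks a policy with γ_s ≤ 0, and the resulting average cost satisfies θ_B ≤ θ_A.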
Let V φ(∆) be +the value function of state (∆, 0, −1) resulting from the operation of policy φ. Then, combining +(16) with the expression of P∆,∆′(a) in Lemma 1 and Lemma 2, V φ(∆) satisfies the following +system of linear equations. +V φ(0) = −θφ + pV φ(1) + (1 − p)V φ(0), +(40) +for Assumption 1, +V φ(∆) = C(∆, 1) − ETθφ + +tmax +� +t=1 +� +pt +� t−1 +� +k=0 +P t +∆,k(1)V φ(k) + P t +∆,∆+t(1)V φ(∆ + t) +�� +, +∆ ≥ 1, +for Assumption 2, +V φ(∆) = C(∆, 1)−ETθφ + +tmax +� +t=1 +� +pt +� t−1 +� +k=0 +P t +∆,k(1)V φ(k) + P t +∆,∆+t(1)V φ(∆ + t) +�� ++ pt+ +�tmax−1 +� +k=0 +P t+ +∆,k(1)V φ(k) + P t+ +∆,∆+tmax(1)V φ(∆ + tmax) +� +, +∆ ≥ 1, +where θφ is the expected AoII resulting from the adoption of φ. It is difficult to solve the above +system of linear equations directly for the exact solution. However, as we will see later, some +structural properties of the value function are sufficient. These properties are summarized in the +following lemma. +Lemma 6. V φ(∆) satisfies the following equations. +V φ(1) − V φ(0) = θφ +p , +V φ(∆ + 1) − V φ(∆) = σ, +∆ ≥ 1, +where for Assumption 1, +σ = +tmax +� +t=1 +pt +�1 − (1 − p)t +p +� +1 − +tmax +� +t=1 +ppt(1 − p)t−1 +, +and, for Assumption 2, +σ = +tmax +� +t=1 +pt +�1 − (1 − p)t +p +� ++ pt+ +�1 − (1 − p)tmax +p +� +1 − +�tmax +� +t=1 +ppt(1 − p)t−1 + pt+(1 − p)tmax +� +. +January 18, 2023 +DRAFT + +56 +Proof. First of all, from (40), we have +θφ = p(V φ(1) − V φ(0)) ⇒ V φ(1) − V φ(0) = θφ +p . +Then, we show that V φ(∆ + 1) − V φ(∆) is constant for ∆ ≥ 1. We start with Assumption +1. According to Theorem 4, the optimal policy exists. Hence, the iterative policy evaluation +algorithm [19, pp.74] can be used to solve the system of linear equations for V φ(s). Let V φ +ν (s) +be the estimated value function at iteration ν of the iterative policy evaluation algorithm. Without +loss of generality, we assume V φ +0 (∆) = 0 for all ∆. Then, the value function is updated in the +following way. 
+V φ +ν+1(∆) = C(∆, 1)−ETθφ+ +tmax +� +t=1 +� +pt +� t−1 +� +k=0 +P t +∆,k(1)V φ +ν (k) + P t +∆,∆+t(1)V φ +ν (∆ + t) +�� +, +∆ ≥ 1. +Then, we have limν→∞ V φ +ν (∆) = V φ(∆). Hence, we can prove the desired results using mathe- +matical induction. The base case ν = 0 is true by initialization. Then, we assume V φ +ν (∆ + 1) − +V φ +ν (∆) = σν where σν is independent of ∆ ≥ 1. Then, we will exam whether V φ +ν+1(∆ + 1) − +V φ +ν+1(∆) is independent of ∆ ≥ 1. Leveraging the properties in Lemma 1, we have +V φ +ν+1(∆ + 1) − V φ +ν+1(∆) +=C(∆ + 1, 1) − ETθφ + +tmax +� +t=1 +� +pt +� t−1 +� +k=0 +P t +∆+1,k(1)V φ +ν (k) + P t +∆+1,∆+1+t(1)V φ +ν (∆ + t + 1) +�� +− +C(∆, 1) + ETθφ − +tmax +� +t=1 +� +pt +� t−1 +� +k=0 +P t +∆,k(1)V φ +ν (k) + P t +∆,∆+t(1)V φ +ν (∆ + t) +�� +=C(∆ + 1, 1) − C(∆, 1) + +tmax +� +t=1 +� +ptP t +∆,∆+t(1)σν +� +. +According to Lemma 3, we have +C(∆ + 1, 1) − C(∆, 1) = +tmax +� +t=1 +� +Ct(∆ + 1, 1) − Ct(∆, 1) +� +. +In the case of ∆ ≥ 1, we have +Ct(∆ + 1, 1) − Ct(∆, 1) = 1 + +t−1 +� +k=1 +� +(k + ∆ + 1)(1 − p)k − (k + ∆)(1 − p)k +� += 1 − (1 − p)t +p +, +1 ≤ t ≤ tmax. +Combining together, we obtain +C(∆ + 1, 1) − C(∆, 1) = +tmax +� +t=1 +� +pt +1 − (1 − p)t +p +� +. +January 18, 2023 +DRAFT + +57 +Hence, we can conclude that V φ +ν+1(∆ + 1) − V φ +ν+1(∆) is independent of ∆ when ∆ ≥ 1. Then, +by mathematical induction, V φ(∆) − V φ(∆ + 1) is independent of ∆ when ∆ ≥ 1. We denote +by σ the constant. Then, σ satisfies the following equation. +σ = V φ(∆) − V φ(∆ + 1) = +tmax +� +t=1 +�pt − pt(1 − p)t +p ++ ptp(1 − p)t−1σ +� +. +After some algebraic manipulations, we obtain +σ = +tmax +� +t=1 +pt +�1 − (1 − p)t +p +� +1 − +tmax +� +t=1 +ppt(1 − p)t−1 +. +Then, we show that V φ(∆+1)−V φ(∆) is independent of ∆ ≥ 1 under Assumption 2. Following +the same steps, we can prove the desired results by mathematical induction. The base case ν = 0 +is true by initialization. 
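The Assumption 1 fixed-point equation for σ derived above, σ = Σ_t p_t(1 − (1 − p)^t)/p + σ Σ_t p_t p(1 − p)^{t−1}, can be checked numerically by iterating the recursion until it converges to the stated closed form. The delivery-time pmf (p_t) used below is an arbitrary illustrative choice, not a value from the paper.

```python
# Check the Assumption-1 closed form for sigma by iterating
# sigma <- sum_t p_t (1-(1-p)^t)/p + sigma * sum_t p_t p (1-p)^(t-1).
# The pmf over delivery times is a hypothetical example.

p = 0.3
pmf = {1: 0.2, 2: 0.5, 3: 0.3}     # illustrative delivery-time distribution

A = sum(pt * (1 - (1 - p) ** t) / p for t, pt in pmf.items())
B = sum(pt * p * (1 - p) ** (t - 1) for t, pt in pmf.items())
sigma_closed = A / (1 - B)          # the closed form stated in Lemma 6

sigma = 0.0
for _ in range(200):                # contraction since B < 1
    sigma = A + B * sigma
assert abs(sigma - sigma_closed) < 1e-12
```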
Then, we assume $V^\phi_\nu(\Delta+1) - V^\phi_\nu(\Delta) = \sigma_\nu$ where $\sigma_\nu$ is independent of $\Delta\ge 1$. The estimated value function is updated in the following way.

$$V^\phi_{\nu+1}(\Delta) = C(\Delta,1) - \mathbb{E}[T]\,\theta^\phi + \sum_{t=1}^{t_{\max}} p_t\left(\sum_{k=0}^{t-1} P^t_{\Delta,k}(1)V^\phi_\nu(k) + P^t_{\Delta,\Delta+t}(1)V^\phi_\nu(\Delta+t)\right) + p_{t^+}\left(\sum_{k=0}^{t_{\max}-1} P^{t^+}_{\Delta,k}(1)V^\phi_\nu(k) + P^{t^+}_{\Delta,\Delta+t_{\max}}(1)V^\phi_\nu(\Delta+t_{\max})\right), \quad \Delta\ge 1.$$

Then, we examine whether $V^\phi_{\nu+1}(\Delta+1) - V^\phi_{\nu+1}(\Delta)$ is independent of $\Delta\ge 1$. Leveraging the properties in Lemma 2, we have

$$V^\phi_{\nu+1}(\Delta+1) - V^\phi_{\nu+1}(\Delta) = C(\Delta+1,1) - C(\Delta,1) + \sum_{t=1}^{t_{\max}} p_t P^t_{\Delta,\Delta+t}(1)\,\sigma_\nu + p_{t^+} P^{t^+}_{\Delta,\Delta+t_{\max}}(1)\,\sigma_\nu.$$

Moreover, according to the expressions in Lemma 2, we obtain

$$\sum_{t=1}^{t_{\max}} p_t P^t_{\Delta,\Delta+t}(1) + p_{t^+} P^{t^+}_{\Delta,\Delta+t_{\max}}(1) = \sum_{t=1}^{t_{\max}} p_t\,p(1-p)^{t-1} + p_{t^+}(1-p)^{t_{\max}},$$

which is independent of $\Delta\ge 1$. Leveraging the expression of $C(\Delta,1)$ in Lemma 3, we obtain

$$C(\Delta+1,1) - C(\Delta,1) = \sum_{t=1}^{t_{\max}} p_t\,\frac{1-(1-p)^t}{p} + p_{t^+}\,\frac{1-(1-p)^{t_{\max}}}{p}.$$

We notice that $C(\Delta+1,1) - C(\Delta,1)$ is also independent of $\Delta\ge 1$. Consequently, we can conclude that $V^\phi_{\nu+1}(\Delta+1) - V^\phi_{\nu+1}(\Delta)$ is independent of $\Delta\ge 1$. Then, by mathematical induction, $V^\phi(\Delta+1) - V^\phi(\Delta)$ is independent of $\Delta\ge 1$. We denote the constant by $\sigma$, which satisfies the following equation.

$$\sigma = \sum_{t=1}^{t_{\max}} p_t\,\frac{1-(1-p)^t}{p} + p_{t^+}\,\frac{1-(1-p)^{t_{\max}}}{p} + \left(\sum_{t=1}^{t_{\max}} p_t\,p(1-p)^{t-1} + p_{t^+}(1-p)^{t_{\max}}\right)\sigma.$$

After some algebraic manipulations, we obtain

$$\sigma = \frac{\displaystyle\sum_{t=1}^{t_{\max}} p_t\,\frac{1-(1-p)^t}{p} + p_{t^+}\,\frac{1-(1-p)^{t_{\max}}}{p}}{1 - \left(\displaystyle\sum_{t=1}^{t_{\max}} p_t\,p(1-p)^{t-1} + p_{t^+}(1-p)^{t_{\max}}\right)}.$$

With Lemma 6 in mind, we can continue to the next step.

b) Policy Improvement: Here, we show that the optimal policy resulting from $V^\phi(\Delta)$ and $\theta^\phi$ is the threshold policy with $\tau = 1$. To this end, we define $\delta V^\phi(\Delta) \triangleq V^{\phi,0}(\Delta) - V^{\phi,1}(\Delta)$, where $V^{\phi,a}(\Delta)$ is the value function resulting from taking action $a$ at state $(\Delta, 0, -1)$.
Then, the optimal action at state $(\Delta, 0, -1)$ is $a = 1$ if $\delta V^\phi(\Delta) \ge 0$. Otherwise, $a = 0$ is optimal. Then, we investigate the expression of $\delta V^\phi(\Delta)$. We first notice that, for $\Delta\ge 1$, $V^\phi(\Delta) = V^{\phi,1}(\Delta)$. Then, using Lemma 6, we obtain

$$\begin{aligned}
\delta V^\phi(\Delta) &= \Delta - \theta^\phi + (1-p)V^\phi(\Delta+1) + pV^\phi(0) - V^{\phi,1}(\Delta)\\
&= \Delta - \theta^\phi + (1-p)V^\phi(\Delta+1) + pV^\phi(0) - V^\phi(\Delta)\\
&= \Delta - \theta^\phi + (1-p)\left(V^\phi(\Delta+1) - V^\phi(\Delta)\right) + p\left(V^\phi(0) - V^\phi(\Delta)\right)\\
&= \Delta - 2\theta^\phi + \left[(1-p) - p(\Delta-1)\right]\sigma,
\end{aligned}$$

where $\Delta\ge 1$. We notice that

$$\delta V^\phi(\Delta+1) - \delta V^\phi(\Delta) = 1 - p\sigma.$$

For Assumption 1, plugging in the expression of $\sigma$ yields

$$1 - p\sigma = 1 - \frac{\displaystyle\sum_{t=1}^{t_{\max}}\left(p_t - p_t(1-p)^t\right)}{1 - \displaystyle\sum_{t=1}^{t_{\max}} p_t\,p(1-p)^{t-1}} = \frac{1 - \displaystyle\sum_{t=1}^{t_{\max}} p_t\,p(1-p)^{t-1} - \displaystyle\sum_{t=1}^{t_{\max}}\left(p_t - p_t(1-p)^t\right)}{1 - \displaystyle\sum_{t=1}^{t_{\max}} p_t\,p(1-p)^{t-1}} = \frac{(1-2p)\displaystyle\sum_{t=1}^{t_{\max}} p_t(1-p)^{t-1}}{1 - \displaystyle\sum_{t=1}^{t_{\max}} p_t\,p(1-p)^{t-1}} \ge 0.$$

For Assumption 2, we have

$$1 - p\sigma = 1 - \frac{\displaystyle\sum_{t=1}^{t_{\max}} p_t\left(1-(1-p)^t\right) + p_{t^+}\left(1-(1-p)^{t_{\max}}\right)}{1 - \left(\displaystyle\sum_{t=1}^{t_{\max}} p_t\,p(1-p)^{t-1} + p_{t^+}(1-p)^{t_{\max}}\right)} \ge 1 - \frac{\displaystyle\sum_{t=1}^{t_{\max}} p_t\left(1-(1-p)^t\right) + p_{t^+}\left(1-(1-p)^{t_{\max}}\right)}{1 - \left(\displaystyle\sum_{t=1}^{t_{\max}} p_t(1-p)^t + p_{t^+}(1-p)^{t_{\max}}\right)} = 0.$$

Consequently, when $\Delta\ge 1$, $\delta V^\phi(\Delta+1) \ge \delta V^\phi(\Delta)$ for both assumptions. We notice that $\delta V^\phi(1) = 1 - 2\theta^\phi + (1-p)\sigma$. According to Condition 1, $\theta^\phi = \bar{\Delta}_1 \le \frac{1+(1-p)\sigma}{2}$. Hence, we have

$$\delta V^\phi(1) = 1 - 2\bar{\Delta}_1 + (1-p)\sigma \ge 0.$$

Combining together, we have

$$\delta V^\phi(\Delta) \ge \delta V^\phi(1) \ge 0, \quad \Delta\ge 1.$$

Hence, the optimal action at state $(\Delta, 0, -1)$ where $\Delta\ge 1$ is to initiate the transmission (i.e., $a = 1$). Now, the only missing part is the action at state $(0, 0, -1)$. To determine the action, we recall from Theorem 6 that the new policy will always be no worse than the old one. Meanwhile, by Condition 1, $\bar{\Delta}_1 \le \bar{\Delta}_0$. Hence, the optimal action at state $(0, 0, -1)$ is to stay idle (i.e., $a = 0$). Combining with the optimal actions at other states, we can conclude that the policy improvement step yields the threshold policy with $\tau = 1$.
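The sign condition $1 - p\sigma \ge 0$ that drives the monotonicity argument above can be checked numerically. A minimal sketch under Assumption 1 (every update delivered within $t_{\max}$ slots); the flip probability `p` and delay PMF `pmf` are illustrative placeholders, not values from the paper:

```python
# Check Lemma 6's closed form for sigma (Assumption 1) and the sign of
# delta V(Delta+1) - delta V(Delta) = 1 - p*sigma.

def sigma_assumption1(p, pmf):
    # sigma = [sum_t p_t (1-(1-p)^t)/p] / [1 - sum_t p p_t (1-p)^(t-1)]
    num = sum(pt * (1 - (1 - p) ** t) / p for t, pt in enumerate(pmf, 1))
    den = 1 - sum(p * pt * (1 - p) ** (t - 1) for t, pt in enumerate(pmf, 1))
    return num / den

p = 0.3                # source flip probability (p <= 1/2)
pmf = [0.5, 0.3, 0.2]  # illustrative delay PMF: Pr(T = 1), Pr(T = 2), Pr(T = 3)
sigma = sigma_assumption1(p, pmf)

# Cross-check against the fixed-point relation sigma satisfies in the proof:
# sigma = sum_t p_t [ (1-(1-p)^t)/p + p (1-p)^(t-1) sigma ]
s = 0.0
for _ in range(200):
    s = sum(pt * ((1 - (1 - p) ** t) / p + p * (1 - p) ** (t - 1) * s)
            for t, pt in enumerate(pmf, 1))

print(abs(s - sigma) < 1e-9, 1 - p * sigma >= 0)  # both hold for p <= 1/2
```

For $p \le 1/2$ the factor $(1-2p)$ in the numerator is non-negative, which is exactly why $\delta V^\phi$ is non-decreasing in $\Delta$ and the improvement step lands on the threshold $\tau = 1$.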
Consequently, the policy iteration algorithm converges. Then, according to Theorem 6, the threshold policy with $\tau = 1$ is optimal.

diff --git a/5NE6T4oBgHgl3EQflhFb/content/tmp_files/load_file.txt b/5NE6T4oBgHgl3EQflhFb/content/tmp_files/load_file.txt
new file mode 100644
index 0000000000000000000000000000000000000000..e02e887a9985fa6cc30aec94a247cace17e01b2d
--- /dev/null
+++ b/5NE6T4oBgHgl3EQflhFb/content/tmp_files/load_file.txt
@@ -0,0 +1,1473 @@
filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf,len=1472

Minimizing Age of Incorrect Information over a Channel with Random Delay
Yutao Chen and Anthony Ephremides
Department of Electrical and Computer Engineering, University of Maryland

Abstract

We investigate a transmitter-receiver pair in a slotted-time system. The transmitter observes a dynamic source and sends updates to a remote receiver through a communication channel. We assume that the channel is error-free but suffers a random delay. We consider two more practical cases to facilitate the analysis. In the first case, the update is guaranteed to be delivered within a certain number of time slots.
In the second case, once the transmission time exceeds a predetermined value, the update is immediately discarded, leaving the channel free for a new transmission on demand. The receiver will maintain an estimate of the current state of the dynamic source using the received updates. In this paper, we adopt the Age of Incorrect Information (AoII) as the performance metric and investigate the problem of optimizing the transmitter's action in each time slot to minimize AoII. We first characterize the optimization problem using the Markov decision process and investigate the performance of the threshold policy, under which the transmitter transmits updates only when the AoII exceeds the threshold τ. By delving into the characteristics of the system evolution, we precisely compute the expected AoII achieved by the threshold policy using the Markov chain. Then, we prove that the optimal policy exists and provide a computable relative value iteration algorithm to estimate the optimal policy.
Next, by leveraging the policy improvement theorem, we prove that, under an easy-to-verify condition, the optimal policy is the threshold policy with τ = 1. Finally, numerical results are laid out to highlight the performance of the optimal policy.

I. INTRODUCTION

Communication systems are used in all aspects of our lives and play an increasingly important role. Consequently, communication systems are being asked to play more roles than just disseminating words, sounds, and images. With the widespread deployment of communication systems and the continuous expansion of their purposes, we have to demand higher performance from the communication systems. Meanwhile, we wonder whether traditional metrics such as throughput and latency could continue to meet such demands.

January 18, 2023 DRAFT
arXiv:2301.06150v1 [cs.IT] 15 Jan 2023

One of the major drawbacks of such traditional metrics is that they treat each update equally and ignore that not every update can provide the receiver with equally important information for communication purposes. Because of this, researchers seek to reconsider existing communication paradigms and look for new ones, among which semantic communication is an important attempt. The semantics of information is formally defined in [1] as the significance of the messages relative to the purpose of the data exchange. Then, semantic communication is regarded as "the provisioning of the right piece of information to the right point of computation (or actuation) at the right point in time". Different from the classical metrics in data communication, semantic metrics incorporate the freshness of information, which is becoming increasingly important as real-time monitoring systems are ubiquitous in modern society.
Typically in such systems, a monitor monitors one or more events simultaneously and transmits updates to allow one or more receivers at a distance to have a good knowledge of the events. Therefore, the timeliness of information is often one of the most important performance indicators. The Age of Information (AoI), first introduced in [1], is one of the most successful examples of capturing information freshness. AoI tracks the time elapsed since the generation of the last received update, which results in different treatments for different updates. For example, when the update is significantly fresher, it will be more important and worth the extra resources to transmit. Let V(t) be the generation time of the last update received up to time t. Then, AoI at time t is defined by ∆AoI(t) = t − V(t). After the introduction, AoI has attracted extensive attention [2]–[5].
However, AoI assumes that the age of each update always increases over time, ignoring the information content of the update. Such neglect is not always desirable. For example, in a remote monitoring system, the updates that provide the remote monitor with accurate information about the source process it is interested in should be considered fresh, even if the update was generated earlier. This limitation leads to its poor performance in the problem of remote estimation. For example, suppose we want to estimate a rapidly changing event remotely. In this case, a small AoI does not necessarily mean that the receiver has accurate information about the event. Likewise, receivers can make relatively accurate estimates without timely information when events change slowly.
Inspired by the above limitation, the Age of Incorrect Information (AoII) is introduced in [6], which combines the timeliness of updates and the information content they convey. More specifically, AoII combines the degree of information mismatch between the receiver and the source and the aging process of mismatched information. According to the definition given in [6], AoII captures the aging process of conflicting information through a time penalty function that quantifies the time elapsed since the last time the receiver has the perfect information about the source. The mismatch between the receiver's information and the source is captured by the information penalty function, which quantifies the degree of information mismatch between the two. Because of the flexibility of the penalty functions, AoII can be adapted to various systems and communication objectives by choosing different penalty functions. Since the introduction of AoII, many works have been done to reveal its fundamental nature and performance in various communication systems.
AoII minimization under resource constraints is investigated first. In [6], the authors investigate the minimization of AoII when there is a limit on the average number of transmissions allowed. Then, the authors extend the results to the case of the generic time penalty function in [7]. However, in both papers, the measure of information mismatch is binary, either true or false. In [8], the authors investigate a similar system setting, but the AoII considers the quantified information mismatch between the source and the receiver. AoII in the context of scheduling is another critical problem. In scheduling problems, a base station observes multiple events and needs to select a part of the users to update. Under these general settings, [9] investigates the problem of minimizing AoII when the channel state information is available and the time penalty function is generic. The authors of [10] consider a similar system, but the base station cannot know the states of the events before the transmission decision is made.
In real-life applications, we usually have no knowledge of the statistical model of the source process. Therefore, the authors in [11] investigate the problem of minimizing AoII for an unknown Markovian source. The relationship between the estimation error and AoII is studied in [12]. Moreover, a variant of AoII, the Age of Incorrect Estimates, is introduced and studied in [13]. Communication channels usually suffer random delays due to various influences in real-life applications. Under this system setup, the authors of [14] compare the performances of AoII, AoI, and real-time error through extensive numerical simulations. This paper considers a similar system setup, but we investigate the problem from a theoretical perspective.
We accurately calculate the expected AoII achieved by some canonical policies, which enables us to solve the problem of minimizing AoII over a channel with random delay. Communication channels with a random delay have also been studied in the context of remote estimation and AoI [15]–[18]. However, the problem considered in this paper is very different, as AoII is a combination of age-based metrics frameworks and error-based metrics frameworks. The main contributions of this paper can be summarized as follows. 1) We investigate the AoII minimization problem in a system where the communication channel suffers a random delay and characterize the optimization problem using the Markov decision process. 2) We analyze the characteristics of the threshold policy, under which the transmitter initiates transmission only when AoII exceeds the threshold, and calculate the expected AoII achieved by the threshold policy precisely.
3) We prove the existence of the optimal policy and introduce a computable value iteration algorithm to estimate the optimal policy. 4) We theoretically find the optimal policy using the policy improvement theorem. The remainder of this paper is organized in the following way. We introduce the system model and the optimization problem in Section II. Then, Section III characterizes the problem using the Markov decision process. In Section IV-C, we theoretically analyze and calculate the expected AoII achieved by the threshold policy. Then, we show the existence of the optimal policy, provide the value iteration algorithm to estimate the optimal policy, and theoretically find the optimal policy using the policy improvement theorem in Section V. Finally, Section VI concludes the paper with numerical results that highlight the performance of the optimal policy.
II. SYSTEM OVERVIEW

A. System Model

We consider a slotted-time system, where a transmitter observes a dynamic source and needs to decide when to send updates to a remote receiver so that the receiver can have a good knowledge of the current state of the dynamic source. The dynamic source is modeled by a two-state symmetric Markov chain with state transition probability p. The transmitter receives an update from the dynamic source at the beginning of each time slot. The update at time slot k is denoted by X_k. The old update will be discarded upon the arrival of a new one. Then, the transmitter will decide whether to transmit the new update based on the current system status. When the channel is idle, the transmitter chooses between transmitting the new update and staying idle.
When the channel is busy, the transmitter cannot do anything other than stay idle. The updates will be transmitted through an error-free communication channel that suffers a random delay. In other words, the update will not be corrupted during the transmission, but each transmission will take a random amount of time T ∈ N*. We denote by p_t ≜ Pr(T = t) the probability mass function (PMF) and assume T is independent and identically distributed. When a transmission finishes, the communication channel is immediately available for the subsequent transmission.

[Fig. 1: An illustration of the system model, where X_k and X̂_k are the state of the dynamic source and the receiver's estimate at time slot k, respectively. The figure shows the two-state source (states 0 and 1, flip probability p), the transmitter (transmit or stay idle), the delay channel delivering X_{k−T}, and the receiver (estimate and feedback).]

The receiver maintains an estimate of the current state of the dynamic source and modifies its estimate every time a new update is received. We denote by X̂_k the receiver's estimate at time slot k. According to [18], the best estimator when p ≤ 1/2 is the last received update. When p > 1/2, the optimal estimator depends on the realization of transmission time. In this paper, we consider only the case of p ≤ 1/2. Hence, the receiver uses the last received update as the estimate. For the case of p > 1/2, the results can be extended using the corresponding best estimator. The receiver uses ACK/NACK packets to inform the transmitter of its reception of the new update.
As is assumed in [6], the transmitter receives the ACK/NACK packets reliably and instantaneously because the packets are generally very small compared to the size of the status updates. When ACK is received, the transmitter knows that the receiver's estimate changes to the last sent update. When NACK is received, the transmitter knows that the receiver's estimate does not change. In this way, the transmitter always knows the current estimate on the receiver side. An illustration of the system model is shown in Fig. 1. At the beginning of time slot k, the transmitter receives the update X_k from the dynamic source. Then, the transmitter decides whether to transmit this update based on the system status. When the transmitter decides not to start transmission, it will stay idle. Otherwise, the transmitter will transmit the update through the communication channel, where the transmission of the update takes a random amount of time. Thus, the update received by the receiver has a delay of several time slots (i.e., X_{k−T}). Then, the receiver will modify its estimate X̂_k based on the received update and send an ACK packet to inform the transmitter of its reception of the update.

B. Age of Incorrect Information

The system adopts the Age of Incorrect Information (AoII) as the performance metric. We first define U_k as the last time slot up to time slot k in which the receiver's estimate is correct. Mathematically, U_k ≜ max{h : h ≤ k, X_h = X̂_h}.
Then, in a slotted-time system, the AoII at time slot k can be written as

∆_AoII(X_k, X̂_k, k) = Σ_{h=U_k+1}^{k} g(X_h, X̂_h) F(h − U_k),   (1)

where g(X_k, X̂_k) is the information penalty function and F(k) ≜ f(k) − f(k−1), with f(k) the time penalty function. In this paper, we choose g(X_k, X̂_k) = |X_k − X̂_k| and f(k) = k. Hence, F(k) = 1 and g(X_k, X̂_k) ∈ {0, 1}, as the dynamic source has two states. Then, equation (1) simplifies to ∆_AoII(X_k, X̂_k, k) = k − U_k ≜ ∆_k. From the simplified expression we can easily conclude that, under the chosen penalty functions, the AoII increases at a rate of 1 per time slot when the receiver's estimate is incorrect; otherwise, the AoII is 0. Next, we characterize the evolution of ∆_k. To this end, we divide the evolution into the following cases.
When X_{k+1} = X̂_{k+1}, we have U_{k+1} = k + 1, so by definition ∆_{k+1} = 0. When X_{k+1} ≠ X̂_{k+1}, we have U_{k+1} = U_k, so by definition ∆_{k+1} = k + 1 − U_k = ∆_k + 1. Combining the two cases, we have

∆_{k+1} = 1{X_{k+1} ≠ X̂_{k+1}} (∆_k + 1),   (2)

where 1{A} is the indicator function, whose value is one when event A occurs and zero otherwise. A sample path of ∆_k is shown in Fig. 2. Now that the evolution of the AoII has been clarified, we further discuss the system's evolution.

C. System Dynamics

In this subsection, we tackle the system dynamics, which plays a key role in later sections.
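The recursion in (2) is straightforward to evaluate. The following sketch (our own illustration, not code from the paper) computes the AoII sequence from given source and estimate trajectories; the example trajectories reproduce the sample path listed in Fig. 2 of the paper, and the initial-slot convention (∆ = 0 when the first estimate is correct) is our assumption.

```python
# Minimal sketch (our own, not from the paper): evolve the AoII Delta_k via
# recursion (2), given source states X[k] and receiver estimates X_hat[k].
def aoii_path(X, X_hat):
    K = len(X)
    delta = [0] * K
    # Convention (our assumption): slot 0 correct => Delta = 0, else 1.
    delta[0] = 0 if X[0] == X_hat[0] else 1
    for k in range(K - 1):
        # Delta_{k+1} = 1{X_{k+1} != X_hat_{k+1}} * (Delta_k + 1)
        delta[k + 1] = (delta[k] + 1) if X[k + 1] != X_hat[k + 1] else 0
    return delta

# Trajectories matching the sample path of Fig. 2 (X_1..X_11, X_hat_1..X_hat_11).
X     = [1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0]
X_hat = [1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1]
print(aoii_path(X, X_hat))  # -> [0, 1, 0, 1, 0, 1, 2, 3, 0, 1, 2]
```

The AoII resets to zero whenever the estimate becomes correct and otherwise climbs by one per slot, exactly as the simplified expression ∆_k = k − U_k dictates.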
We notice that the system's status at the beginning of time slot k can be fully captured by the triplet s_k ≜ (∆_k, t_k, i_k), where t_k ∈ N_0 indicates how long the current transmission has been in progress; we define t_k = 0 if there is no transmission in progress. i_k ∈ {−1, 0, 1} indicates the state of the channel: i_k = −1 when the channel is idle, i_k = 0 if the channel is busy and the transmitting update is the same as the receiver's current estimate, and i_k = 1 when the transmitting update is different from the receiver's current estimate.

Fig. 2: A sample path of ∆_k, where T_i and D_i are the transmission start time and the delivery time of the i-th update, respectively.
At T_1, the transmitted update is X_3. Note that the transmission decisions in the plot are taken randomly.

Remark 1. According to the definitions of t_k and i_k, i_k = −1 if and only if t_k = 0. In this case, the channel is idle.

Then, characterizing the system dynamics is equivalent to characterizing the value of s_{k+1} using s_k and the transmitter's action. We denote the transmitter's decision by a_k ∈ {0, 1}, where a_k = 0 when the transmitter decides not to initiate a transmission and a_k = 1 otherwise. Hence, the system dynamics can be fully characterized by P_{s_k,s_{k+1}}(a_k), defined as the probability that action a_k at s_k leads to s_{k+1}.
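To make the bookkeeping concrete, here is a tiny sketch of the state triplet and the transmitter's available decisions. The names `State` and `feasible_actions` are ours, not the paper's; the restriction that a transmission may only be initiated on an idle channel follows Remark 1 and the action space formalized in Section III.

```python
from typing import NamedTuple

# Hypothetical helper (our names): the system status s_k = (Delta_k, t_k, i_k).
class State(NamedTuple):
    delta: int  # AoII Delta_k >= 0
    t: int      # slots the current transmission has been in progress (0 if none)
    i: int      # channel: -1 idle; 0 busy, update equals estimate; 1 busy, differs

def feasible_actions(s: State):
    # Per Remark 1, i == -1 iff t == 0; only then may a transmission start.
    return (0, 1) if s.i == -1 else (0,)

print(feasible_actions(State(delta=3, t=0, i=-1)))  # -> (0, 1)
print(feasible_actions(State(delta=3, t=2, i=1)))   # -> (0,)
```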
We will revisit P_{s_k,s_{k+1}}(a_k) with an in-depth discussion in future sections.

D. Problem Formulation

We define a policy φ as the one that specifies the transmitter's decision in each time slot. This paper aims to find the policy that minimizes the expected AoII of the system. Mathematically, the problem can be formulated as the following optimization problem:

arg min_{φ∈Φ} lim_{K→∞} (1/K) E_φ[ Σ_{k=0}^{K−1} ∆_k ],   (3)

where E_φ is the conditional expectation given that policy φ is adopted, and Φ is the set of all admissible policies.

Definition 1 (Optimal policy). A policy is said to be optimal if it yields the minimal expected AoII.

In the next section, we will characterize the problem reported in (3) using a Markov Decision Process (MDP).
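As a sanity check on the objective in (3), the sketch below (our own code, not from the paper) estimates the time-average AoII by Monte Carlo for the trivial policy that never transmits. With the estimate frozen at its initial value, a short calculation for the symmetric two-state source (our derivation, not stated in the paper) gives a limiting average AoII of 1/(2p).

```python
import random

def average_aoii_never_transmit(p, K, seed=0):
    """Time-average AoII over K slots when the transmitter never transmits,
    so the receiver's estimate stays frozen at its initial value."""
    rng = random.Random(seed)
    x, x_hat, delta, total = 0, 0, 0, 0
    for _ in range(K):
        if rng.random() < p:                      # symmetric source flips w.p. p
            x ^= 1
        delta = (delta + 1) if x != x_hat else 0  # AoII recursion (2)
        total += delta
    return total / K

# For p = 0.3 the analytical limit is 1/(2*0.3) ~= 1.67.
print(average_aoii_never_transmit(p=0.3, K=200_000))
```

The long-run estimate concentrates near 1/(2p), illustrating that some transmission policy is needed to keep the objective in (3) small.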
III. MARKOV DECISION PROCESS CHARACTERIZATION

The minimization problem reported in (3) can be characterized by an infinite-horizon average-cost MDP M, which consists of the following components.

• The state space S. The state s = (∆, t, i) is the triplet defined in Section II-C without the time stamp. For the remainder of this paper, we will use s and (∆, t, i) to represent the state interchangeably.

• The action space A. When i = −1, the feasible action is a ∈ {0, 1}, where a = 0 if the transmitter decides not to initiate a new transmission and a = 1 otherwise. When i ≠ −1, the only feasible action is a = 0.

• The state transition probability P.
The probability that action a at state s leads to state s′ is denoted by P_{s,s′}(a), whose value will be discussed in the following subsection.

• The immediate cost C. The immediate cost for being at state s is C(s) = ∆.

Let V(s) be the value function of state s ∈ S. It is well known that the value function satisfies the Bellman equation [19]:

V(s) + θ = min_{a∈A} { C(s) + Σ_{s′∈S} P_{s,s′}(a) V(s′) },  s ∈ S,   (4)

where θ is the expected AoII achieved by the optimal policy. We will write V(s) as V(∆, t, i) in some parts of this paper to better distinguish between states. The state transition probability is essential for solving the Bellman equation. Hence, we delve into P_{s,s′}(a) in the following subsection.
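Once the state space is truncated to a finite set, equation (4) can be solved numerically by relative value iteration. The sketch below is our own generic implementation; the two-state toy MDP used to exercise it is also ours and is not the AoII chain from the paper.

```python
# Generic relative value iteration for an average-cost MDP, solving the
# Bellman equation (4) on a finite state space (our own sketch).
def relative_value_iteration(states, actions, P, C, tol=1e-9, max_iter=100_000):
    """P[(s, a)] is a dict {s_next: prob}; C[s] is the immediate cost.
    Returns (theta, V) with V normalized so that V[states[0]] = 0."""
    ref = states[0]
    V = {s: 0.0 for s in states}
    for _ in range(max_iter):
        Q = {s: min(C[s] + sum(p * V[s2] for s2, p in P[(s, a)].items())
                    for a in actions(s))
             for s in states}
        theta = Q[ref]                        # average cost via reference state
        V_new = {s: Q[s] - theta for s in states}
        if max(abs(V_new[s] - V[s]) for s in states) < tol:
            return theta, V_new
        V = V_new
    return theta, V

# Toy 2-state example (ours): staying in state 1 costs 1; action 1 from
# state 1 jumps back to state 0. Optimal average cost is 1/3.
states = [0, 1]
actions = lambda s: [0, 1]
P = {(0, 0): {0: 0.5, 1: 0.5}, (0, 1): {0: 0.5, 1: 0.5},
     (1, 0): {1: 1.0},         (1, 1): {0: 1.0}}
C = {0: 0.0, 1: 1.0}
theta, V = relative_value_iteration(states, actions, P, C)
print(round(theta, 6))  # -> 0.333333
```

The same routine applies to a truncated AoII state space once the transition probabilities of the next subsection are tabulated.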
A. State Transition Probability

We recall that P_{s,s′}(a) is the probability that action a at state s leads to state s′. To make it easier to follow, we first characterize separately the transitions of the three elements that make up the state s.

• ∆′ can be 0 or ∆ + 1, depending on whether the receiver's estimate at state s′ is correct. The specific evolution is given by (2).

• t′ can be t + 1 or 0, depending on whether there is a transmission in progress at state s′.

• i′ = −1 if and only if t′ = 0. Otherwise, i′ can be 0 or 1, depending on whether the transmitting update is the same as the receiver's estimate at state s′.

With the individual transitions in hand, we proceed to discuss their combined transitions and the corresponding probabilities.
To this end, we define Pr(T > t + 1 | t) as the probability that the current transmission takes more than t + 1 time slots, given that it has been in progress for t time slots. Hence,

Pr(T > t + 1 | t) = (1 − Pr(T ≤ t + 1)) / Pr(T > t) = (1 − P_{t+1}) / (1 − P_t),

where P_t ≜ Σ_{k=1}^{t} p_k. Leveraging the individual transitions and Pr(T > t + 1 | t), P_{s,s′}(a) can be obtained easily. For the sake of space, the complete state transition probabilities are provided in Appendix A.

We notice that we have imposed no restrictions on the update transmission time, which would make the theoretical analysis very difficult and would also allow long channel occupancy by a single update. Therefore, to ease the theoretical analysis and to be closer to practice, we consider the following two independent assumptions¹.

• Assumption 1: We assume that the update is always delivered and the transmission lasts at most t_max time slots.
More precisely, we assume 1 ≤ T ≤ t_max and

Σ_{t=1}^{t_max} p_t = 1,  p_t ≥ 0,  1 ≤ t ≤ t_max.

In practice, we can make the probability of the transmission time exceeding t_max negligible by choosing a sufficiently large t_max.

• Assumption 2: We assume the transmission can last for a maximum of t_max time slots. At the end of the t_max-th time slot, the update is discarded if not delivered, and the channel becomes available for a new transmission immediately. We define p_{t+} ≜ Σ_{t=t_max+1}^{∞} p_t as the probability that the update is discarded. In practice, similar techniques, such as time-to-live (TTL) [20], are used to prevent an update from occupying the channel for too long.

¹The results presented in this paper apply to both assumptions unless stated otherwise.

Remark 2.
t_max is a predetermined system parameter and is not a parameter to be optimized. When t_max = 1, the system reduces to the one considered in [6], for which the optimal policy is to transmit a new update whenever possible. Therefore, in the rest of this paper, we focus on the case t_max > 1.

Under both assumptions, the transmission lasts at most t_max time slots, and the channel becomes immediately available for a new transmission when the current transmission finishes. Hence, the state space S is reduced, as t is now bounded by 1 ≤ t ≤ t_max − 1. Moreover, the state transition probabilities in Appendix A are adjusted as follows. Under Assumption 1, updates are bound to be delivered after t_max time slots; hence, Pr(T > t + 1 | t) = 0 for t ≥ t_max − 1.
Under Assumption 2, updates are discarded at the end of the t_max-th time slot if not delivered; hence, s′ = (∆′, t_max, i′) is replaced by s′ = (∆′, 0, −1).

Having clarified the state transition probabilities, we evaluate a canonical policy in terms of the achieved expected AoII in the next section.

IV. POLICY PERFORMANCE ANALYSIS

As is proved in [6]–[8], the AoII-optimal policy often has a threshold structure. Hence, we consider the threshold policy.

Definition 2 (Threshold policy). Under threshold policy τ, the transmitter initiates a transmission only when the current AoII is no less than the threshold τ ∈ N_0 and the channel is idle. Let a_τ(s) be the action at state s suggested by the threshold policy τ.
Then, a_τ(s) = 1{∆ ≥ τ and i = −1}.

Remark 3. We define τ ≜ ∞ as the policy under which the transmitter never initiates any transmission.

We notice that the system dynamics under a threshold policy can be characterized by a discrete-time Markov chain (DTMC). Without loss of generality, we assume the DTMC starts at state (0, 0, −1). Then, the state space of the Markov chain, S^{MC}, consists of all the states accessible from state (0, 0, −1). Since state (0, 0, −1) is positive recurrent and communicates with each state s ∈ S^{MC}, the stationary distribution exists. Let π_s be the steady-state probability of state s. Then, π_s satisfies the following balance equation.
π_s = Σ_{s′∈S^{MC}} P_{s′,s}(a) π_{s′},  s ∈ S^{MC},

where P_{s′,s}(a) is the single-step state transition probability as defined in Section III, and the action a depends on the threshold policy. Then, the first step in calculating the expected AoII achieved by the threshold policy is to calculate the stationary distribution of the induced DTMC. However, a problem arises because the state space S^{MC} is infinite and intertwined. To simplify the state transitions, we recall that the transmitter can only stay idle (i.e., a = 0) when the channel is busy. Let S^{MC}_{−1} = {s = (∆, t, i) : i ≠ −1} be the set of states in which the channel is busy. Then, for s′ ∈ S^{MC}_{−1}, P_{s′,s}(a) = P_{s′,s}(0) is independent of the threshold policy.
Hence, for any threshold policy and each s ∈ S \ S^{MC}_{−1}, we can repeatedly replace π_{s′}, where s′ ∈ S^{MC}_{−1}, with the corresponding balance equation until we obtain

π_s = Σ_{s′ ∈ S \ S^{MC}_{−1}} P_{∆′,∆}(a) π_{s′},  s ∈ S \ S^{MC}_{−1},   (5)

where P_{∆′,∆}(a) is the multi-step state transition probability from state s′ = (∆′, 0, −1) to state s = (∆, 0, −1) under action a. For simplicity, we write (5) as

π_∆ = Σ_{∆′≥0} P_{∆′,∆}(a) π_{∆′},  ∆ ≥ 0.   (6)

As we will see in the following subsections, π_∆ is sufficient to calculate the expected AoII obtained by any threshold policy. In the next subsection, we derive the expression of P_{∆,∆′}(a).

A. Multi-step State Transition Probability

We start with the case a = 0. In this case, no update is transmitted, and P_{∆,∆′}(0) is independent of the transmission delay.
Then, according to Appendix A,

P_{0,∆′}(0) = 1 − p if ∆′ = 0, and p if ∆′ = 1,

and, for ∆ > 0,

P_{∆,∆′}(0) = p if ∆′ = 0, and 1 − p if ∆′ = ∆ + 1.

In the sequel, we focus on the case a = 1. We define P^t_{∆,∆′}(a) as the probability that action a at state s = (∆, 0, −1) leads to state s′ = (∆′, 0, −1), given that the transmission takes t time slots. Then, under Assumption 1,

P_{∆,∆′}(1) = Σ_{t=1}^{t_max} p_t P^t_{∆,∆′}(1).

Hence, it is sufficient to obtain the expressions of P^t_{∆,∆′}(1). To this end, we define p(t) as the probability that the dynamic source remains in the same state after t time slots. Since the Markov chain is symmetric, p(t) is independent of the state and can be calculated as

p(t) = [ [1−p, p; p, 1−p]^t ]_{11},

where the subscript indicates the row and column numbers of the target probability. For consistency of notation, we define p(0) ≜ 1.
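The matrix power defining p(t) is simple to evaluate directly. The sketch below (our own code) does so and cross-checks the result against the closed form (1 + (1 − 2p)^t)/2, which follows from the eigenvalues 1 and 1 − 2p of the symmetric transition matrix — an observation of ours, not stated in the paper.

```python
# p(t): probability that the symmetric two-state source is back in (or still
# in) its current state after t slots, i.e. entry (1,1) of the t-th power of
# the transition matrix [[1-p, p], [p, 1-p]].
def p_same(p, t):
    if t == 0:
        return 1.0           # convention: p(0) = 1
    m = [[1 - p, p], [p, 1 - p]]
    acc = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(t):       # naive repeated multiplication; fine for small t
        acc = [[sum(acc[r][k] * m[k][c] for k in range(2)) for c in range(2)]
               for r in range(2)]
    return acc[0][0]

# Cross-check against the spectral closed form (our observation).
p = 0.3
for t in range(6):
    assert abs(p_same(p, t) - (1 + (1 - 2 * p) ** t) / 2) < 1e-12
print(round(p_same(p, 2), 10))  # -> 0.58
```

For larger t the closed form is the cheaper route; the matrix power is shown because it is the form the paper states.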
Then, we have the following lemma.

Lemma 1. Under Assumption 1,

P_{∆,∆′}(1) = Σ_{t=1}^{t_max} p_t P^t_{∆,∆′}(1),   (7)

where

P^t_{0,∆′}(1) =
  p(t),                        ∆′ = 0,
  p(t−k) p (1−p)^{k−1},        1 ≤ ∆′ = k ≤ t,
  0,                           otherwise,

and, for ∆ > 0,

P^t_{∆,∆′}(1) =
  p(t),                        ∆′ = 0,
  (1 − p(t−1))(1 − p),         ∆′ = 1,
  (1 − p(t−k)) p² (1−p)^{k−2},  2 ≤ ∆′ = k ≤ t − 1,
  p (1−p)^{t−1},               ∆′ = ∆ + t,
  0,                           otherwise.

Under Assumption 1, equation (7) can be written equivalently as

P_{∆,∆′}(1) =
  Σ_{t=∆′}^{t_max} p_t P^t_{∆,∆′}(1),                            0 ≤ ∆′ ≤ t_max − 1, ∆ ≥ ∆′,
  Σ_{t=∆′}^{t_max} p_t P^t_{∆,∆′}(1) + p_{t′} P^{t′}_{∆,∆′}(1),  0 ≤ ∆′ ≤ t_max − 1, ∆ < ∆′,
  p_{t′} P^{t′}_{∆,∆′}(1),                                       ∆′ ≥ t_max,

where t′ ≜ ∆′ − ∆ and P^{t′}_{∆,∆′}(1) ≜ 0 when t′ ≤ 0 or t′ > t_max. Meanwhile, P_{∆,∆′}(1) possesses the following properties.

1) P_{∆,∆′}(1) is independent of ∆ when 0 ≤ ∆′ ≤ t_max − 1 and ∆ ≥ ∆′.

2) P_{∆,∆′}(1) = P_{∆+δ,∆′+δ}(1) when ∆′ ≥ t_max and ∆ ≥ 0, for any δ ≥ 1.
3) P_{∆,∆′}(1) = 0 when ∆′ > ∆ + t_{max} or when t_{max} − 1 < ∆′ < ∆ + 1.

Proof. The expression of P^t_{∆,∆′}(1) is obtained by analyzing the system dynamics. The complete proof can be found in Appendix B.

The state transition probabilities under Assumption 2 can be obtained similarly. To this end, we define P^{t+}_{∆,∆′}(a) as the probability that action a at state s = (∆, 0, −1) will result in state s′ = (∆′, 0, −1), given that the transmission is terminated. Then, we have the following lemma.

Lemma 2.
Under Assumption 2,
P_{∆,∆′}(1) = \sum_{t=1}^{t_{max}} p_t P^t_{∆,∆′}(1) + p_{t+} P^{t+}_{∆,∆′}(1), (8)
where
P^t_{0,∆′}(1) = \begin{cases} p(t) & ∆′ = 0, \\ p(t-k)\, p\, (1-p)^{k-1} & 1 ≤ ∆′ = k ≤ t, \\ 0 & \text{otherwise}, \end{cases}
P^{t+}_{0,∆′}(1) = P^{t_{max}}_{0,∆′}(1),
and, for ∆ > 0,
P^t_{∆,∆′}(1) = \begin{cases} p(t) & ∆′ = 0, \\ (1 - p(t-1))(1 - p) & ∆′ = 1, \\ (1 - p(t-k))\, p^2 (1-p)^{k-2} & 2 ≤ ∆′ = k ≤ t - 1, \\ p(1-p)^{t-1} & ∆′ = ∆ + t, \\ 0 & \text{otherwise}, \end{cases}
P^{t+}_{∆,∆′}(1) = \begin{cases} 1 - p(t_{max}) & ∆′ = 0, \\ (1 - p(t_{max}-k))\, p\, (1-p)^{k-1} & 1 ≤ ∆′ = k ≤ t_{max} - 1, \\ (1-p)^{t_{max}} & ∆′ = ∆ + t_{max}, \\ 0 & \text{otherwise}. \end{cases}
Under Assumption 2, equation (8) can be written equivalently as
P_{∆,∆′}(1) = \begin{cases} \sum_{t=∆′}^{t_{max}} p_t P^t_{∆,∆′}(1) + p_{t+} P^{t+}_{∆,∆′}(1) & 0 ≤ ∆′ ≤ t_{max} - 1,\ ∆ ≥ ∆′, \\ \sum_{t=∆′}^{t_{max}} p_t P^t_{∆,∆′}(1) + p_{t′} P^{t′}_{∆,∆′}(1) + p_{t+} P^{t+}_{∆,∆′}(1) & 0 ≤ ∆′ ≤ t_{max} - 1,\ ∆ < ∆′, \\ p_{t′} P^{t′}_{∆,∆′}(1) + p_{t+} P^{t+}_{∆,∆′}(1) & ∆′ ≥ t_{max}, \\ 0 & \text{otherwise}. \end{cases}

Meanwhile, P_{∆,∆′}(1) possesses the following properties.
1) P_{∆,∆′}(1) is independent of ∆ when 0 ≤ ∆′ ≤ t_{max} − 1 and ∆ ≥ max{1, ∆′}.
2) P_{∆,∆′}(1) = P_{∆+δ,∆′+δ}(1) when ∆′ ≥ t_{max} and ∆ > 0 for any δ ≥ 1.
3) P_{∆,∆′}(1) = 0 when ∆′ > ∆ + t_{max} or when t_{max} − 1 < ∆′ < ∆ + 1.

Proof. The proof follows similar steps as presented in the proof of Lemma 1. The complete proof can be found in Appendix C.
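The kernels in Lemmas 1 and 2 are probability distributions over ∆′, so each row must sum to one. Below is a small verification sketch of ours (not from the paper); it uses the spectral closed form p(t) = (1 + (1 − 2p)^t)/2, which is our own shortcut for the matrix power.

```python
def p_same(t, p):
    # p(t) via the spectral closed form (eigenvalues 1 and 1 - 2p)
    return (1.0 + (1.0 - 2.0 * p) ** t) / 2.0

def P_t(delta, dp, t, p):
    """P^t_{Delta,Delta'}(1) of Lemma 1: delivery takes exactly t slots."""
    if delta == 0:
        if dp == 0:
            return p_same(t, p)
        if 1 <= dp <= t:
            return p_same(t - dp, p) * p * (1 - p) ** (dp - 1)
        return 0.0
    if dp == 0:
        return p_same(t, p)
    if dp == delta + t:
        return p * (1 - p) ** (t - 1)
    if dp == 1:
        return (1 - p_same(t - 1, p)) * (1 - p)
    if 2 <= dp <= t - 1:
        return (1 - p_same(t - dp, p)) * p * p * (1 - p) ** (dp - 2)
    return 0.0

def P_tplus(delta, dp, tmax, p):
    """P^{t+}_{Delta,Delta'}(1) of Lemma 2: dropped after tmax slots."""
    if delta == 0:
        return P_t(0, dp, tmax, p)
    if dp == 0:
        return 1 - p_same(tmax, p)
    if dp == delta + tmax:
        return (1 - p) ** tmax
    if 1 <= dp <= tmax - 1:
        return (1 - p_same(tmax - dp, p)) * p * (1 - p) ** (dp - 1)
    return 0.0

p, tmax = 0.3, 4
for delta in (0, 1, 5):
    for t in range(1, tmax + 1):
        assert abs(sum(P_t(delta, dp, t, p)
                       for dp in range(delta + t + 1)) - 1.0) < 1e-12
    assert abs(sum(P_tplus(delta, dp, tmax, p)
                   for dp in range(delta + tmax + 1)) - 1.0) < 1e-12
```

The loop bounds use property 3): all mass lies in 0 ≤ ∆′ ≤ ∆ + t, so summing that window suffices.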
As the expressions and properties of P_{∆,∆′}(a) under both assumptions are clarified, we solve for π_∆ in the next subsection.

B. Stationary Distribution

Let E[T] be the expected transmission time of an update. Since the channel remains idle if no transmission is initiated and the expected transmission time of an update is E[T], π_∆ satisfies the following equation:
\sum_{∆=0}^{τ-1} π_∆ + E[T] \sum_{∆=τ}^{∞} π_∆ = 1, (9)
where E[T] = \sum_{t=1}^{t_{max}} t p_t under Assumption 1 and E[T] = \sum_{t=1}^{t_{max}} t p_t + t_{max} p_{t+} under Assumption 2.

We notice that there are still infinitely many π_∆ to calculate. To overcome the infinity, we recall that, under the threshold policy, the suggested action is a = 1 for all states (∆, 0, −1) with ∆ ≥ τ. Hence, we define Π ≜ \sum_{∆=ω}^{∞} π_∆, where ω ≜ t_{max} + τ + 1.
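Equation (9) is a renewal-type time normalization: a decision epoch with ∆ < τ occupies one slot, while one with ∆ ≥ τ keeps the channel busy for E[T] slots on average, so epoch rates weighted by their durations must cover all time. The simulation below illustrates this with a deliberately simplified stand-in for the AoII transition kernel (hypothetical choices on our part; only the timing structure matters for (9)).

```python
import random

random.seed(0)
p, tau = 0.3, 3
pt = {1: 0.5, 2: 0.3, 3: 0.2}             # transmission-time pmf p_t
ET = sum(t * q for t, q in pt.items())    # E[T] under Assumption 1

epochs, slots, delta = {}, 0, 0
for _ in range(200_000):                  # decision epochs
    epochs[delta] = epochs.get(delta, 0) + 1
    if delta < tau:                       # idle: one slot elapses
        slots += 1
        delta = 0 if random.random() < p else delta + 1
    else:                                 # transmit: channel busy T slots
        T = random.choices(list(pt), weights=list(pt.values()))[0]
        slots += T
        # toy post-transmission kernel (hypothetical stand-in)
        delta = 0 if random.random() < p else delta + T

pi = {d: c / slots for d, c in epochs.items()}   # epoch rate per slot
lhs = (sum(v for d, v in pi.items() if d < tau)
       + ET * sum(v for d, v in pi.items() if d >= tau))
assert abs(lhs - 1.0) < 0.02              # normalization (9)
```

Reading π_∆ as the rate of decision epochs at (∆, 0, −1) per time slot, the identity holds for any kernel with this idle/busy timing, which is why the toy dynamics suffice for illustration.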
As we will see in the following subsections, Π and π_∆ for 0 ≤ ∆ ≤ ω − 1 are sufficient for calculating the expected AoII achieved by the threshold policy. With Π in mind, we have the following theorem.

Theorem 1. For 0 < τ < ∞, Π and π_∆ for 0 ≤ ∆ ≤ ω − 1 are the solution to the following system of linear equations:
π_0 = (1 - p)π_0 + p \sum_{i=1}^{τ-1} π_i + P_{1,0}(1) \left( \sum_{i=τ}^{ω-1} π_i + Π \right).
π_1 = pπ_0 + P_{1,1}(1) \left( \sum_{i=τ}^{ω-1} π_i + Π \right).
Π = \sum_{i=τ+1}^{ω-1} \left( \sum_{k=τ+1}^{i} P_{i,t_{max}+k}(1) \right) π_i + \sum_{i=1}^{t_{max}} \left( P_{ω,ω+i}(1) \right) Π.
\sum_{i=0}^{τ-1} π_i + E[T] \left( \sum_{i=τ}^{ω-1} π_i + Π \right) = 1.
For each 2 ≤ ∆ ≤ t_{max} − 1,
π_∆ = \begin{cases} (1 - p)π_{∆-1} + P_{τ,∆}(1) \left( \sum_{i=τ}^{ω-1} π_i + Π \right) & ∆ - 1 < τ, \\ \sum_{i=τ}^{∆-1} P_{i,∆}(1) π_i + P_{∆,∆}(1) \left( \sum_{i=∆}^{ω-1} π_i + Π \right) & ∆ - 1 ≥ τ. \end{cases}
For each t_{max} ≤ ∆ ≤ ω − 1,
π_∆ = \begin{cases} (1 - p)π_{∆-1} & ∆ - 1 < τ, \\ \sum_{i=τ}^{∆-1} P_{i,∆}(1) π_i & ∆ - 1 ≥ τ. \end{cases}

Proof. We delve into the definition of Π. By leveraging the structural property of the threshold policy and the properties of P_{∆,∆′}(a), we obtain the above system of linear equations. The complete proof can be found in Appendix D.

Remark 4. The size of the system of linear equations detailed in Theorem 1 is ω + 1.

Corollary 1. When τ = 0,
π_0 = \frac{P_{1,0}(1)}{E[T]\left[ 1 - P_{0,0}(1) + P_{1,0}(1) \right]}.
π_∆ = \sum_{i=0}^{∆-1} P_{i,∆}(1) π_i + P_{∆,∆}(1) \left( \frac{1}{E[T]} - \sum_{i=0}^{∆-1} π_i \right), \quad 1 ≤ ∆ ≤ t_{max}.
Π = \frac{ \sum_{i=1}^{t_{max}} \left( \sum_{k=1}^{i} P_{i,t_{max}+k}(1) \right) π_i }{ 1 - \sum_{i=1}^{t_{max}} P_{t_{max}+1,t_{max}+1+i}(1) }.
When τ = 1,
π_0 = \frac{P_{1,0}(1)}{p E[T] + P_{1,0}(1)}, \quad π_1 = \frac{p P_{1,0}(1) + p P_{1,1}(1)}{p E[T] + P_{1,0}(1)}.
π_∆ = \sum_{i=1}^{∆-1} P_{i,∆}(1) π_i + P_{∆,∆}(1) \left( \frac{1 - π_0}{E[T]} - \sum_{i=1}^{∆-1} π_i \right), \quad 2 ≤ ∆ ≤ t_{max} + 1.
Π = \frac{ \sum_{i=2}^{t_{max}+1} \left( \sum_{k=2}^{i} P_{i,t_{max}+k}(1) \right) π_i }{ 1 - \sum_{i=1}^{t_{max}} P_{t_{max}+2,t_{max}+2+i}(1) }.

Proof. The calculations follow similar steps as detailed in the proof of Theorem 1. The complete proof can be found in Appendix E.

We will calculate the expected AoII in the next subsection based on the above results.

C. Expected AoII

Let ¯∆_τ be the expected AoII achieved by threshold policy τ.
Then,
¯∆_τ = \sum_{∆=0}^{τ-1} C(∆, 0) π_∆ + \sum_{∆=τ}^{∞} C(∆, 1) π_∆, (10)
where C(∆, a) is the expected sum of AoII during the transmission of the update caused by the operation of a at state (∆, 0, −1). Note that C(∆, a) includes the AoII for being at state (∆, 0, −1).

Remark 5. In order to have a more intuitive understanding of the definition of C(∆, a), we use η to denote a possible path of the state during the transmission of the update and let H be the set of all possible paths. Moreover, we denote by C_η and P_η the sum of AoII and the probability associated with path η, respectively. Then,
C(∆, a) = \sum_{η∈H} P_η C_η.
For example, we consider the case of p_2 = 1, where the transmission takes 2 time slots to be delivered. Also, action a = 1 is taken at state (2, 0, −1).
Then, a sample path η of the state during the transmission can be the following:
(2, 0, −1) → (3, 1, 1) → (4, 0, −1).
By our definition, C_η = 2 + 3 = 5 and P_η = Pr[(3, 1, 1) | (2, 0, −1), a = 1] · Pr[(4, 0, −1) | (3, 1, 1), a = 1] for the above sample path.

In the following, we calculate C(∆, a). Similar to Section IV-A, we define C^t(∆, a) as the expected sum of AoII during the transmission of the update caused by action a at state (∆, 0, −1), given that the transmission takes t time slots. Then, under Assumption 1,
C(∆, a) = \begin{cases} ∆ & a = 0, \\ \sum_{t=1}^{t_{max}} p_t C^t(∆, 1) & a = 1, \end{cases} (11)
and, under Assumption 2,
C(∆, a) = \begin{cases} ∆ & a = 0, \\ \sum_{t=1}^{t_{max}} p_t C^t(∆, 1) + p_{t+} C^{t_{max}}(∆, 1) & a = 1. \end{cases} (12)
Hence, obtaining the expressions of C^t(∆, 1) is sufficient. To this end, we define C_k(∆) as the expected AoII k time slots after the transmission starts at state (∆, 0, −1), given that the transmission is still in progress.
Then, we have the following lemma.

Lemma 3. C^t(∆, 1) is given by
C^t(∆, 1) = \sum_{k=0}^{t-1} C_k(∆),
where
C_k(∆) = \begin{cases} \sum_{h=1}^{k} h\, p(k-h)\, p\, (1-p)^{h-1} & ∆ = 0, \\ \sum_{h=1}^{k-1} h\, (1 - p(k-h))\, p\, (1-p)^{h-1} + (∆ + k)(1-p)^k & ∆ > 0. \end{cases}

Proof. The expression of C_k(∆) is obtained by analyzing the system dynamics. The complete proof can be found in Appendix F.

Next, we calculate the expected AoII achieved by the threshold policy. We start with the case of τ = ∞.

Theorem 2. The expected AoII achieved by the threshold policy with τ = ∞ is
¯∆_∞ = \frac{1}{2p}.
Proof. In this case, the transmitter will never initiate any transmissions. Hence, the state transitions are straightforward. The complete proof can be found in Appendix G.

In the following, we focus on the case where τ is finite. We recall that the expected AoII is given by (10). The problem arises because of the infinite sum. To overcome this, we adopt a similar approach as proposed in Section IV-B. More precisely, we leverage the structural property of the threshold policy and define Σ ≜ \sum_{∆=ω}^{∞} C(∆, 1) π_∆. Then, equation (10) can be written as
¯∆_τ = \sum_{i=0}^{τ-1} C(i, 0) π_i + \sum_{i=τ}^{ω-1} C(i, 1) π_i + Σ.
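The expressions in Lemma 3 and Theorem 2 lend themselves to numerical sanity checks. The sketch below is ours, not part of the paper: it re-implements C_k(∆) and compares it with an exhaustive enumeration of source sample paths, then checks Theorem 2 using the stationary law π_0 = 1/2, π_∆ = p(1−p)^{∆−1}/2 of the never-transmit AoII chain, which is our own derivation from the stated dynamics.

```python
from itertools import product

def p_same(t, p):
    # p(t) = (1 + (1 - 2p)^t) / 2 for the symmetric two-state source
    return (1 + (1 - 2 * p) ** t) / 2

def C_k(delta, k, p):
    """C_k(Delta) as given in Lemma 3."""
    if delta == 0:
        return sum(h * p_same(k - h, p) * p * (1 - p) ** (h - 1)
                   for h in range(1, k + 1))
    return (sum(h * (1 - p_same(k - h, p)) * p * (1 - p) ** (h - 1)
                for h in range(1, k))
            + (delta + k) * (1 - p) ** k)

def C_k_bruteforce(delta, k, p):
    """E[AoII k slots into the transmission], enumerating every source
    path: the estimate is frozen, the source flips w.p. p each slot,
    and AoII resets to 0 on a match and grows by 1 otherwise."""
    total = 0.0
    for flips in product((0, 1), repeat=k):
        prob, match, aoii = 1.0, delta == 0, delta
        for f in flips:
            prob *= p if f else 1 - p
            match = (not match) if f else match
            aoii = 0 if match else aoii + 1
        total += prob * aoii
    return total

p = 0.3
for delta in (0, 2, 5):
    for k in range(7):
        assert abs(C_k(delta, k, p) - C_k_bruteforce(delta, k, p)) < 1e-10

# Theorem 2: with tau = infinity the AoII chain resets w.p. p each slot
# and increments otherwise; the geometric stationary law below (our
# derivation) satisfies the balance equations and has mean 1/(2p).
N = 200                                   # truncation; the tail is geometric
pi = [0.5] + [p * (1 - p) ** (d - 1) / 2 for d in range(1, N)]
assert abs(sum(pi) - 1.0) < 1e-10
assert abs(pi[1] - p * pi[0]) < 1e-12     # balance at Delta = 1
for d in range(2, N):
    assert abs(pi[d] - (1 - p) * pi[d - 1]) < 1e-12
mean = sum(d * v for d, v in enumerate(pi))
assert abs(mean - 1 / (2 * p)) < 1e-8     # matches 1/(2p)
```

For the Remark 5 example (p_2 = 1, start at ∆ = 2), the check reproduces C(2, 1) = C_0(2) + C_1(2) = 2 + 3(1 − p).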
As we have obtained the expressions of π_∆ and C(∆, a) in previous subsections, it is sufficient to obtain the expression of Σ.

Theorem 3. Under Assumption 1 and for 0 ≤ τ < ∞,
Σ = \frac{ \sum_{t=1}^{t_{max}} \left[ p_t P^t_{1,1+t}(1) \left( \sum_{i=ω-t}^{ω-1} C(i, 1) π_i \right) + ∆′_t Π_t \right] }{ 1 - \sum_{t=1}^{t_{max}} p_t P^t_{1,1+t}(1) },
where
Π_t = p_t P^t_{1,1+t}(1) \left( \sum_{i=ω-t}^{ω-1} π_i + Π \right), \quad ∆′_t = \sum_{i=1}^{t_{max}} p_i \left( \frac{t - t(1-p)^i}{p} \right).

Under Assumption 2 and for 0 ≤ τ < ∞,
Σ = \frac{ \sum_{t=1}^{t_{max}} \left[ \left( \sum_{i=ω-t}^{ω-1} Υ(i + t, t) C(i, 1) π_i \right) + ∆′_t Π_t \right] }{ 1 - \sum_{t=1}^{t_{max}} Υ(ω + t, t) },
where
Υ(∆, t) = p_t P^t_{∆-t,∆}(1) + p_{t+} P^{t+}_{∆-t,∆}(1),
Π_t = \sum_{i=ω-t}^{ω-1} Υ(i + t, t) π_i + Υ(ω + t, t) Π,
∆′_t = \sum_{i=1}^{t_{max}} p_i \left( \frac{t - t(1-p)^i}{p} \right) + p_{t+} \left( \frac{t - t(1-p)^{t_{max}}}{p} \right).

Proof. We delve into the definition of Σ and repeatedly use the properties of C(∆, a) and P_{∆,∆′}(a). The complete proof can be found in Appendix H.

V. OPTIMAL POLICY

In this section, we find the optimal policy for M. To this end, we first prove that the optimal policy exists.

A. Existence of Optimal Policy

We first introduce the infinite horizon γ-discounted cost of M, where 0 < γ < 1 is a discount factor. Then, the expected γ-discounted cost under policy φ is
V_{φ,γ}(s) = E_φ \left[ \sum_{t=0}^{∞} γ^t C(s_t) \mid s \right], (13)
where s_t is the state of M at time slot t. We define V_γ(s) ≜ \inf_φ V_{φ,γ}(s) as the best that can be achieved. Equivalently, V_γ(s) is the value function associated with the γ-discounted version of M. Hence, V_γ(s) satisfies the corresponding Bellman equation:
V_γ(s) = \min_{a∈A} \left\{ C(s) + γ \sum_{s′∈S} P_{s,s′}(a) V_γ(s′) \right\}.
The value iteration algorithm is a canonical algorithm to calculate V_γ(s). Let V_{γ,ν}(s) be the estimated value function at iteration ν. Then, the estimated value function is updated in the following way:
V_{γ,ν+1}(s) = \min_{a∈A} \left\{ C(s) + γ \sum_{s′∈S} P_{s,s′}(a) V_{γ,ν}(s′) \right\}. (14)

Lemma 4. The estimated value function will converge to the value function as ν → ∞. More precisely, \lim_{ν→∞} V_{γ,ν}(s) = V_γ(s).

Proof. According to [21, Propositions 1 and 3], it is sufficient to show that V_γ(s) is finite. To this end, we consider the policy φ that never initiates any transmissions.
According to (13), we have
V_{φ,γ}(s) = E_φ \left[ \sum_{t=0}^{∞} γ^t C(s_t) \mid s \right] ≤ \sum_{t=0}^{∞} γ^t (∆ + t) = \frac{∆}{1 - γ} + \frac{γ}{(1 - γ)^2} < ∞.
Then, by definition, we have V_γ(s) ≤ V_{φ,γ}(s) < ∞. Hence, we can conclude that the value iteration reported in (14) will converge to the value function.

Leveraging the convergence of the value iteration algorithm, we can prove the following structural property of V_γ(s).

Lemma 5. V_γ(s) is increasing in ∆ when ∆ > 0.

Proof. We recall that V_γ(s) can be calculated using the value iteration algorithm. Hence, the monotonicity of V_γ(s) can be proved via mathematical induction. The complete proof can be found in Appendix I.
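Update (14) is a standard γ-discounted Bellman iteration, and the convergence in Lemma 4 reflects the γ-contraction of the Bellman operator in the sup norm. The toy sketch below runs it on a truncated state space; the costs and kernel are hypothetical stand-ins of ours, not the paper's M.

```python
import numpy as np

gamma, N = 0.9, 60
p = 0.3

# Toy truncated instance (hypothetical stand-in for M(m)): cost
# C(s) = Delta; action 0 idles (reset w.p. p, else Delta + 1, capped);
# action 1 "transmits" under a made-up kernel landing mostly at 0.
P = np.zeros((2, N, N))
for s in range(N):
    P[0, s, 0] += p
    P[0, s, min(s + 1, N - 1)] += 1 - p
    P[1, s, 0] += 0.8
    P[1, s, min(s + 2, N - 1)] += 0.2
C = np.arange(N, dtype=float)

V = np.zeros(N)
diffs = []
for _ in range(300):
    Q = C[None, :] + gamma * (P @ V)      # one sweep of update (14)
    V_new = Q.min(axis=0)
    diffs.append(np.abs(V_new - V).max())
    V = V_new

# sup-norm gaps shrink by at least a factor gamma per sweep
assert all(d2 <= gamma * d1 + 1e-12 for d1, d2 in zip(diffs, diffs[1:]))
assert diffs[-1] < 1e-10                  # converged, as in Lemma 4
```

The geometric decay of `diffs` is exactly the contraction argument: each sweep brings the iterate at least a factor γ closer to the fixed point V_γ.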
Now, we proceed with showing the existence of the optimal policy. To this end, we first define the stationary policy.

Definition 3 (Stationary policy). A stationary policy specifies a single action in each time slot.

Theorem 4. There exists a stationary policy that is optimal for M. Moreover, the minimum expected AoII is independent of the initial state.

Proof. We show that M verifies the two conditions given in [21]. Then, the results in the theorem are guaranteed by [21, Theorem].
The complete proof can be found in Appendix J.

We denote by φ∗ the optimal policy for M. Then, the next problem is how to find φ∗. The value iteration algorithm and the policy iteration algorithm are two of the best-known algorithms for solving an MDP. In the value iteration algorithm, the value function V(s) is computed iteratively until convergence. However, since the state space S is infinite, it is not feasible to compute the value function for all states. To make the calculation feasible, in Section V-B, an approximation algorithm is applied to obtain an approximated optimal policy φ̂∗, and φ̂∗ is proved to converge to φ∗. However, the choice of the approximation parameters can significantly affect the algorithm's complexity and may even lead to a non-optimal policy.
To avoid this problem, in Section V-C, we introduce the policy iteration algorithm and find φ∗ theoretically using the policy improvement theorem. We start with the value iteration algorithm in the following subsection.

January 18, 2023 DRAFT

B. Value Iteration Algorithm

In this subsection, we present the relative value iteration (RVI) algorithm that approximates φ∗. Direct application of RVI becomes impractical as the state space S is infinite. Hence, we use the approximating sequence method (ASM) [22]. To this end, we construct another MDP M(m) = (S(m), A, P(m), C) by truncating the value of ∆. More precisely, we impose

S(m) : ∆ ∈ {0, 1, ..., m}, i ∈ {−1, 0, 1}, t ∈ {0, 1, ..., tmax − 1},

where m is the predetermined maximal value of ∆. The transition probabilities from s ∈ S(m) to z ∈ S \ S(m) are redistributed to the states s′ ∈ S(m) in the following way:

P(m)_{s,s′}(a) = Ps,s′(a), if s′ = (∆′, t′, i′) with ∆′ < m,
P(m)_{s,s′}(a) = Ps,s′(a) + ∑_{z∈G(s′)} Ps,z(a), if s′ = (∆′, t′, i′) with ∆′ = m,

where G(s′) = {z = (∆, t, i) : ∆ > m, t = t′, i = i′}. The action space A and the instant cost C are the same as defined in M.

Theorem 5. The sequence of optimal policies for M(m) will converge to the optimal policy for M as m → ∞.

Proof.
The proof follows the same steps as those in the proof of [8, Theorem 1]. The complete proof can be found in Appendix K.

Then, we can apply RVI to M(m) and treat the resulting policy as an approximation of φ∗. The pseudocode of RVI is given in Algorithm 1. However, the choice of the approximation parameter m is crucial. A large m can add unnecessary computational complexity, while a small m may lead to a non-optimal policy. Therefore, in the following subsections, we use the policy iteration algorithm and the policy improvement theorem to find φ∗ theoretically. We start with introducing the policy iteration algorithm.
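As a concrete illustration of the RVI procedure of Algorithm 1, the sketch below runs relative value iteration on a generic finite MDP (the two-state, two-action example, its costs, and its transition probabilities are hypothetical toy values, not the model of this paper):

```python
def rvi(states, actions, P, C, eps=1e-9, max_iter=10_000):
    """Relative value iteration in the spirit of Algorithm 1.

    P[a][s][s2] is the transition probability s -> s2 under action a,
    C[s] the instant cost; the first state plays the role of s_ref.
    """
    V = {s: 0.0 for s in states}
    s_ref = states[0]
    for _ in range(max_iter):
        # H_{s,a} <- C(s) + sum_s' P(s' | s, a) V(s')
        H = {s: {a: C[s] + sum(P[a][s][s2] * V[s2] for s2 in states)
                 for a in actions} for s in states}
        Q = {s: min(H[s].values()) for s in states}
        # Subtract the reference value to keep the iterates bounded.
        V_new = {s: Q[s] - Q[s_ref] for s in states}
        done = max(abs(V_new[s] - V[s]) for s in states) <= eps
        V = V_new
        if done:
            break
    policy = {s: min(H[s], key=H[s].get) for s in states}  # greedy w.r.t. H
    return V, policy

# Hypothetical 2-state, 2-action MDP: state 1 is costly, a = 1 escapes it.
states, actions = [0, 1], [0, 1]
C = {0: 0.0, 1: 1.0}
P = {0: {0: {0: 0.9, 1: 0.1}, 1: {0: 0.0, 1: 1.0}},   # a = 0: idle
     1: {0: {0: 0.9, 1: 0.1}, 1: {0: 0.9, 1: 0.1}}}   # a = 1: transmit
V, policy = rvi(states, actions, P, C)
```

On this toy chain the iterates converge to the relative values V(0) = 0, V(1) = 1, and the greedy policy takes a = 1 only in the costly state.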
Algorithm 1 Relative Value Iteration
1: procedure RVI(M(m), ϵ)
2:     V0(s) ← 0 for s ∈ S(m); ν ← 0
3:     Choose sref ∈ S(m) arbitrarily
4:     repeat
5:         for s ∈ S(m) do
6:             for a ∈ A do
7:                 Hs,a ← C(s) + ∑_{s′} P(m)_{s,s′}(a) Vν(s′)
8:             Qν+1(s) ← min_a {Hs,a}
9:         for s ∈ S(m) do
10:            Vν+1(s) ← Qν+1(s) − Qν+1(sref)
11:        ν ← ν + 1
12:    until max_s {|Vν(s) − Vν−1(s)|} ≤ ϵ
13:    return φ̂∗ ← argmin_a {Hs,a}

C. Policy Iteration Algorithm

The policy iteration algorithm iterates between the following two steps until convergence, which happens when two consecutive iterations produce equivalent policies.

1) The first step is policy evaluation. In this step, we calculate the value function Vφ(·) and the expected AoII θφ resulting from the adoption of some policy φ. More precisely, the value function and the expected AoII are obtained by solving the following system of linear equations.
Vφ(s) + θφ = C(s) + ∑_{s′∈S} Pφ_{s,s′} Vφ(s′), s ∈ S, (15)

where Pφ_{s,s′} is the state transition probability from s to s′ when policy φ is adopted. Note that (15) forms an underdetermined system. Hence, we can select any state s as a reference state and set the corresponding value function to 0. In this way, we can obtain a unique solution.

2) The second step is policy improvement. In this step, we obtain a new policy φ′ by applying the Vφ(·) obtained in the first step to the Bellman equation. More precisely, the action suggested by φ′ at state s is determined by

φ′(s) = argmin_{a∈A} { C(s) + ∑_{s′∈S} Ps,s′(a) Vφ(s′) }.

The pseudocode for the policy iteration algorithm is given in Algorithm 2.
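The two steps can be sketched in code; in this sketch the policy-evaluation step solves (15) iteratively with a pinned reference state rather than via an explicit linear solver (a toy two-state, two-action MDP with hypothetical costs and transitions, not the paper's implementation):

```python
def evaluate(P_phi, C, ref=0, eps=1e-12, max_iter=100_000):
    """Solve (15) under a fixed policy by iteration, pinning V(ref) = 0.

    P_phi[s][s2] is the transition probability s -> s2 under the fixed
    policy; returns (V, theta). Assumes the induced chain is unichain.
    """
    n = len(C)
    V = [0.0] * n
    theta = 0.0
    for _ in range(max_iter):
        theta = C[ref] + sum(P_phi[ref][s2] * V[s2] for s2 in range(n))
        V_new = [C[s] + sum(P_phi[s][s2] * V[s2] for s2 in range(n)) - theta
                 for s in range(n)]
        if max(abs(V_new[s] - V[s]) for s in range(n)) <= eps:
            return V_new, theta
        V = V_new
    return V, theta

def policy_iteration(P, C):
    """Algorithm 2: alternate policy evaluation and greedy improvement."""
    n, actions = len(C), list(P.keys())
    phi = [actions[0]] * n
    while True:
        V, theta = evaluate([P[phi[s]][s] for s in range(n)], C)
        phi_new = [min(actions,
                       key=lambda a: C[s] + sum(P[a][s][s2] * V[s2]
                                                for s2 in range(n)))
                   for s in range(n)]
        if phi_new == phi:
            return phi, theta
        phi = phi_new

# Hypothetical MDP: state 1 is costly; a = 1 ("transmit") escapes it.
P = {0: [[0.9, 0.1], [0.0, 1.0]],   # a = 0: idle
     1: [[0.9, 0.1], [0.9, 0.1]]}   # a = 1: transmit
C = [0.0, 1.0]
phi, theta = policy_iteration(P, C)
```

On this toy chain the algorithm converges to the policy φ = [0, 1] that acts only in the costly state, with average cost θ = 0.1.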
Algorithm 2 Policy Iteration
1: procedure PI(M)
2:     Choose φ′(s) ∈ A arbitrarily for all s ∈ S
3:     repeat
4:         φ(s) ← φ′(s) for all s ∈ S
5:         (Vφ(s), θφ) ← POLICYEVALUATION(M, φ(s))
6:         φ′(s) ← POLICYIMPROVEMENT(M, Vφ(s))
7:     until φ′(s) = φ(s) for all s ∈ S
8:     return (φ∗, θ) ← (φ(s), θφ)

With the policy iteration algorithm in mind, we can proceed with presenting the policy improvement theorem.

Theorem 6 (Policy improvement theorem). Suppose that we have obtained the value function resulting from the operation of a policy A and that the policy improvement step has produced a policy B. If B is different from A, then θA ≥ θB. When the policy improvement step converges (i.e., A and B are equivalent), the converged policy is optimal.

Proof. The proof follows the steps presented in [23, pp.
42-43]. The complete proof can be found in Appendix L.

Before finding φ∗, we first simplify the Bellman equation shown in (4) in the next subsection to make the process of finding φ∗ more concise and straightforward.

D. Simplifying the Bellman Equation

We note that the state transitions are complex and intertwined. Consequently, direct analysis of the Bellman equation (4) is complicated. In the following, we will simplify the Bellman equation. To this end, we leverage the fact that the feasible action set depends on the state. More specifically, when the channel is busy (i.e.
, i ≠ −1), the only feasible action is a = 0. Hence, the transmitter's actions at these states are fixed, so the minimum operators in (4) are avoided for these states. Let S−1 ≜ {s = (∆, t, i) : i = −1} be the set of states at which the channel is idle. Then,

V(s) + θ = min_{a∈A} { C(s) + ∑_{s′∈S} Ps,s′(a) V(s′) } = C(s) + ∑_{s′∈S} Ps,s′(0) V(s′), s ∈ S \ S−1.

Then, for each s ∈ S−1, by repeatedly replacing each V(s) with s ∈ S \ S−1 by its corresponding Bellman equation, we can obtain a Bellman equation that involves only V(s) with s ∈ S−1. We know that s = (∆, 0, −1) for s ∈ S−1. Hence, we abbreviate V(∆, 0, −1) as V(∆). Then, we obtain the following Bellman equation.
V(∆) + θ = min_{a∈{0,1}} { C(∆, a) − θ(a) + ∑_{∆′≥0} P∆,∆′(a) V(∆′) }, ∆ ≥ 0, (16)

where θ(a) = 0 if a = 0, and θ(a) = (ET − 1)θ if a = 1. Note that ET, P∆,∆′(a), and C(∆, a) are those defined and discussed in Section IV. Hence, it is sufficient to use (16) instead of (4) to determine the optimal action at state (∆, 0, −1). Although equation (16) may seem complicated at first glance, its advantages will be fully demonstrated in the following subsection.

E. Optimal Policy via Policy Iteration Algorithm

In this subsection, we find φ∗ theoretically. To this end, we first introduce two conditions that are essential to the analysis later on.

Condition 1. The condition is the following.
∆̄1 ≤ min{ ∆̄0, (1 + (1 − p)σ)/2 },

where, for Assumption 1,

σ = [ ∑_{t=1}^{tmax} pt (1 − (1 − p)^t)/p ] / [ 1 − ∑_{t=1}^{tmax} p·pt(1 − p)^{t−1} ],

and, for Assumption 2,

σ = [ ∑_{t=1}^{tmax} pt (1 − (1 − p)^t)/p + pt+ (1 − (1 − p)^{tmax})/p ] / [ 1 − ( ∑_{t=1}^{tmax} p·pt(1 − p)^{t−1} + pt+(1 − p)^{tmax} ) ].

∆̄0 and ∆̄1 are the expected AoII resulting from the adoption of the threshold policy with τ = 0 and τ = 1, respectively.

Theorem 7. Under Condition 1, the optimal policy for M is the threshold policy with τ = 1.

Proof. The value iteration algorithm detailed in Section V-B provides us with a good guess of the optimal policy. Then, we theoretically prove its optimality using the policy improvement theorem. The general procedure for the optimality proof can be summarized as follows.
1) Policy Evaluation: We calculate the value function resulting from the adoption of the threshold policy with τ = 1.

2) Policy Improvement: We apply the value function obtained in the previous step to the Bellman equation and verify that the resulting policy remains the same. Then, the policy improvement theorem tells us that the resulting policy is optimal.

The complete proof can be found in Appendix M.

Remark 6. Note that Condition 1 is a sufficient condition for the threshold policy with τ = 1 to be optimal, but not a necessary condition.

Remark 7. When the system fails to satisfy Condition 1, we can use the value iteration algorithm introduced in Section V-B to obtain a good estimate of φ∗.

VI.
NUMERICAL RESULTS

In this section, we numerically verify Condition 1 and analyze the performance of the optimal policy.

A. Verification of Condition 1

As the closed-form expressions of ∆̄0 and ∆̄1 are given in Section IV, the inequality in Condition 1 is easy to verify. We numerically verify Condition 1 for the following systems.

• The system adopts Assumption 1/Assumption 2 and Geometric transmission delay with success probability ps; more precisely, pt = (1 − ps)^{t−1} ps.
• The system adopts Assumption 1 and the transmission delay follows the Zipf distribution with constant a; more precisely, pt = t^{−a} / ∑_{i=1}^{tmax} i^{−a}, 1 ≤ t ≤ tmax.
• The system adopts Assumption 1 and pt = (1/2)(1{t = 1} + 1{t = tmax}).
For each of the above systems, the parameters take the following values.

• 0.05 ≤ p ≤ 0.45 with a step size of 0.05.
• 2 ≤ tmax ≤ 15 with a step size of 1.
• 0 ≤ ps ≤ 0.95 with a step size of 0.05.
• 0 ≤ a ≤ 5 with a step size of 0.25.

The numerical results show that all the systems mentioned above satisfy Condition 1.
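The three delay distributions and the parameter grids above can be written out directly (a sketch of the inputs to the verification only; checking Condition 1 itself requires the closed-form expressions of ∆̄0 and ∆̄1 from Section IV, which are not reproduced here):

```python
def geometric_delay(ps, tmax):
    """pt = (1 - ps)^(t - 1) * ps for 1 <= t <= tmax (residual mass beyond
    tmax is handled by Assumption 1/2 and is not renormalized here)."""
    return [(1 - ps) ** (t - 1) * ps for t in range(1, tmax + 1)]

def zipf_delay(a, tmax):
    """pt = t^(-a) / sum_i i^(-a) for 1 <= t <= tmax."""
    z = sum(i ** (-a) for i in range(1, tmax + 1))
    return [t ** (-a) / z for t in range(1, tmax + 1)]

def two_point_delay(tmax):
    """pt = (1/2) * (1{t = 1} + 1{t = tmax})."""
    return [0.5 * ((t == 1) + (t == tmax)) for t in range(1, tmax + 1)]

# Parameter grids used in the verification.
p_grid = [0.05 * k for k in range(1, 10)]     # 0.05 <= p <= 0.45, step 0.05
tmax_grid = list(range(2, 16))                # 2 <= tmax <= 15, step 1
ps_grid = [0.05 * k for k in range(0, 20)]    # 0 <= ps <= 0.95, step 0.05
a_grid = [0.25 * k for k in range(0, 21)]     # 0 <= a <= 5, step 0.25
```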
Then, we can conclude that the corresponding optimal policy is the threshold policy with τ = 1, the performance of which is presented in the next subsection.

Remark 8. The Zipf distribution reduces to the Uniform distribution when a = 0, and the Geometric transmission delay reduces to a deterministic transmission delay when ps = 0. We ignore the case of p = 0 because the dynamic source does not change state in this case. Similarly, we are not interested in the case of p = 0.5 because the state of the dynamic source is independent of the previous state in this case. Also, we exclude the case of ps = 1 because, in this case, the transmission time is deterministic and equal to 1 time slot.

B. Optimal Policy Performance

In this subsection, we analyze the performance of the optimal policy.
To this end, we consider the system where the transmission delay follows a Geometric distribution with success probability ps. Moreover, we compare the performance of the optimal policy with that of the threshold policies with τ = 0 and τ = ∞. All the results are calculated using the expressions in Section IV.

a) The effect of p: In this case, we fix tmax = 5 and ps = 0.7. Then, we vary p and plot the corresponding results in Fig. 3. In the figure, to better show the performance of the optimal policy, we only show part of the results for the threshold policy with τ = ∞.

[Fig. 3: Illustrations of the expected AoII as a function of p and τ, with curves for τ = 0, τ = 1, and τ = ∞. (a) Performance under Assumption 1. (b) Performance under Assumption 2. We set the upper limit on the transmission time tmax = 5 and the success probability of the Geometric distribution ps = 0.7.]

We notice that, as p increases, the expected AoIIs achieved by the threshold policies with τ = 0 and τ = 1 increase.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' This is because when p is large, the dynamic sources will be inclined to switch between states.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' Therefore, the state of the dynamic source is more unpredictable, leading to an increase in the achieved expected AoIIs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' Meanwhile, the expected AoII achieved by the threshold policy with τ = ∞ decreases as p increases.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' To explain this, we first recall that, under the threshold policy with τ = ∞, the receiver’s estimate will not change.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' Also, when p is large, the dynamic source will switch states frequently.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' Therefore, the probability of a situation where the receiver’s estimate is always incorrect is small, which makes the resulting AoII small.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' Also, we notice that Assumption 1 and Assumption 2 lead to almost the same performance.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' To explain this, we first note that the only difference between Assumption 1 and Assumption 2 is whether the update is delivered or discarded when the transmission lasts to the tmaxth time slot after the transmission starts.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' However, under our choices of ps and tmax, the transmission time of an update rarely reaches tmax time slots.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' Even if it reaches tmax time slots, delivery or discard does not significantly impact the performance, as the receiver’s estimate can be correct or incorrect regardless of whether the update is delivered.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' Therefore, Assumption 1 and Assumption 2 yield almost the same performance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' b) The effect of ps: In this case, we fix tmax = 5 and p = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content='35.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' Then, we vary ps and plot the corresponding results in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' 4.' 
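The trends in these experiments can be reproduced qualitatively with a small Monte Carlo sketch. The model below is an illustrative reading of the setup rather than the paper's exact definitions: a symmetric two-state source flips with probability p in each slot, the AoII grows by one per slot while the receiver's estimate is wrong and resets to zero otherwise, and a threshold-τ policy starts a transmission whenever the channel is idle and the AoII is at least τ; the transmission time is Geometric(ps) capped at tmax, and the update is always delivered (Assumption 1). The function name and the event ordering within a slot are our own assumptions.

```python
import random

def simulate_aoii(p, ps, tmax, tau, horizon=200_000, seed=0):
    """Monte Carlo estimate of the time-average AoII under a threshold policy.

    Illustrative model only (not the paper's exact definitions): symmetric
    two-state Markov source, AoII-based threshold policy, transmission time
    min(Geometric(ps), tmax) with guaranteed delivery (Assumption 1).
    """
    rng = random.Random(seed)
    source, estimate = 0, 0
    in_flight = 0       # source value captured when the transmission started
    busy = 0            # remaining slots of the ongoing transmission
    aoii = 0
    aoii_total = 0
    for _ in range(horizon):
        # 1) the source evolves: flip with probability p
        if rng.random() < p:
            source ^= 1
        # 2) an ongoing transmission progresses; on completion, deliver
        if busy > 0:
            busy -= 1
            if busy == 0:
                estimate = in_flight
        # 3) AoII update: reset when the estimate is correct, else grow
        aoii = 0 if estimate == source else aoii + 1
        aoii_total += aoii
        # 4) threshold policy: start a transmission if idle and AoII >= tau
        if busy == 0 and aoii >= tau:
            duration = 1
            while duration < tmax and rng.random() > ps:
                duration += 1
            busy, in_flight = duration, source
    return aoii_total / horizon
```

Sweeping p with tau = 0 or tau = 1 should show the increasing trend discussed above, while the τ = ∞ policy (never transmit) can be mimicked with a very large tau.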
[Figure omitted] Fig. 4: Illustration of the expected AoII as a function of ps and τ: (a) performance under Assumption 1; (b) performance under Assumption 2. We set the upper limit on the transmission time tmax = 5 and the source dynamic p = 0.35.

The figure shows that the expected AoIIs achieved by the threshold policies with τ = 0 and τ = 1 decrease as ps increases. The reason is as follows. As ps increases, the expected transmission time of an update decreases, meaning that updates are more likely to be delivered within the first few time slots. As a result, the receiver receives fresher information, and thus the expected AoII decreases. Moreover, the performance gap between the threshold policies with τ = 1 and τ = 0 is small when ps is large. To explain this, we notice that the gap exists because the updates transmitted when the AoII is zero provide no new information to the receiver, while their transmission occupies the channel for a few time slots. Such an action therefore deprives the transmitter of the ability to send new updates for the next few time slots without giving the receiver any new information. When ps is large, the expected transmission time of an update is small, so transmitting when the AoII is zero becomes less costly and the gap narrows.

c) The effect of tmax: In this case, we fix ps = 0.7 and p = 0.35. Then, we vary tmax and plot the corresponding results in Fig. 5.
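The narrowing gap can also be seen analytically. Under Assumption 1 the transmission time is min(G, tmax) with G ~ Geometric(ps), whose mean is (1 - (1 - ps)^tmax)/ps; as ps grows this approaches one slot, so a transmission started at zero AoII blocks the channel only briefly. A one-line helper (our own, not from the paper) computes this mean:

```python
def expected_transmission_time(ps, tmax):
    """E[min(G, tmax)] for G ~ Geometric(ps) supported on {1, 2, ...}.

    Uses E[min(G, tmax)] = sum_{k=0}^{tmax-1} P(G > k)
                         = sum_{k=0}^{tmax-1} (1 - ps)^k
                         = (1 - (1 - ps)^tmax) / ps.
    """
    return (1.0 - (1.0 - ps) ** tmax) / ps
```

With tmax = 5 as in Fig. 4, the mean drops from about 2.3 slots at ps = 0.4 to about 1.1 slots at ps = 0.9.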
From the figure, we can see that the effect of tmax on the performance of the policies is only noticeable when tmax is small. This is because, under our choice of ps, most updates are delivered within the first few time slots. Therefore, increasing tmax does not significantly affect the performance.

[Figure omitted] Fig. 5: Illustration of the expected AoII as a function of tmax and τ: (a) performance under Assumption 1; (b) performance under Assumption 2. We set the success probability of the Geometric distribution ps = 0.7 and the source dynamic p = 0.35.

VII. CONCLUSION

In this paper, we investigate the problem of minimizing the Age of Incorrect Information (AoII) over a channel with random delay. We study a slotted-time system in which a transmitter observes a dynamic source and sends updates to a remote receiver through a channel with random delay. To facilitate the analysis, we consider two cases. The first case assumes that the transmission time has an upper bound and that the update is always delivered. The second case assumes that the system automatically discards an update when its transmission lasts too long. We aim to find when the transmitter should initiate a transmission so as to minimize the AoII. To this end, we first characterize the optimization problem as a Markov decision process and calculate the expected AoII achieved by the threshold policy precisely using a Markov chain. Next, we prove that an optimal policy exists and provide the relative value iteration algorithm to estimate it. Then, with the help of the policy improvement theorem, we prove that, under Condition 1, the optimal policy is the threshold policy with τ = 1. Finally, we numerically verify Condition 1 for various system parameters and analyze the performance of the optimal policy.
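The conclusion mentions estimating the optimal policy via relative value iteration. The generic form of that algorithm for a finite average-cost MDP can be sketched as follows; the paper's specific state space, transition kernel, and AoII cost are not reproduced here, so the arrays P and c are placeholders the reader must supply.

```python
import numpy as np

def relative_value_iteration(P, c, tol=1e-9, max_iter=100_000):
    """Generic relative value iteration for a finite average-cost MDP.

    P: (A, S, S) array, P[a, s, s'] = transition probability under action a.
    c: (S, A) array of one-step costs.
    Returns (g, policy): the optimal average cost and a greedy policy.
    """
    A, S, _ = P.shape
    h = np.zeros(S)                       # relative value function
    g = 0.0
    for _ in range(max_iter):
        # Q[s, a] = c(s, a) + sum_{s'} P[a, s, s'] * h[s']
        Q = c + np.einsum('asj,j->sa', P, h)
        Th = Q.min(axis=1)                # Bellman operator applied to h
        g = Th[0]                         # average-cost estimate
        h_new = Th - g                    # pin h at reference state 0
        if np.max(np.abs(h_new - h)) < tol:
            h = h_new
            break
        h = h_new
    return g, Q.argmin(axis=1)
```

As a sanity check, a two-state chain with a single action, uniform transitions, and costs (1, 3) has optimal average cost 2, which the iteration recovers in a few steps.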
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' Bala Sukumaran, and C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' Murthy, “On the relationship between mean absolute error and age of incorrect information in the estimation of a piecewise linear signal over noisy channels,” IEEE Communications Letters, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' 26, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' 11, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' 2576–2580, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' [13] B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' Joshi, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' Bhat, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' Bharath, and R.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' Vaze, “Minimization of age of incorrect estimates of autoregressive markov processes,” in 2021 19th International Symposium on Modeling and Optimization in Mobile, Ad hoc, and Wireless Networks (WiOpt).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' IEEE, 2021, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' 1–8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' [14] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' Kam, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' Kompella, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' Ephremides, “Age of incorrect information for remote estimation of a binary markov source,” in IEEE INFOCOM 2020-IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' IEEE, 2020, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' 1–6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' [15] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' Sun, Y.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' Polyanskiy, and E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' Uysal-Biyikoglu, “Remote estimation of the wiener process over a channel with random delay,” in 2017 IEEE International Symposium on Information Theory (ISIT).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' IEEE, 2017, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' 321–325.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' [16] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' Sun, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' Polyanskiy, and E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' Uysal, “Sampling of the wiener process for remote estimation over a channel with random delay,” IEEE Transactions on Information Theory, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' 66, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' 2, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' 1118–1135, 2019.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' [17] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' Ornee and Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' Sun, “Sampling for remote estimation through queues: Age of information and beyond,” in 2019 International Symposium on Modeling and Optimization in Mobile, Ad Hoc, and Wireless Networks (WiOPT).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' IEEE, 2019, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' 1–8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' [18] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' Kam, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' Kompella, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' Nguyen, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' E.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' Wieselthier, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' Ephremides, “Towards an effective age of information: Remote estimation of a markov source,” in IEEE INFOCOM 2018-IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' IEEE, 2018, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' 367–372.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' [19] R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' Sutton and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' Barto, Reinforcement learning: An introduction.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' MIT press, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' [20] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' Postel, “Internet protocol,” Tech.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' Rep.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=', 1981.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' January 18, 2023 DRAFT 31 [21] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' Sennott, “Average cost optimal stationary policies in infinite state markov decision processes with unbounded costs,” Operations Research, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' 37, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' 4, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' 626–633, 1989.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' [22] ——, “On computing average cost optimal policies with application to routing to parallel queues,” Mathematical methods of operations research, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' 45, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5NE6T4oBgHgl3EQflhFb/content/2301.06150v1.pdf'} +page_content=' 1, pp.' 
APPENDIX A
DETAILS OF STATE TRANSITION PROBABILITY

We first elaborate on the individual transition of ∆ by dividing it into the following cases.

∆ = 0 and the receiver's estimates are the same at states s and s′. In this case, ∆′ = 0 when the dynamic source remains in the same state; otherwise, ∆′ = 1. Hence,
∆′ = 0 w.p. 1 − p, and ∆′ = 1 w.p. p.

∆ = 0 and the receiver's estimates are different at states s and s′. In this case, ∆′ = 0 when the dynamic source flips its state; otherwise, ∆′ = 1. Hence,
∆′ = 0 w.p. p, and ∆′ = 1 w.p. 1 − p.

∆ > 0 and the receiver's estimates are the same at states s and s′. In this case, ∆′ = ∆ + 1 when the dynamic source remains in the same state; otherwise, ∆′ = 0. Hence,
∆′ = 0 w.p. p, and ∆′ = ∆ + 1 w.p. 1 − p.

∆ > 0 and the receiver's estimates are different at states s and s′. In this case, ∆′ = ∆ + 1 when the dynamic source flips its state; otherwise, ∆′ = 0. Hence,
∆′ = 0 w.p. 1 − p, and ∆′ = ∆ + 1 w.p. p.

Hence, in the following, we only state whether the receiver's estimates are the same at states s and s′ and omit the rest of the discussion on the transition of ∆. To make the notation clearer, we write Ps,s′(a) as Pr[(∆′, t′, i′) | (∆, t, i), a] in this proof. Then, we distinguish between the following cases.

s = (0, 0, −1). In this case, the channel is idle, so the feasible actions are a ∈ {0, 1}. When the transmitter decides not to initiate a new transmission (i.e., a = 0), t′ = 0 and i′ = −1.
Moreover, the receiver's estimate remains the same. Hence,
Pr[(0, 0, −1) | (0, 0, −1), a = 0] = 1 − p,
Pr[(1, 0, −1) | (0, 0, −1), a = 0] = p.
When the transmitter decides to initiate a new transmission (i.e., a = 1), the update will be delivered after a random amount of time T. When T > 1, which happens with probability Pr(T > 1 | 0), the channel will be busy at the next time slot and t′ = 1 as the transmission starts. Since the transmission happens when ∆ = 0, we know i′ = 0. Moreover, the receiver's estimate remains the same, since no new update will be delivered. Hence,
Pr[(0, 1, 0) | (0, 0, −1), a = 1] = Pr(T > 1 | 0)(1 − p),
Pr[(1, 1, 0) | (0, 0, −1), a = 1] = Pr(T > 1 | 0)p.
When T = 1, which happens with probability 1 − Pr(T > 1 | 0), the update will be delivered at the next time slot. Hence, the channel will be available for a new transmission at the next time slot, which means that t′ = 0 and i′ = −1. Since the transmission started when ∆ = 0, the newly arrived update brings no new information to the receiver, so the receiver's estimate remains the same. Hence,
Pr[(0, 0, −1) | (0, 0, −1), a = 1] = (1 − Pr(T > 1 | 0))(1 − p),
Pr[(1, 0, −1) | (0, 0, −1), a = 1] = (1 − Pr(T > 1 | 0))p.

s = (0, t, 0). In this case, the channel is busy.
Hence, the only feasible action is a = 0. When the update will not arrive at the next time slot, which happens with probability Pr(T > t + 1 | t), i′ = i since both the transmitting update and the receiver's estimate remain the same, and t′ = t + 1 as the transmission continues. Hence,
Pr[(0, t + 1, 0) | (0, t, 0)] = Pr(T > t + 1 | t)(1 − p),
Pr[(1, t + 1, 0) | (0, t, 0)] = Pr(T > t + 1 | t)p.
When the update arrives at the next time slot, which happens with probability 1 − Pr(T > t + 1 | t), t′ = 0 and i′ = −1 by definition. Since i = 0, the newly arrived update brings no new information to the receiver, so the receiver's estimate remains the same. Hence,
Pr[(0, 0, −1) | (0, t, 0)] = (1 − Pr(T > t + 1 | t))(1 − p),
Pr[(1, 0, −1) | (0, t, 0)] = (1 − Pr(T > t + 1 | t))p.

s = (0, t, 1). The analysis is very similar to the case of s = (0, t, 0), except that when the update arrives, the receiver's estimate will flip. Hence,
Pr[(0, t + 1, 1) | (0, t, 1)] = Pr(T > t + 1 | t)(1 − p),
Pr[(1, t + 1, 1) | (0, t, 1)] = Pr(T > t + 1 | t)p,
Pr[(0, 0, −1) | (0, t, 1)] = (1 − Pr(T > t + 1 | t))p,
Pr[(1, 0, −1) | (0, t, 1)] = (1 − Pr(T > t + 1 | t))(1 − p).

s = (∆, 0, −1) where ∆ > 0.
In this case, the analysis is very similar to the case of s = (0, 0, −1), except that the receiver's estimate is incorrect at state s and, if the decision is made to transmit, the transmitted update is different from the receiver's estimate. Therefore, the details are omitted here. Hence,
Pr[(∆ + 1, 0, −1) | (∆, 0, −1), a = 0] = 1 − p,
Pr[(0, 0, −1) | (∆, 0, −1), a = 0] = p,
Pr[(∆ + 1, 1, 1) | (∆, 0, −1), a = 1] = Pr(T > 1 | 0)(1 − p),
Pr[(0, 1, 1) | (∆, 0, −1), a = 1] = Pr(T > 1 | 0)p,
Pr[(∆ + 1, 0, −1) | (∆, 0, −1), a = 1] = (1 − Pr(T > 1 | 0))p,
Pr[(0, 0, −1) | (∆, 0, −1), a = 1] = (1 − Pr(T > 1 | 0))(1 − p).

s = (∆, t, 0) where ∆ > 0.
The analysis is very similar to the case of s = (0, t, 0) except that the receiver's estimate is incorrect at state s. Hence,

Pr[(∆ + 1, t + 1, 0) | (∆, t, 0)] = Pr(T > t + 1 | t)(1 − p).
Pr[(0, t + 1, 0) | (∆, t, 0)] = Pr(T > t + 1 | t)p.
Pr[(∆ + 1, 0, −1) | (∆, t, 0)] = (1 − Pr(T > t + 1 | t))(1 − p).

January 18, 2023 DRAFT

Pr[(0, 0, −1) | (∆, t, 0)] = (1 − Pr(T > t + 1 | t))p.

• s = (∆, t, 1) where ∆ > 0. The analysis is very similar to the case of s = (∆, t, 0) except that the transmitted update differs from the receiver's estimate. Hence,

Pr[(∆ + 1, t + 1, 1) | (∆, t, 1)] = Pr(T > t + 1 | t)(1 − p).
Pr[(0, t + 1, 1) | (∆, t, 1)] = Pr(T > t + 1 | t)p.
Pr[(∆ + 1, 0, −1) | (∆, t, 1)] = (1 − Pr(T > t + 1 | t))p.
Pr[(0, 0, −1) | (∆, t, 1)] = (1 − Pr(T > t + 1 | t))(1 − p).

Combining the above cases, we have fully characterized the state transition probabilities.

Remark 9. Note that the transitions that are not discussed above happen with probability zero.

APPENDIX B
PROOF OF LEMMA 1

We recall that P^t_{∆,∆′}(1) is the probability that action a at state s = (∆, 0, −1) will lead to state s′ = (∆′, 0, −1), given that the transmission takes t time slots. With this in mind, we first distinguish between different values of ∆. When ∆ = 0, the transmitted update is the same as the receiver's estimate. Hence, the receiver's estimate will not change due to receiving the transmitted update.
Moreover, we recall that AoII will either increase by one or decrease to zero. Hence, ∆′ ∈ {0, 1, ..., t}. Then, we further divide our discussion into the following cases.

– ∆′ = 0 happens when the receiver's estimate is correct as a result of receiving the update. Hence, the probability of this happening is p(t).
– ∆′ = k ∈ {1, ..., t} happens when the receiver's estimate is correct at the (t − k)th time slot after the transmission, which happens with probability p(t−k).
Then, the estimate remains incorrect for the remainder of the transmission time. This happens when the source first changes state, then remains in the same state throughout the rest of the transmission. Hence, the probability of this happening is p(1 − p)^{k−1}. Combining together, ∆′ = k happens with probability p(t−k)p(1 − p)^{k−1}.

Combining together, we have

P^t_{0,∆′}(1) =
    p(t),                     ∆′ = 0,
    p(t−k) p(1 − p)^{k−1},    1 ≤ ∆′ = k ≤ t,
    0,                        otherwise.

When ∆ > 0, the transmitted update is different from the receiver's estimate. Hence, the receiver's estimate will flip as a result of receiving the transmitted update. Moreover, we know ∆′ ∈ {0, 1, ..., t − 1, ∆ + t}. Hence, we further distinguish between the following cases.

– ∆′ = 0 happens in the same case as discussed in the case of ∆ = 0. Hence, the estimate is correct with probability p(t).
– ∆′ = 1 happens when the estimate is correct at the (t − 1)th time slot after the transmission, which happens with probability 1 − p(t−1). Then, the estimate becomes incorrect as a result of receiving the update. Since the estimate flips upon the arrival of the transmitted update, this happens when the source remains in the same state. Hence, the probability of this happening is 1 − p. Combining together, ∆′ = 1 happens with probability (1 − p(t−1))(1 − p).
– ∆′ = k ∈ {2, ..., t − 1} happens when the estimate is correct at the (t − k)th time slot after the transmission, which happens with probability 1 − p(t−k). Then, the estimate remains incorrect for the remainder of the transmission time. This happens when the dynamic source behaves in the following way during the remaining transmission time: it first changes state, then remains in the same state, and finally changes state again when the update arrives. This happens with probability p^2(1 − p)^{k−2}. Hence, ∆′ = k happens with probability (1 − p(t−k))p^2(1 − p)^{k−2}.
– ∆′ = ∆ + t happens when the estimate is incorrect throughout the transmission.
Since the estimate will flip when the update is received, this happens when the source stays in the same state until the update arrives. Hence, ∆′ = ∆ + t happens with probability p(1 − p)^{t−1}.

Combining together, for ∆ > 0, we have

P^t_{∆,∆′}(1) =
    p(t),                          ∆′ = 0,
    (1 − p(t−1))(1 − p),           ∆′ = 1,
    (1 − p(t−k))p^2(1 − p)^{k−2},  2 ≤ ∆′ = k ≤ t − 1,
    p(1 − p)^{t−1},                ∆′ = ∆ + t,
    0,                             otherwise.

By analyzing the above expressions, we can easily conclude that P^t_{∆,∆′}(1) possesses the following properties.

• P^t_{∆,0}(1) and P^t_{∆,∆+t}(1) are both independent of ∆.
• P^t_{∆,∆′}(1) is independent of ∆ when ∆ > 0 and 0 ≤ ∆′ ≤ t − 1.
• P^t_{∆,∆′}(1) = 0 when ∆′ > ∆ + t or when t − 1 < ∆′ < ∆ + t.

Leveraging the above properties, we can prove the second part of the lemma.
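The piecewise expression above lends itself to a quick numerical sanity check. The sketch below is illustrative, not the paper's code: the helper p_of is an assumed stand-in for the paper's p(t), taken here to be the t-step return probability of a symmetric two-state Markov source that flips with probability p each slot. Only the structural properties — independence of ∆ and the zero pattern — are exercised, not the paper's exact source model.

```python
# Sanity-check sketch (not the paper's code) for the piecewise expression
# P^t_{Delta,Delta'}(1) derived above. p_of(t) stands in for the paper's
# p(t); we ASSUME a symmetric two-state Markov source that flips with
# probability p each slot, so p_of(t) is its t-step return probability.

def p_of(t, p=0.3):
    # t-step return probability of the assumed symmetric binary source.
    return (1 + (1 - 2 * p) ** t) / 2

def P_lemma1(delta, delta_p, t, p=0.3):
    """Piecewise expression for P^t_{delta,delta'}(1) from Appendix B."""
    if delta == 0:
        if delta_p == 0:
            return p_of(t, p)
        if 1 <= delta_p <= t:
            k = delta_p
            return p_of(t - k, p) * p * (1 - p) ** (k - 1)
        return 0.0
    # delta > 0: the receiver's estimate flips when the update arrives.
    if delta_p == 0:
        return p_of(t, p)
    if delta_p == 1:
        return (1 - p_of(t - 1, p)) * (1 - p)
    if 2 <= delta_p <= t - 1:
        k = delta_p
        return (1 - p_of(t - k, p)) * p ** 2 * (1 - p) ** (k - 2)
    if delta_p == delta + t:
        return p * (1 - p) ** (t - 1)
    return 0.0

# Property checks: independence of delta on the common support, and the
# zero pattern for t - 1 < delta' < delta + t.
t = 5
assert all(P_lemma1(2, d, t) == P_lemma1(9, d, t) for d in range(t))
assert all(P_lemma1(3, d, t) == 0.0 for d in range(t, 3 + t))
```

Under the assumed p_of, the rows also normalize: for ∆ > 0 the entries over the support {0, ..., t − 1, ∆ + t} sum to 1, which is a useful consistency check when re-deriving the expression.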
The equivalent expression can be obtained easily, so the details are omitted. In the following, we focus on proving the properties of P_{∆,∆′}(a).

Property 1: When ∆′ = 0, P_{∆,0}(1) = Σ_{t=1}^{t_max} p_t P^t_{∆,0}(1) for any ∆ ≥ 0. Since P^t_{∆,0}(1) is independent of ∆, property 1 holds in this case. Then, we consider the case of 1 ≤ ∆′ ≤ t_max − 1 and ∆ ≥ ∆′. In this case,

P_{∆,∆′}(1) = Σ_{t=∆′}^{t_max} p_t P^t_{∆,∆′}(1),

where P^t_{∆,∆′}(1) is independent of ∆. Hence, P_{∆,∆′}(1) is independent of ∆. Combining together, property 1 holds.

Property 2: We notice that, when ∆′ ≥ t_max, P_{∆,∆′}(1) = p_{t′} P^{t′}_{∆,∆′}(1) = p_{t′} P^{t′}_{∆,∆+t′}(1). We recall that P^{t′}_{∆,∆+t′}(1) is independent of ∆.
Then, we can conclude that P_{∆,∆′}(1) depends only on t′. Thus, property 2 holds.

Property 3: The equivalent expression in the corollary indicates that the property holds when ∆′ > ∆ + t_max. In the case of t_max − 1 < ∆′ < ∆ + 1, we have P_{∆,∆′}(1) = p_{t′} P^{t′}_{∆,∆′}(1), where t′ ≤ 0. By definition, P_{∆,∆′}(1) = 0. Hence, property 3 holds.

APPENDIX C
PROOF OF LEMMA 2

The proof is similar to that of Lemma 1. We first derive the expressions of P^t_{∆,∆′}(1) and P^{t+}_{∆,∆′}(1). To this end, we start with the case of ∆ = 0. In this case, the transmitted update is the same as the receiver's estimate.
With this in mind, we distinguish between different values of t. When 1 ≤ t < t_max, the update is delivered after t time slots. Hence, ∆′ ∈ {0, 1, ..., t}. Then, we further distinguish between different values of ∆′.

– ∆′ = 0 in the case where the receiver's estimate is correct when the update is delivered. Hence, ∆′ = 0 happens with probability p(t).
– ∆′ = k ∈ {1, 2, ..., t} when the receiver's estimate is correct at the (t − k)th time slot after the transmission occurs.
Then, the source flips the state and remains in the same state for the remainder of the transmission. Hence, ∆′ = k ∈ {1, 2, ..., t} happens with probability p(t−k)p(1 − p)^{k−1}.

When t = t_max, the update either arrives or is discarded. In this case, ∆′ ∈ {0, 1, ..., t_max}. We recall that the update is the same as the receiver's estimate. Hence, the receiver's estimate will not change in either case. Consequently, P^{t_max}_{0,∆′}(1) = P^{t+}_{0,∆′}(1), which can be obtained by setting the t in the above case to t_max.
Combining together, for each 1 ≤ t ≤ t_max,

P^t_{0,∆′}(1) =
    p(t),                     ∆′ = 0,
    p(t−k) p(1 − p)^{k−1},    1 ≤ ∆′ = k ≤ t,
    0,                        otherwise.

P^{t+}_{0,∆′}(1) = P^{t_max}_{0,∆′}(1).

Then, we consider the case of ∆ > 0. We notice that, in this case, the receiver's estimate will flip upon receiving the update. Then, we distinguish between different values of t. When 1 ≤ t < t_max, the update is delivered after t time slots, and the receiver's estimate will flip. Hence, ∆′ ∈ {0, 1, ..., t − 1, ∆ + t}. Then, we further distinguish between different values of ∆′.
– ∆′ = 0 in the case where the receiver's estimate is correct when the update is received. Hence, ∆′ = 0 happens with probability p(t).
– ∆′ = 1 when the receiver's estimate is correct at the (t − 1)th time slot after the transmission starts and becomes incorrect when the update arrives. Hence, ∆′ = 1 happens with probability (1 − p(t−1))(1 − p).
– ∆′ = k ∈ {2, 3, ..., t − 1} when the receiver's estimate is correct at the (t − k)th time slot after the transmission starts. Then, the source changes state and remains in the same state. Finally, at the time slot when the update arrives, the source flips state again.
Hence, ∆′ = k ∈ {2, 3, ..., t − 1} happens with probability (1 − p(t−k))p^2(1 − p)^{k−2}.
– ∆′ = ∆ + t when the estimate is incorrect throughout the transmission. We recall that the receiver's estimate will flip when the update arrives. Hence, ∆′ = ∆ + t when the source remains in the same state until the update arrives, which happens with probability p(1 − p)^{t−1}.

When t = t_max and the transmitted update is delivered, the receiver's estimate flips. In this case, ∆′ ∈ {0, 1, ..., t_max − 1, ∆ + t_max}.
Hence, P^{t_max}_{∆,∆′}(1) can be obtained by setting the t in the above case to t_max. When t = t_max and the transmitted update is discarded, the receiver's estimate remains the same. In this case, ∆′ ∈ {0, 1, ..., t_max − 1, ∆ + t_max}. Then, we further divide our discussion into the following cases.

– ∆′ = 0 when the receiver's estimate is correct at the t_max-th time slot after the transmission starts, which happens when the state of the source at the time slot the update is discarded is different from that when the transmission started. Hence, ∆′ = 0 happens with probability 1 − p(t_max).
– ∆′ = k ∈ {1, 2, ..., t_max − 1} when the receiver's estimate is correct at the (t_max − k)th time slot after the transmission starts. Then, the source changes state and remains in the same state for the remainder of the transmission. Hence, ∆′ = k ∈ {1, 2, ..., t_max − 1} happens with probability (1 − p(t_max−k))p(1 − p)^{k−1}.
– ∆′ = ∆ + t_max when the source remains in the same state throughout the transmission. Combining with the source dynamics, we can conclude that ∆′ = ∆ + t_max happens with probability (1 − p)^{t_max}.

Combining together, for ∆ > 0 and each 1 ≤ t ≤ t_max,

P^t_{∆,∆′}(1) =
    p(t),                          ∆′ = 0,
    (1 − p(t−1))(1 − p),           ∆′ = 1,
    (1 − p(t−k))p^2(1 − p)^{k−2},  2 ≤ ∆′ = k ≤ t − 1,
    p(1 − p)^{t−1},                ∆′ = ∆ + t,
    0,                             otherwise.
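The deadline (discarded-update) case admits the same kind of numerical sanity check. As before, this is an illustrative sketch rather than the paper's code: p_of is an assumed stand-in for p(t), taken to be the t-step return probability of a symmetric binary Markov source with flip probability p.

```python
# Sketch (illustrative assumptions, see lead-in): P^{t+}_{Delta,Delta'}(1),
# the transition probabilities when the update reaches the deadline t_max
# and is discarded, for Delta > 0.

def p_of(t, p=0.3):
    # Assumed t-step return probability of a symmetric binary source.
    return (1 + (1 - 2 * p) ** t) / 2

def P_plus(delta, delta_p, tmax, p=0.3):
    """Piecewise expression for P^{t+}_{delta,delta'}(1), delta > 0."""
    if delta_p == 0:
        return 1 - p_of(tmax, p)
    if 1 <= delta_p <= tmax - 1:
        k = delta_p
        return (1 - p_of(tmax - k, p)) * p * (1 - p) ** (k - 1)
    if delta_p == delta + tmax:
        return (1 - p) ** tmax
    return 0.0

# The stated properties: independence of delta on 0 <= delta' <= tmax - 1,
# and zeros for tmax - 1 < delta' < delta + tmax.
tmax = 4
assert all(P_plus(1, d, tmax) == P_plus(6, d, tmax) for d in range(tmax))
assert all(P_plus(5, d, tmax) == 0.0 for d in range(tmax, 5 + tmax))
```

Under the assumed p_of, the row over the support {0, ..., t_max − 1, ∆ + t_max} again sums to 1.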
P^{t+}_{∆,∆′}(1) =
    1 − p(t_max),                     ∆′ = 0,
    (1 − p(t_max−k))p(1 − p)^{k−1},   1 ≤ ∆′ = k ≤ t_max − 1,
    (1 − p)^{t_max},                  ∆′ = ∆ + t_max,
    0,                                otherwise.

By analyzing the above expressions, we can easily conclude that P^t_{∆,∆′}(1) and P^{t+}_{∆,∆′}(1) possess the following properties.

• P^t_{∆,∆+t}(1) and P^{t+}_{∆,∆+t_max}(1) are independent of ∆ when ∆ > 0.
• P^t_{∆,∆′}(1) is independent of ∆ when ∆ > 0 and 0 ≤ ∆′ ≤ t − 1.
• P^t_{∆,∆′}(1) = 0 when ∆ > 0 and t − 1 < ∆′ < ∆ + t.
• P^{t+}_{∆,∆′}(1) is independent of ∆ when ∆ > 0 and 0 ≤ ∆′ ≤ t_max − 1.
• P^{t+}_{∆,∆′}(1) = 0 when ∆ > 0 and t_max − 1 < ∆′ < ∆ + t_max.

Leveraging the properties above, we proceed with proving the second part of the lemma. The equivalent expression can be obtained easily by analyzing (8).
Hence, the details are omitted. In the following, we focus on proving the stated properties.

Property 1: We notice that, when $0 \le \Delta' \le t_{\max}-1$ and $\Delta \ge \max\{1, \Delta'\}$,
$$
P_{\Delta,\Delta'}(1) = \sum_{t=\Delta'}^{t_{\max}} p_t P^t_{\Delta,\Delta'}(1) + p_{t^+} P^{t+}_{\Delta,\Delta'}(1).
$$
Then, we divide the discussion into the following two cases.
– $\Delta \ge \max\{1, \Delta'\}$ implies that $\Delta > 0$ and $\Delta' < \Delta + t_{\max}$. Hence, $P^{t+}_{\Delta,\Delta'}(1)$ is independent of $\Delta$.
– $\Delta \ge \max\{1, \Delta'\}$ implies that $\Delta > 0$ and $\Delta' < \Delta + t$. Hence, $P^t_{\Delta,\Delta'}(1)$ is independent of $\Delta$ for any feasible $t$.
Combining the two cases, we can conclude that Property 1 holds.
Property 2: We notice that, when $\Delta' \ge t_{\max}$,
$$
P_{\Delta,\Delta'}(1) = p_{t'} P^{t'}_{\Delta,\Delta'}(1) + p_{t^+} P^{t+}_{\Delta,\Delta'}(1).
$$
Then, we divide the discussion into the following two cases.
– Since $t' = \Delta' - \Delta$, $P^{t'}_{\Delta,\Delta'}(1) = P^{t'}_{\Delta,\Delta+t'}(1)$. Then, we know that $P^{t'}_{\Delta,\Delta'}(1)$ is independent of $\Delta > 0$ when $t' > 0$, and $P^{t'}_{\Delta,\Delta'}(1) = 0$ when $t' \le 0$ by definition. Hence, $P^{t'}_{\Delta,\Delta'}(1)$ depends only on $t'$.
– When $\Delta' \ge t_{\max}$ and $\Delta' \ne \Delta + t_{\max}$, $P^{t+}_{\Delta,\Delta'}(1) = 0$ for $\Delta > 0$. Also, $P^{t+}_{\Delta,\Delta'}(1)$ is independent of $\Delta > 0$ when $\Delta' = \Delta + t_{\max}$. Hence, $P^{t+}_{\Delta,\Delta'}(1)$ depends only on $t'$.
Combining the two cases, Property 2 holds.

Property 3: When $\Delta' > \Delta + t_{\max}$, the property holds trivially.
When $t_{\max}-1 < \Delta' < \Delta + 1$,
$$
P_{\Delta,\Delta'}(1) = p_{t'} P^{t'}_{\Delta,\Delta'}(1) + p_{t^+} P^{t+}_{\Delta,\Delta'}(1),
$$
where $t' \le 0$. Then, by definition, $P^{t'}_{\Delta,\Delta'}(1) = 0$. Moreover, we recall that $t_{\max} > 1$, which implies that $P^{t+}_{\Delta,\Delta'}(1) = 0$. Hence, Property 3 holds.

APPENDIX D
PROOF OF THEOREM 1

We recall that $\pi_\Delta$ satisfies (6) and (9). Then, plugging in the probabilities yields the following system of linear equations.
$$
\pi_0 = (1-p)\pi_0 + p\sum_{i=1}^{\tau-1}\pi_i + \sum_{i=\tau}^{\infty} P_{i,0}(1)\pi_i = (1-p)\pi_0 + p\sum_{i=1}^{\tau-1}\pi_i + P_{1,0}(1)\sum_{i=\tau}^{\infty}\pi_i. \quad (17)
$$
$$
\pi_1 = p\pi_0 + \sum_{i=\tau}^{\infty} P_{i,1}(1)\pi_i = p\pi_0 + P_{1,1}(1)\sum_{i=\tau}^{\infty}\pi_i. \quad (18)
$$
For each $2 \le \Delta \le t_{\max}-1$,
$$
\pi_\Delta =
\begin{cases}
(1-p)\pi_{\Delta-1} + P_{\tau,\Delta}(1)\sum_{i=\tau}^{\infty}\pi_i & \Delta-1 < \tau,\\
\sum_{i=\tau}^{\Delta-1} P_{i,\Delta}(1)\pi_i + P_{\Delta,\Delta}(1)\sum_{i=\Delta}^{\infty}\pi_i & \Delta-1 \ge \tau.
\end{cases} \quad (19)
$$
For each $t_{\max} \le \Delta \le \omega-1$,
$$
\pi_\Delta =
\begin{cases}
(1-p)\pi_{\Delta-1} & \Delta-1 < \tau,\\
\sum_{i=\tau}^{\Delta-1} P_{i,\Delta}(1)\pi_i & \Delta-1 \ge \tau.
\end{cases}
$$
For each $\Delta \ge \omega$,
$$
\pi_\Delta = \sum_{i=\Delta-t_{\max}}^{\Delta-1} P_{i,\Delta}(1)\pi_i. \quad (20)
$$
$$
\sum_{i=0}^{\tau-1}\pi_i + \mathbb{E}T\sum_{i=\tau}^{\infty}\pi_i = 1.
$$
Note that we can pull the state transition probabilities in (17), (18), and (19) out of the summations due to Property 1 in Lemma 1 and Lemma 2. Then, we sum (20) over $\Delta$ from $\omega$ to $\infty$:
$$
\sum_{i=\omega}^{\infty}\pi_i = \sum_{i=\omega}^{\infty}\sum_{k=i-t_{\max}}^{i-1} P_{k,i}(1)\pi_k. \quad (21)
$$
We delve into the right-hand side (RHS) of (21). To this end, we expand the first summation, which yields
$$
\text{RHS} = \sum_{k=\tau+1}^{\omega-1} P_{k,\omega}(1)\pi_k + \sum_{k=\tau+2}^{\omega} P_{k,\omega+1}(1)\pi_k + \cdots + \sum_{k=\omega-1}^{\omega+t_{\max}-2} P_{k,\omega+t_{\max}-1}(1)\pi_k + \sum_{k=\omega}^{\omega+t_{\max}-1} P_{k,\omega+t_{\max}}(1)\pi_k + \cdots
$$
Then, we rearrange the summation.
$$
\text{RHS} = P_{\tau+1,\omega}(1)\pi_{\tau+1} + \sum_{k=1}^{2} P_{\tau+2,\omega+k-1}(1)\pi_{\tau+2} + \cdots + \sum_{k=1}^{t_{\max}} P_{\omega-1,\omega+k-1}(1)\pi_{\omega-1} + \sum_{k=1}^{t_{\max}} P_{\omega,\omega+k}(1)\pi_\omega + \sum_{k=1}^{t_{\max}} P_{\omega+1,\omega+k+1}(1)\pi_{\omega+1} + \cdots
$$
Leveraging Property 2 in Lemma 1 and Lemma 2, we have
$$
\text{RHS} = \sum_{i=\tau+1}^{\omega-1}\left(\sum_{k=\tau+1}^{i} P_{i,t_{\max}+k}(1)\right)\pi_i + \sum_{i=1}^{t_{\max}}\left(P_{\omega,\omega+i}(1)\right)\left(\sum_{k=\omega}^{\infty}\pi_k\right).
$$
We define $\Pi \triangleq \sum_{i=\omega}^{\infty}\pi_i$. Then, equation (21) becomes the following.
$$
\Pi = \sum_{i=\tau+1}^{\omega-1}\left(\sum_{k=\tau+1}^{i} P_{i,t_{\max}+k}(1)\right)\pi_i + \sum_{i=1}^{t_{\max}}\left(P_{\omega,\omega+i}(1)\right)\Pi. \quad (22)
$$
Finally, replacing (20) with (22) and applying the definition of $\Pi$ yield a system of linear equations of finite size, as presented in the theorem.

APPENDIX E
PROOF OF COROLLARY 1

We start with $\tau = 0$. In this case, $\omega = t_{\max}+1$ and the system of linear equations becomes the following.
$$
\pi_\Delta = \sum_{i=0}^{\infty} P_{i,\Delta}(1)\pi_i =
\begin{cases}
P_{0,0}(1)\pi_0 + P_{1,0}(1)\sum_{i=1}^{\infty}\pi_i & \Delta = 0,\\
\sum_{i=0}^{\Delta-1} P_{i,\Delta}(1)\pi_i + P_{\Delta,\Delta}(1)\sum_{i=\Delta}^{\infty}\pi_i & 1 \le \Delta \le t_{\max}.
\end{cases} \quad (23)
$$
$$
\Pi = \sum_{i=1}^{t_{\max}}\left(\sum_{k=1}^{i} P_{i,t_{\max}+k}(1)\right)\pi_i + \sum_{i=1}^{t_{\max}} P_{t_{\max}+1,t_{\max}+1+i}(1)\,\Pi. \quad (24)
$$
$$
\mathbb{E}T\sum_{i=0}^{\infty}\pi_i = 1. \quad (25)
$$
We first combine (23) and (25), which yields
$$
\pi_\Delta =
\begin{cases}
P_{0,0}(1)\pi_0 + P_{1,0}(1)\left(\frac{1}{\mathbb{E}T} - \pi_0\right) & \Delta = 0,\\
\sum_{i=0}^{\Delta-1} P_{i,\Delta}(1)\pi_i + P_{\Delta,\Delta}(1)\left(\frac{1}{\mathbb{E}T} - \sum_{i=0}^{\Delta-1}\pi_i\right) & 1 \le \Delta \le t_{\max}.
\end{cases}
$$
Then, we have
$$
\pi_0 = \frac{P_{1,0}(1)}{\mathbb{E}T\left[1 - P_{0,0}(1) + P_{1,0}(1)\right]}.
$$
According to (24), we obtain
$$
\Pi = \frac{\sum_{i=1}^{t_{\max}}\left(\sum_{k=1}^{i} P_{i,t_{\max}+k}(1)\right)\pi_i}{1 - \sum_{i=1}^{t_{\max}} P_{t_{\max}+1,t_{\max}+1+i}(1)}.
$$
Then, we consider the case of $\tau = 1$. In this case, $\omega = t_{\max}+2$ and the system of linear equations reduces to the following.
$$
\pi_0 = (1-p)\pi_0 + P_{1,0}(1)\sum_{i=1}^{\infty}\pi_i. \quad (26)
$$
$$
\pi_1 = p\pi_0 + P_{1,1}(1)\sum_{i=1}^{\infty}\pi_i.
$$
$$
\pi_\Delta = \sum_{i=1}^{\Delta-1} P_{i,\Delta}(1)\pi_i + P_{\Delta,\Delta}(1)\sum_{i=\Delta}^{\infty}\pi_i, \quad 2 \le \Delta \le t_{\max}-1.
$$
$$
\pi_\Delta = \sum_{i=1}^{\Delta-1} P_{i,\Delta}(1)\pi_i, \quad t_{\max} \le \Delta \le t_{\max}+1. \quad (27)
$$
$$
\Pi = \sum_{i=2}^{t_{\max}+1}\left(\sum_{k=2}^{i} P_{i,t_{\max}+k}(1)\right)\pi_i + \sum_{i=1}^{t_{\max}} P_{t_{\max}+2,t_{\max}+2+i}(1)\,\Pi. \quad (28)
$$
$$
\pi_0 + \mathbb{E}T\sum_{i=1}^{\infty}\pi_i = 1. \quad (29)
$$
We first combine (26) and (29), which yields
$$
\pi_0 = (1-p)\pi_0 + P_{1,0}(1)\,\frac{1-\pi_0}{\mathbb{E}T}.
$$
Hence, we have
$$
\pi_0 = \frac{P_{1,0}(1)}{p\,\mathbb{E}T + P_{1,0}(1)}.
$$
Similarly,
$$
\pi_1 = \frac{p\,P_{1,0}(1) + p\,P_{1,1}(1)}{p\,\mathbb{E}T + P_{1,0}(1)}.
$$
For each $2 \le \Delta \le t_{\max}-1$,
$$
\pi_\Delta = \sum_{i=1}^{\Delta-1} P_{i,\Delta}(1)\pi_i + P_{\Delta,\Delta}(1)\left(\frac{1-\pi_0}{\mathbb{E}T} - \sum_{i=1}^{\Delta-1}\pi_i\right). \quad (30)
$$
According to Property 3 in Lemma 1 and Lemma 2, we know that $P_{\Delta,\Delta}(1) = 0$ when $t_{\max} \le \Delta \le t_{\max}+1$.
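As a numerical aside (a sketch, not from the paper): the closed form for $\pi_0$ in the $\tau = 1$ case can be checked against the balance equation obtained from (26) and (29), using arbitrary placeholder values for $p$, $P_{1,0}(1)$, and $\mathbb{E}T$:

```python
def pi0_tau1(p, P10, ET):
    # Closed form pi_0 = P_{1,0}(1) / (p*ET + P_{1,0}(1)) from the corollary.
    return P10 / (p * ET + P10)

# Fixed-point check: pi_0 = (1-p)*pi_0 + P_{1,0}(1) * (1 - pi_0) / ET,
# where (1 - pi_0)/ET equals sum_{i>=1} pi_i by the normalization (29).
p, P10, ET = 0.3, 0.12, 2.5   # placeholder parameter values
pi0 = pi0_tau1(p, P10, ET)
assert abs(pi0 - ((1 - p) * pi0 + P10 * (1 - pi0) / ET)) < 1e-12
```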
Hence, we can combine (27) and (30), which yields
$$
\pi_\Delta = \sum_{i=1}^{\Delta-1} P_{i,\Delta}(1)\pi_i + P_{\Delta,\Delta}(1)\left(\frac{1-\pi_0}{\mathbb{E}T} - \sum_{i=1}^{\Delta-1}\pi_i\right), \quad 2 \le \Delta \le t_{\max}+1.
$$
Finally, according to (28), we obtain
$$
\Pi = \frac{\sum_{i=2}^{t_{\max}+1}\left(\sum_{k=2}^{i} P_{i,t_{\max}+k}(1)\right)\pi_i}{1 - \sum_{i=1}^{t_{\max}} P_{t_{\max}+2,t_{\max}+2+i}(1)}.
$$

APPENDIX F
PROOF OF LEMMA 3

We recall that $C_k(\Delta)$ is defined as the expected AoII $k$ time slots after the transmission starts at state $(\Delta, 0, -1)$, given that the transmission is still in progress. With this in mind, we start with the case of $\Delta = 0$. As the AoII either increases by one or drops to zero, we know $C_k(0) \in \{0, \ldots, k\}$. Then, we distinguish between the following cases.
– $C_k(0) = 0$ when the receiver's estimate is correct $k$ time slots after the transmission starts.
Since $\Delta = 0$, we can conclude that $C_k(0) = 0$ happens with probability $p^{(k)}$.
– $C_k(0) = h$, where $1 \le h \le k$, happens when the receiver's estimate is correct at the $(k-h)$-th time slot after the transmission starts, and the source then flips its state and stays there for the remaining $h-1$ time slots. Hence, $C_k(0) = h$, where $1 \le h \le k$, happens with probability $p^{(k-h)}p(1-p)^{h-1}$.
Combining the cases, we obtain
$$
C_k(0) = \sum_{h=1}^{k} h\,p^{(k-h)}p(1-p)^{h-1}.
$$
Then, we consider the case of $\Delta > 0$. In this case, the transmission starts when the receiver's estimate is incorrect, and $C_k(\Delta) \in \{0, 1, \ldots, k-1, \Delta+k\}$. Then, we distinguish between the following cases.
– $C_k(\Delta) = 0$ when the receiver's estimate is correct at the $k$-th time slot after the transmission starts, which happens with probability $1 - p^{(k)}$.
– $C_k(\Delta) = h$, where $h \in \{1, 2, \ldots, k-1\}$, happens when the receiver's estimate is correct at the $(k-h)$-th slot after the transmission starts, and the source then flips its state and stays there for the remaining $h-1$ time slots. Hence, $C_k(\Delta) = h$, where $h \in \{1, 2, \ldots, k-1\}$, happens with probability $(1 - p^{(k-h)})p(1-p)^{h-1}$.
– $C_k(\Delta) = \Delta + k$ when the estimate at the receiver side is wrong throughout the $k$ time slots after the transmission starts.
Since $\Delta > 0$ and the receiver's estimate will not change, $C_k(\Delta) = \Delta + k$ happens with probability $(1-p)^k$. Combining the cases, we obtain
$$
C_k(\Delta) = \sum_{h=1}^{k-1} h(1 - p^{(k-h)})p(1-p)^{h-1} + (\Delta + k)(1-p)^k, \quad \Delta > 0.
$$

APPENDIX G
PROOF OF THEOREM 2

We recall that when $\tau = \infty$, the transmitter never initiates any transmission. Hence, the receiver's estimate never changes. Without loss of generality, we assume the receiver's estimate $\hat{X}_k = 0$ for all $k$. The first step in calculating the expected AoII achieved by the threshold policy with $\tau = \infty$ is to obtain the stationary distribution of the induced DTMC. To this end, we know that $\pi_\Delta$ satisfies the following equations.
$$
\pi_0 = (1-p)\pi_0 + p\sum_{i=1}^{\infty}\pi_i. \quad (31)
$$
$$
\pi_1 = p\pi_0.
$$
$$
\pi_\Delta = (1-p)\pi_{\Delta-1}, \quad \Delta \ge 2.
$$
$$
\sum_{i=0}^{\infty}\pi_i = 1. \quad (32)
$$
Combining (31) and (32) yields $\pi_0 = (1-p)\pi_0 + p(1-\pi_0)$. Hence, $\pi_0 = \frac{1}{2}$. Then, we can get
$$
\pi_1 = \frac{p}{2}, \qquad \pi_\Delta = (1-p)^{\Delta-1}\pi_1 = \frac{p(1-p)^{\Delta-1}}{2}, \quad \Delta \ge 2.
$$
Putting these together, we have
$$
\pi_0 = \frac{1}{2}, \qquad \pi_\Delta = \frac{p(1-p)^{\Delta-1}}{2}, \quad \Delta \ge 1.
$$
Since the transmitter never makes any transmission attempt, the cost of being at state $(\Delta, 0, -1)$ is simply $\Delta$ itself. Hence, the expected AoII is
$$
\bar{\Delta}_\infty = \sum_{\Delta=1}^{\infty} \Delta\,\frac{p(1-p)^{\Delta-1}}{2} = \frac{1}{2p}.
$$

APPENDIX H
PROOF OF THEOREM 3

We recall that, for $\Delta \ge \omega$, $\pi_\Delta$ satisfies
$$
\pi_\Delta = \sum_{i=\Delta-t_{\max}}^{\Delta-1} P_{i,\Delta}(1)\pi_i = \sum_{i=1}^{t_{\max}} P_{i-t_{\max}+\Delta-1,\Delta}(1)\pi_{i-t_{\max}+\Delta-1}, \quad \Delta \ge \omega.
$$
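The $\tau = \infty$ result of Appendix G lends itself to a quick numerical sanity check; the sketch below (not part of the paper) evaluates a truncated version of the series for $\bar{\Delta}_\infty$ and also simulates the frozen-estimate source directly:

```python
import random

def aoii_series(p, n_terms=5000):
    # Truncated sum of Delta * p(1-p)^(Delta-1) / 2; the closed form is 1/(2p).
    return sum(d * p * (1 - p) ** (d - 1) / 2 for d in range(1, n_terms + 1))

def aoii_simulated(p, steps=2_000_000, seed=7):
    # Symmetric two-state source flipping with probability p each slot;
    # the estimate is frozen at 0 (no transmissions), so AoII grows by one
    # whenever the source differs from the estimate and resets otherwise.
    rng = random.Random(seed)
    x, delta, total = 0, 0, 0
    for _ in range(steps):
        if rng.random() < p:
            x = 1 - x
        delta = delta + 1 if x != 0 else 0
        total += delta
    return total / steps

p = 0.3
assert abs(aoii_series(p) - 1 / (2 * p)) < 1e-9
assert abs(aoii_simulated(p) - 1 / (2 * p)) < 0.1
```

Both the analytic series and the time-average of the simulated chain agree with $\bar{\Delta}_\infty = \frac{1}{2p}$.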
We first focus on the system under Assumption 1. We know from Lemma 1 that $P_{\Delta,\Delta'}(1) = p_{t'}P^{t'}_{\Delta,\Delta'}(1)$, where $t' = \Delta' - \Delta$, when $\Delta' \ge \omega$. Hence,
$$
\pi_\Delta = \sum_{i=1}^{t_{\max}} p_{t_{\max}+1-i}\,P^{t_{\max}+1-i}_{i-t_{\max}+\Delta-1,\Delta}(1)\pi_{i-t_{\max}+\Delta-1}, \quad \Delta \ge \omega.
$$
Renaming the variables yields
$$
\pi_\Delta = \sum_{t=1}^{t_{\max}} p_t P^t_{\Delta-t,\Delta}(1)\pi_{\Delta-t}, \quad \Delta \ge \omega.
$$
To proceed, we define, for each $1 \le t \le t_{\max}$,
$$
\pi_{\Delta,t} \triangleq p_t P^t_{\Delta-t,\Delta}(1)\pi_{\Delta-t}, \quad \Delta \ge \omega. \quad (33)
$$
Note that $\sum_{t=1}^{t_{\max}} \pi_{\Delta,t} = \pi_\Delta$. Then, for a given $1 \le t \le t_{\max}$, we multiply both sides of (33) by $C(\Delta-t, 1)$ and sum over $\Delta$ from $\omega$ to $\infty$. Hence, we have
$$
\sum_{i=\omega}^{\infty} C(i-t, 1)\pi_{i,t} = \sum_{i=\omega}^{\infty} C(i-t, 1)\,p_t P^t_{i-t,i}(1)\pi_{i-t}. \quad (34)
$$
We define $\Delta'_t \triangleq C(\Delta, 1) - C(\Delta-t, 1)$, where $\Delta > t$.
Then, according to (11), we have
$$
\Delta'_t = \sum_{i=1}^{t_{\max}} p_i\left(C_i(\Delta, 1) - C_i(\Delta-t, 1)\right).
$$
According to Lemma 3, we have
$$
C_i(\Delta-t, 1) = \Delta - t + \sum_{h=1}^{i-1}\left(\sum_{k=1}^{h-1} k(1-p^{(h-k)})p(1-p)^{k-1} + (\Delta-t+h)(1-p)^h\right),
$$
$$
C_i(\Delta, 1) = \Delta + \sum_{h=1}^{i-1}\left(\sum_{k=1}^{h-1} k(1-p^{(h-k)})p(1-p)^{k-1} + (\Delta+h)(1-p)^h\right).
$$
Subtracting the two equations yields
$$
C_i(\Delta, 1) - C_i(\Delta-t, 1) = t + \sum_{h=1}^{i-1} t(1-p)^h = \frac{t - t(1-p)^i}{p}.
$$
Then, we have
$$
\Delta'_t = \sum_{i=1}^{t_{\max}} p_i\,\frac{t - t(1-p)^i}{p}.
$$
We notice that $\Delta'_t$ is independent of $\Delta$ when $\Delta > t$. Hence, (34) can be rewritten as
$$
\sum_{i=\omega}^{\infty}\left(C(i, 1) - \Delta'_t\right)\pi_{i,t} = \sum_{i=\omega-t}^{\infty} C(i, 1)\,p_t P^t_{i,i+t}(1)\pi_i.
$$
Then, we define $\Pi_t \triangleq \sum_{i=\omega}^{\infty}\pi_{i,t}$ and $\Sigma_t \triangleq \sum_{i=\omega}^{\infty} C(i, 1)\pi_{i,t}$. We notice that $P^t_{\Delta,\Delta+t}(1)$ is independent of $\Delta$ when $\Delta > 0$.
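The geometric-series step above, $t + \sum_{h=1}^{i-1} t(1-p)^h = \frac{t - t(1-p)^i}{p}$, can be verified numerically; a minimal sketch:

```python
def lhs(t, i, p):
    # t + sum_{h=1}^{i-1} t*(1-p)^h, i.e. t times a finite geometric series
    return t + sum(t * (1 - p) ** h for h in range(1, i))

def rhs(t, i, p):
    # closed form (t - t*(1-p)^i) / p
    return (t - t * (1 - p) ** i) / p

for t in (1, 3):
    for i in (1, 4, 9):
        assert abs(lhs(t, i, 0.25) - rhs(t, i, 0.25)) < 1e-12
```

Note that for $i = 1$ both sides reduce to $t$, matching the empty inner sum.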
Hence, we obtain
$$
\sum_{i=\omega}^{\infty} C(i, 1)\pi_{i,t} - \Delta'_t\sum_{i=\omega}^{\infty}\pi_{i,t} = p_t P^t_{1,1+t}(1)\sum_{i=\omega-t}^{\infty} C(i, 1)\pi_i.
$$
Plugging in the definitions yields
$$
\Sigma_t - \Delta'_t\Pi_t = p_t P^t_{1,1+t}(1)\left(\sum_{i=\omega-t}^{\omega-1} C(i, 1)\pi_i + \Sigma\right).
$$
Summing the above equation over $t$ from $1$ to $t_{\max}$ yields
$$
\sum_{t=1}^{t_{\max}}\left(\Sigma_t - \Delta'_t\Pi_t\right) = \sum_{t=1}^{t_{\max}}\left(p_t P^t_{1,1+t}(1)\left(\sum_{i=\omega-t}^{\omega-1} C(i, 1)\pi_i + \Sigma\right)\right).
$$
Rearranging the above equation yields
$$
\Sigma - \sum_{t=1}^{t_{\max}} \Delta'_t\Pi_t = \sum_{t=1}^{t_{\max}}\left(p_t P^t_{1,1+t}(1)\sum_{i=\omega-t}^{\omega-1} C(i, 1)\pi_i\right) + \sum_{t=1}^{t_{\max}}\left(p_t P^t_{1,1+t}(1)\right)\Sigma. \quad (35)
$$
Hence, the closed-form expression of $\Sigma$ is
$$
\Sigma = \frac{\sum_{t=1}^{t_{\max}}\left(p_t P^t_{1,1+t}(1)\left(\sum_{i=\omega-t}^{\omega-1} C(i, 1)\pi_i\right) + \Delta'_t\Pi_t\right)}{1 - \sum_{t=1}^{t_{\max}} p_t P^t_{1,1+t}(1)}.
$$
In the following, we calculate $\Pi_t$. Combining the definition of $\Pi_t$ with (33), we have
$$
\Pi_t \triangleq \sum_{i=\omega}^{\infty}\pi_{i,t} = \sum_{i=\omega}^{\infty} p_t P^t_{i-t,i}(1)\pi_{i-t} = \sum_{i=\omega-t}^{\infty} p_t P^t_{i,i+t}(1)\pi_i.
$$
Since $P^t_{\Delta,\Delta+t}(1)$ is independent of $\Delta$ when $\Delta > 0$, we have
$$
\Pi_t = p_t P^t_{1,1+t}(1)\left(\sum_{i=\omega-t}^{\omega-1}\pi_i + \Pi\right).
$$
Combining the pieces, we recover the results for Assumption 1, as presented in the first part of the theorem.

In the sequel, we focus on Assumption 2, following steps similar to those detailed above. We recall from Lemma 2 that $P_{\Delta,\Delta'}(1) = p_{t'}P^{t'}_{\Delta,\Delta'}(1) + p_{t^+}P^{t+}_{\Delta,\Delta'}(1)$, where $t' = \Delta' - \Delta$, when $\Delta' \ge \omega$. Then,
$$
\pi_\Delta = \sum_{i=1}^{t_{\max}}\left(p_{t_{\max}+1-i}\,P^{t_{\max}+1-i}_{\Delta-t_{\max}+i-1,\Delta}(1) + p_{t^+}P^{t+}_{\Delta-t_{\max}+i-1,\Delta}(1)\right)\pi_{\Delta-t_{\max}-1+i}, \quad \Delta \ge \omega.
$$
Renaming the variables yields
$$
\pi_\Delta = \sum_{t=1}^{t_{\max}}\left(p_t P^t_{\Delta-t,\Delta}(1) + p_{t^+}P^{t+}_{\Delta-t,\Delta}(1)\right)\pi_{\Delta-t} = \sum_{t=1}^{t_{\max}} \Upsilon(\Delta, t)\pi_{\Delta-t}, \quad \Delta \ge \omega,
$$
where $\Upsilon(\Delta, t) \triangleq p_t P^t_{\Delta-t,\Delta}(1) + p_{t^+}P^{t+}_{\Delta-t,\Delta}(1)$. We notice that $\Upsilon(\Delta, t)$ is independent of $\Delta$ when $\Delta \ge \omega$. To proceed, we define, for each $1 \le t \le t_{\max}$,
$$
\pi_{\Delta,t} \triangleq \Upsilon(\Delta, t)\pi_{\Delta-t}, \quad \Delta \ge \omega.
$$
Note that $\sum_{t=1}^{t_{\max}}\pi_{\Delta,t} = \pi_\Delta$.
Then, for a given $1 \le t \le t_{\max}$, we have
$$
\sum_{i=\omega}^{\infty} C(i-t, 1)\pi_{i,t} = \sum_{i=\omega}^{\infty} C(i-t, 1)\Upsilon(i, t)\pi_{i-t}. \quad (36)
$$
We define $\Delta'_t \triangleq C(\Delta, 1) - C(\Delta-t, 1)$, where $\Delta > t$. Then, according to (12), we have
$$
\Delta'_t = \sum_{i=1}^{t_{\max}} p_i\left(C_i(\Delta, 1) - C_i(\Delta-t, 1)\right) + p_{t^+}\left(C_{t_{\max}}(\Delta, 1) - C_{t_{\max}}(\Delta-t, 1)\right).
$$
By Lemma 3, we have
$$
C_i(\Delta-t, 1) = \Delta - t + \sum_{h=1}^{i-1}\left(\sum_{k=1}^{h-1} k(1-p^{(h-k)})p(1-p)^{k-1} + (\Delta-t+h)(1-p)^h\right),
$$
$$
C_i(\Delta, 1) = \Delta + \sum_{h=1}^{i-1}\left(\sum_{k=1}^{h-1} k(1-p^{(h-k)})p(1-p)^{k-1} + (\Delta+h)(1-p)^h\right).
$$
Subtracting the two equations yields
$$
C_i(\Delta, 1) - C_i(\Delta-t, 1) = t + \sum_{h=1}^{i-1} t(1-p)^h = \frac{t - t(1-p)^i}{p}.
$$
Then, we have
$$
\Delta'_t = \sum_{i=1}^{t_{\max}} p_i\,\frac{t - t(1-p)^i}{p} + p_{t^+}\,\frac{t - t(1-p)^{t_{\max}}}{p}, \quad 1 \le t \le t_{\max}.
$$
We notice that $\Delta'_t = C(\Delta, 1) - C(\Delta-t, 1)$ is independent of $\Delta$ when $\Delta > t$.
Hence, equation (36) can be written as
\[
\sum_{i=\omega}^{\infty}\left( C(i,1) - \Delta'_t \right)\pi_{i,t} = \sum_{i=\omega-t}^{\infty} C(i,1)\,\Upsilon(i+t,t)\,\pi_i.
\]
Then, we define $\Pi_t \triangleq \sum_{i=\omega}^{\infty}\pi_{i,t}$ and $\Sigma_t \triangleq \sum_{i=\omega}^{\infty} C(i,1)\,\pi_{i,t}$. We recall that $\Upsilon(\Delta,t)$ is independent of $\Delta$ when $\Delta \geq \omega$. Hence, plugging in the definitions yields
\[
\Sigma_t - \Delta'_t\Pi_t = \sum_{i=\omega-t}^{\omega-1}\Upsilon(i+t,t)\,C(i,1)\,\pi_i + \Upsilon(\omega+t,t)\,\Sigma.
\]
Summing the above equation over $t$ from $1$ to $t_{max}$ yields
\[
\sum_{t=1}^{t_{max}}\left( \Sigma_t - \Delta'_t\Pi_t \right) = \sum_{t=1}^{t_{max}}\left( \sum_{i=\omega-t}^{\omega-1}\Upsilon(i+t,t)\,C(i,1)\,\pi_i + \Upsilon(\omega+t,t)\,\Sigma \right).
\]
Rearranging the above equation yields
\[
\Sigma - \sum_{t=1}^{t_{max}}\Delta'_t\Pi_t = \sum_{t=1}^{t_{max}}\left( \sum_{i=\omega-t}^{\omega-1}\Upsilon(i+t,t)\,C(i,1)\,\pi_i \right) + \sum_{t=1}^{t_{max}}\Upsilon(\omega+t,t)\,\Sigma.
\]
Then, the closed-form expression of $\Sigma$ is
\[
\Sigma = \frac{\displaystyle\sum_{t=1}^{t_{max}}\left( \left( \sum_{i=\omega-t}^{\omega-1}\Upsilon(i+t,t)\,C(i,1)\,\pi_i \right) + \Delta'_t\Pi_t \right)}{1 - \displaystyle\sum_{t=1}^{t_{max}}\Upsilon(\omega+t,t)}.
\]
In the following, we calculate $\Pi_t$.
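The closed form for $\Sigma$ can be verified numerically on a small synthetic instance. In the sketch below, the constants $\upsilon_t$, the boundary probabilities, and the affine cost $C(i,1)=2i+1$ are all hypothetical stand-ins, chosen only so that $\Upsilon(\Delta,t)$ is constant for $\Delta \geq \omega$ and $\Delta'_t$ is independent of $\Delta$; the infinite sums are truncated at a large $N$.

```python
# Hypothetical instance: t_max = 2, omega = 3, Upsilon(Delta, t) = ups[t] for
# Delta >= omega, and C(i,1) = 2i + 1 so that Delta'_t = C(i,1) - C(i-t,1) = 2t.
ups = {1: 0.3, 2: 0.4}
omega, N = 3, 600                       # N truncates the infinite sums
C = lambda i: 2 * i + 1
dprime = {t: 2 * t for t in ups}
pi = {0: 0.2, 1: 0.15, 2: 0.1}          # arbitrary boundary probabilities
for d in range(omega, N):
    pi[d] = sum(ups[t] * pi[d - t] for t in ups)   # pi_D = sum_t Upsilon * pi_{D-t}
Pi = sum(pi[i] for i in range(omega, N))           # Pi = sum_{i >= omega} pi_i
# Pi_t = sum_{i=omega-t}^{omega-1} Upsilon * pi_i + Upsilon * Pi
Pi_t = {t: ups[t] * (sum(pi[i] for i in range(omega - t, omega)) + Pi) for t in ups}
Sigma_direct = sum(C(i) * pi[i] for i in range(omega, N))
numer = sum(sum(ups[t] * C(i) * pi[i] for i in range(omega - t, omega))
            + dprime[t] * Pi_t[t] for t in ups)
Sigma_closed = numer / (1 - sum(ups.values()))
assert abs(Sigma_direct - Sigma_closed) < 1e-9     # closed form matches direct sum
```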
To this end, we have
\[
\Pi_t \triangleq \sum_{i=\omega}^{\infty}\pi_{i,t} = \sum_{i=\omega}^{\infty}\Upsilon(i,t)\,\pi_{i-t} = \sum_{i=\omega-t}^{\infty}\Upsilon(i+t,t)\,\pi_i.
\]
Since $\Upsilon(\Delta,t)$ is independent of $\Delta$ if $\Delta \geq \omega$, we have
\[
\Pi_t = \sum_{i=\omega-t}^{\omega-1}\Upsilon(i+t,t)\,\pi_i + \Upsilon(\omega+t,t)\,\Pi, \quad 1 \leq t \leq t_{max}.
\]
Combining together, we recover the results for the system under Assumption 2 as presented in the second half of the theorem.

APPENDIX I
PROOF OF LEMMA 5

Leveraging Lemma 4, the result can be proved by mathematical induction. To start with, we initialize $V_{\gamma,0}(s) = 0$ for all $s$. Hence, the base case (i.e., $\nu = 0$) is true. Then, we assume the monotonicity holds at iteration $\nu$, and check whether the monotonicity still holds at iteration $\nu+1$.
We recall that the estimated value function $V_{\gamma,\nu+1}(s)$ is updated using (14). Hence, the structural properties are embedded in the state transition probability $P_{s,s'}(a)$. Using the state transition probabilities in Appendix A, equation (14) for a state with $\Delta > 0$ can be written as
\[
V_{\gamma,\nu+1}(\Delta,t,i) = \min_{a\in\{0,1\}}\left\{ \Delta + \gamma\sum_{\Delta',t',i'} \Pr[(\Delta',t',i') \mid (\Delta,t,i),a]\,V_{\gamma,\nu}(\Delta',t',i') \right\}
\]
\[
= \min_{a\in\{0,1\}}\left\{ \Delta + \gamma\sum_{t',i'}\Big( \Pr[(\Delta+1,t',i') \mid (\Delta,t,i),a]\,V_{\gamma,\nu}(\Delta+1,t',i') + \Pr[(0,t',i') \mid (\Delta,t,i),a]\,V_{\gamma,\nu}(0,t',i') \Big) \right\}.
\]
Moreover, for any $\Delta_1 > 0$ and $\Delta_2 > 0$, we have
\[
\Pr[(\Delta_1+1,t',i') \mid (\Delta_1,t,i),a] = \Pr[(\Delta_2+1,t',i') \mid (\Delta_2,t,i),a],
\]
\[
\Pr[(0,t',i') \mid (\Delta_1,t,i),a] = \Pr[(0,t',i') \mid (\Delta_2,t,i),a].
\]
Let $V^a_{\gamma,\nu+1}(\Delta,t,i)$ be the resulting $V_{\gamma,\nu+1}(\Delta,t,i)$ when action $a$ is chosen. Then, we have
\[
V^a_{\gamma,\nu+1}(\Delta+1,t,i) - V^a_{\gamma,\nu+1}(\Delta,t,i) = 1 + \gamma\sum_{t',i'} \Pr[(\Delta+1,t',i') \mid (\Delta,t,i),a]\,\big( V_{\gamma,\nu}(\Delta+2,t',i') - V_{\gamma,\nu}(\Delta+1,t',i') \big).
\]
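The preservation of monotonicity by the value-iteration update above can be illustrated on a toy age chain in which $\Delta$ either resets to $0$ or grows by one, with reset probabilities that do not depend on $\Delta$; the discount factor, truncation level, and reset probabilities below are all hypothetical.

```python
import numpy as np

# Toy instance of the Lemma 5 induction: states Delta = 0..D, instant cost Delta,
# and two actions whose reset probabilities are independent of Delta.
gamma, D = 0.9, 40
reset = {0: 0.2, 1: 0.6}        # P(Delta -> 0 | a); otherwise Delta -> min(Delta+1, D)
V = np.zeros(D + 1)
for _ in range(300):            # value iteration
    nxt = np.minimum(np.arange(D + 1) + 1, D)
    Q = np.stack([np.arange(D + 1) + gamma * (r * V[0] + (1 - r) * V[nxt])
                  for r in reset.values()])
    V = Q.min(axis=0)
    # monotonicity in Delta is preserved at every iteration, as the induction claims
    assert np.all(np.diff(V) >= -1e-12)
```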
Combining with the assumption for iteration $\nu$, we can conclude that $V^a_{\gamma,\nu+1}(\Delta+1,t,i) \geq V^a_{\gamma,\nu+1}(\Delta,t,i)$ when $\Delta > 0$. Then, by mathematical induction, we can conclude that Lemma 5 is true.

APPENDIX J
PROOF OF THEOREM 4

We first define $h_\gamma(s) \triangleq V_\gamma(s) - V_\gamma(s_{ref})$ as the relative value function and choose the reference state $s_{ref} = (0,0,-1)$. For simplicity, we abbreviate the reference state $s_{ref}$ as $0$ for the remainder of this proof. Then, we show that $M$ verifies the two conditions given in [21]. As a result, the existence of the optimal policy is guaranteed.

1) There exists a non-negative $N$ such that $-N \leq h_\gamma(s)$ for all $s$ and $\gamma$: Leveraging Lemma 5, we can easily conclude that $h_\gamma(s)$ is also increasing in $\Delta$ when $\Delta > 0$. In the following, we consider the policy $\phi$ being the threshold policy with $\tau = 0$ as defined in Section IV.
Then, we know that policy $\phi$ induces an irreducible ergodic Markov chain and the expected cost is finite. Let $c_{s,s'}(\phi)$ be the expected cost of a first passage from $s \in S$ to $s' \in S$ when policy $\phi$ is adopted. Then, by [21, Proposition 4], we know that $c_{s,0}(\phi)$ is finite. Meanwhile, $h_\gamma(s) \leq c_{s,0}(\phi)$, as is given in the proof of [21, Proposition 5]. Hence, we have $V_\gamma(0) - V_\gamma(s) \leq c_{0,s}(\phi)$ and $V_\gamma(0) - V_\gamma(s) = -h_\gamma(s)$. Hence, we have $h_\gamma(s) \geq -c_{0,s}(\phi)$. Combining with the monotonicity proved in Lemma 5, we can choose $N = \max_{s\in G}\{c_{0,s}(\phi)\}$, where $G = \{s = (\Delta,t,i) : \Delta \in \{0,1\}\}$. This condition indicates that [21, Assumption 2] holds.

2) $M$ has a stationary policy $\phi$ inducing an irreducible, ergodic Markov chain.
Moreover, the resulting expected cost is finite: We consider the policy $\phi$ being the threshold policy with $\tau = 0$. Then, according to Section IV, it induces an irreducible, ergodic Markov chain and the resulting expected cost is finite. Then, according to [21, Proposition 5], we can conclude that [21, Assumptions 1 and 3] hold. As the two conditions are verified, the existence of the optimal policy is guaranteed by [21, Theorem]. Moreover, the minimum expected cost is independent of the initial state.

APPENDIX K
PROOF OF THEOREM 5

We inherit the definitions and notations introduced in Section V-A. We further define $v_{\gamma,n}(\cdot)$ as the minimum expected $\gamma$-discounted cost for operating the system from time $0$ to time $n-1$. It is known that $\lim_{n\to\infty} v_{\gamma,n}(s) = V_\gamma(s)$ for all $s \in S$.
We also define the expected cost under policy $\phi$ as
\[
J_\phi(s) = \limsup_{K\to\infty}\frac{1}{K}\,\mathbb{E}_\phi\left[ \sum_{k=0}^{K-1} C(s_t) \,\Big|\, s \right],
\]
and $J(s) \triangleq \inf_\phi J_\phi(s)$ is the best that can be achieved. $V^{(m)}_{\phi,\gamma}(s)$, $V^{(m)}_\gamma(s)$, $v^{(m)}_{\gamma,n}(s)$, $J^{(m)}_\phi(s)$, $J^{(m)}(s)$, and $h^{(m)}_\gamma(s)$ are defined analogously for $M^{(m)}$. With the above definitions in mind, we show that our system verifies the two assumptions given in [22].

Assumption 1: There exists a non-negative (finite) constant $L$, a non-negative (finite) function $M(\cdot)$ on $S$, and constants $m_0$ and $\gamma_0 \in [0,1)$, such that $-L \leq h^{(m)}_\gamma(s) \leq M(s)$ for $s \in S^{(m)}$, $m \geq m_0$, and $\gamma \in (\gamma_0,1)$: $L$ can be chosen in the same way as presented in the proof of Theorem 4. More precisely, $L = \max_{s\in G}\{h^{(m)}_\gamma(s)\}$, where $G = \{s = (\Delta,t,i) : \Delta \in \{0,1\}\}$. Let $c_{s,0}(\phi)$ be the expected cost of a first passage from $s \in S$ to the reference state $0$ when policy $\phi$ is adopted, and let $c^{(m)}_{s,0}(\phi)$ be defined analogously for $M^{(m)}$. In the following, we consider the policy $\phi$ being the threshold policy with $\tau = \infty$.
We recall from Section V that the policy $\phi$ induces an irreducible ergodic Markov chain, and the expected cost is finite. Hence, $h^{(m)}_\gamma(s) \leq c^{(m)}_{s,0}(\phi)$ by [21, Proposition 5], and $c_{s,0}(\phi)$ is finite by [21, Proposition 4]. We also know from the proof of [22, Corollary 4.3] that $c_{s,0}(\phi)$ satisfies the following equation:
\[
c_{s,0}(\phi) = C(s) + \sum_{s'\in S-\{0\}} P^\phi_{ss'}\,c_{s',0}(\phi), \tag{37}
\]
where $P^\phi_{ss'}$ is the state transition probability from state $s$ to $s'$ under policy $\phi$ for $M$; $P^{(m),\phi}_{ss'}$ is defined analogously for $M^{(m)}$. We can verify, in a similar way to the proof of Lemma 5, that $c_{s,0}(\phi)$ is increasing in $\Delta > 0$. The proof is omitted here for the sake of space.
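The claimed monotonicity of $c_{s,0}(\phi)$ in $\Delta$ can be illustrated by solving a first-passage system of the form (37) directly on a toy reset chain; the chain, its reset probability, and the per-step cost below are hypothetical.

```python
import numpy as np

# Toy check of equation (37): states 1..D with target state 0; from Delta the chain
# resets to 0 w.p. r, otherwise moves to min(Delta+1, D); per-step cost C(Delta) = Delta.
D, r = 25, 0.4
Q = np.zeros((D, D))                 # transitions restricted to the non-target states
for d in range(1, D + 1):
    Q[d - 1, min(d + 1, D) - 1] += 1 - r
cost = np.arange(1, D + 1, dtype=float)
c = np.linalg.solve(np.eye(D) - Q, cost)   # c = cost + Q c, i.e. the analogue of (37)
assert np.all(np.diff(c) > 0)              # first-passage cost increases with Delta
```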
Then,
\[
\sum_{y\in S^{(m)}_{-1}} P^{(m),\phi}_{sy}\,c_{y,0}(\phi) = \sum_{y\in S^{(m)}_{-1}} P^\phi_{sy}\,c_{y,0}(\phi) + \sum_{y\in S^{(m)}_{-1}}\left( \sum_{z\in S\setminus S^{(m)}} P^\phi_{sz}\,q_z(y) \right) c_{y,0}(\phi)
\]
\[
= \sum_{y\in S^{(m)}_{-1}} P^\phi_{sy}\,c_{y,0}(\phi) + \sum_{z\in S\setminus S^{(m)}} P^\phi_{sz}\left( \sum_{y\in S^{(m)}_{-1}} q_z(y)\,c_{y,0}(\phi) \right)
\]
\[
\leq \sum_{y\in S^{(m)}_{-1}} P^\phi_{sy}\,c_{y,0}(\phi) + \sum_{z\in S\setminus S^{(m)}} P^\phi_{sz}\,c_{z,0}(\phi) = \sum_{y\in S-\{0\}} P^\phi_{sy}\,c_{y,0}(\phi), \tag{38}
\]
where $S^{(m)}_{-1} = S^{(m)} - \{0\}$ and $q_{s'}(s) = \mathbb{1}\{t' = t;\, i' = i\}$, which is an indicator function with value $1$ when the transitions to state $s'$ are redirected to state $s$; otherwise, $q_{s'}(s) = 0$. Moreover, $\sum_{s\in S^{(m)}_{-1}} q_{s'}(s) = 1$. Applying (38) to (37) yields
\[
c_{s,0}(\phi) \geq C(s) + \sum_{y\in S^{(m)}-\{0\}} P^{(m),\phi}_{sy}\,c_{y,0}(\phi).
\]
Bearing in mind that $c^{(m)}_{s,0}(\phi)$ satisfies
\[
c^{(m)}_{s,0}(\phi) = C(s) + \sum_{y\in S^{(m)}-\{0\}} P^{(m),\phi}_{sy}\,c^{(m)}_{y,0}(\phi),
\]
we can conclude that $c^{(m)}_{s,0}(\phi) \leq c_{s,0}(\phi)$.
Then, we can choose $M(s) = c_{s,0}(\phi) < \infty$.

Assumption 2: $\limsup_{m\to\infty} J^{(m)} \triangleq J^* < \infty$ and $J^* \leq J(s)$ for all $s \in S$: We first show that [22, Proposition 5.1] is true. We redistribute the transitions in a way such that, for each $s' \in S - S^{(m)}$,
\[
\sum_{y\in S^{(m)}} q_{s'}(y)\,v_{\gamma,n}(y) = v_{\gamma,n}(s),
\]
where $s = (m,t',i')$. Hence, we only need to verify that, for each $s' \in S - S^{(m)}$ and $s = (m,t',i')$,
\[
v_{\gamma,n}(s) \leq v_{\gamma,n}(s'). \tag{39}
\]
To this end, we notice that $v_{\gamma,n}(s)$ satisfies the following inductive form [22]:
\[
v_{\gamma,n+1}(s) = \min_{a}\left\{ C(s) + \gamma\sum_{s'\in S} P_{s,s'}(a)\,v_{\gamma,n}(s') \right\}.
\]
By following similar steps to those in the proof of Lemma 5, we can prove the monotonicity of $v_{\gamma,n}(s)$ for $\Delta > 0$ and $n \geq 0$. The proof is omitted for the sake of space.
Hence, (39) is true since $\Delta' > m > 0$. Apparently, $J(s)$ is finite for $s \in S$. Then, according to [22, Corollary 5.2], Assumption 2 is true. Consequently, by [22, Theorem 2.2], we know the following.
1) There exists an average cost optimal stationary policy for $M^{(m)}$.
2) Any limit point of the sequence of optimal policies for $M^{(m)}$ is optimal for $M$.

APPENDIX L
PROOF OF THEOREM 6

The proof is based on the results presented in [23, pp. 42-43]. To this end, we consider a generic MDP $M = (S, A, P, C)$.
Let $C(s,A)$ be the instant cost for being at state $s \in S$ under policy $A$. We also define $P^A_{s,s'}$ as the probability that applying policy $A$ at state $s$ will lead to state $s'$. Finally, $V^A(s)$ is defined as the value function resulting from the operation of policy $A$. Since $B$ is chosen over $A$, we have
\[
C(s,B) + \sum_{s'\in S} P^B_{s,s'}V^A(s') \leq C(s,A) + \sum_{s'\in S} P^A_{s,s'}V^A(s'), \quad s \in S.
\]
Then, we define
\[
\gamma_s \triangleq C(s,B) + \sum_{s'\in S} P^B_{s,s'}V^A(s') - C(s,A) - \sum_{s'\in S} P^A_{s,s'}V^A(s') \leq 0, \quad s \in S.
\]
Meanwhile, both policies satisfy their own Bellman equation:
\[
V^A(s) + \theta^A = C(s,A) + \sum_{s'\in S} P^A_{s,s'}V^A(s'), \quad s \in S,
\]
\[
V^B(s) + \theta^B = C(s,B) + \sum_{s'\in S} P^B_{s,s'}V^B(s'), \quad s \in S,
\]
where $\theta^A$ and $\theta^B$ are the expected costs resulting from the operation of policy $A$ and policy $B$, respectively.
Then, subtracting the two expressions and bringing in the expression for $\gamma_s$ yields
\[
V^B(s) - V^A(s) + \theta^B - \theta^A = \gamma_s + \sum_{s'\in S} P^B_{s,s'}\big( V^B(s') - V^A(s') \big), \quad s \in S.
\]
Let $V^\Delta(s) \triangleq V^B(s) - V^A(s)$ and $\theta^\Delta \triangleq \theta^B - \theta^A$. Then, we have
\[
V^\Delta(s) + \theta^\Delta = \gamma_s + \sum_{s'\in S} P^B_{s,s'}V^\Delta(s'), \quad s \in S.
\]
We know that $\theta^\Delta = \sum_{s\in S}\pi^B_s\gamma_s$, where $\pi^B_s$ is the steady-state probability of state $s$ under policy $B$. Since $\pi^B_s$ is non-negative and $\gamma_s$ is non-positive, we can conclude that $\theta^\Delta \leq 0$. Consequently, $\theta^B \leq \theta^A$. Next, we prove by contradiction that the resulting policy is optimal when the policy improvement step converges. We assume there exist two policies $A$ and $B$ such that $\theta^B < \theta^A$.
Meanwhile, the policy improvement step has converged to policy $A$. Since the policy has converged, we know $\gamma_s \geq 0$ for all $s \in S$. Hence, $\theta^\Delta \geq 0$. Then, according to the definition of $\theta^\Delta$, we have $\theta^B \geq \theta^A$, which contradicts the assumption. Hence, superior policies cannot go undiscovered. Then, we can conclude that the resulting policy is optimal when the policy improvement step converges.

APPENDIX M
PROOF OF THEOREM 7

The general procedure for the optimality proof can be summarized as follows.
1) Policy Evaluation: We calculate the value function resulting from the adoption of the threshold policy with $\tau = 1$.
2) Policy Improvement: We apply the value functions obtained in the previous step to the Bellman equation and verify that the resulting policy remains the same.
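The policy-improvement guarantee proved in Appendix L (one evaluate/improve step never increases the average cost $\theta$) can be sketched numerically on a small random ergodic MDP; the state/action counts, random costs, and transition matrices below are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
nS, nA = 4, 2
P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # P[s, a] strictly positive -> ergodic
Cst = rng.uniform(0, 1, size=(nS, nA))          # instant costs C(s, a)

def evaluate(pol):
    """Solve V(s) + theta = C(s, pol(s)) + sum_{s'} P V(s'), pinning V(0) = 0."""
    Pp = P[np.arange(nS), pol]
    Cp = Cst[np.arange(nS), pol]
    A = np.zeros((nS + 1, nS + 1))
    A[:nS, :nS] = np.eye(nS) - Pp
    A[:nS, nS] = 1.0            # theta column
    A[nS, 0] = 1.0              # pin V(0) = 0
    x = np.linalg.solve(A, np.concatenate([Cp, [0.0]]))
    return x[:nS], x[nS]

polA = np.zeros(nS, dtype=int)                  # arbitrary initial policy A
V, thA = evaluate(polA)
polB = np.argmin(Cst + P @ V, axis=1)           # one policy-improvement step -> B
_, thB = evaluate(polB)
assert thB <= thA + 1e-10                       # theta_B <= theta_A, as proved above
```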
In the following, we elaborate on these two steps.

a) Policy Evaluation: We first calculate the value function and the expected AoII under the threshold policy with $\tau = 1$. For simplicity of notation, we denote the policy as $\phi$. Let $V^\phi(\Delta)$ be the value function of state $(\Delta,0,-1)$ resulting from the operation of policy $\phi$. Then, combining (16) with the expression of $P_{\Delta,\Delta'}(a)$ in Lemma 1 and Lemma 2, $V^\phi(\Delta)$ satisfies the following system of linear equations:
\[
V^\phi(0) = -\theta_\phi + pV^\phi(1) + (1-p)V^\phi(0); \tag{40}
\]
for Assumption 1,
\[
V^\phi(\Delta) = C(\Delta,1) - \mathbb{E}[T]\,\theta_\phi + \sum_{t=1}^{t_{max}} p_t\left( \sum_{k=0}^{t-1} P^t_{\Delta,k}(1)V^\phi(k) + P^t_{\Delta,\Delta+t}(1)V^\phi(\Delta+t) \right), \quad \Delta \geq 1;
\]
for Assumption 2,
\[
V^\phi(\Delta) = C(\Delta,1) - \mathbb{E}[T]\,\theta_\phi + \sum_{t=1}^{t_{max}} p_t\left( \sum_{k=0}^{t-1} P^t_{\Delta,k}(1)V^\phi(k) + P^t_{\Delta,\Delta+t}(1)V^\phi(\Delta+t) \right) + p_{t^+}\left( \sum_{k=0}^{t_{max}-1} P^{t^+}_{\Delta,k}(1)V^\phi(k) + P^{t^+}_{\Delta,\Delta+t_{max}}(1)V^\phi(\Delta+t_{max}) \right), \quad \Delta \geq 1,
\]
where $\theta_\phi$ is the expected AoII resulting from the adoption of $\phi$. It is difficult to solve the above system of linear equations directly for the exact solution.
However, as we will see later, some structural properties of the value function are sufficient. These properties are summarized in the following lemma.

Lemma 6. $V^\phi(\Delta)$ satisfies the following equations:
\[
V^\phi(1) - V^\phi(0) = \frac{\theta_\phi}{p}, \qquad V^\phi(\Delta+1) - V^\phi(\Delta) = \sigma, \quad \Delta \geq 1,
\]
where, for Assumption 1,
\[
\sigma = \frac{\displaystyle\sum_{t=1}^{t_{max}} p_t\,\frac{1-(1-p)^t}{p}}{1 - \displaystyle\sum_{t=1}^{t_{max}} p\,p_t(1-p)^{t-1}},
\]
and, for Assumption 2,
\[
\sigma = \frac{\displaystyle\sum_{t=1}^{t_{max}} p_t\,\frac{1-(1-p)^t}{p} + p_{t^+}\,\frac{1-(1-p)^{t_{max}}}{p}}{1 - \left( \displaystyle\sum_{t=1}^{t_{max}} p\,p_t(1-p)^{t-1} + p_{t^+}(1-p)^{t_{max}} \right)}.
\]
Proof. First of all, from (40), we have
\[
\theta_\phi = p\big( V^\phi(1) - V^\phi(0) \big) \;\Rightarrow\; V^\phi(1) - V^\phi(0) = \frac{\theta_\phi}{p}.
\]
Then, we show that $V^\phi(\Delta+1) - V^\phi(\Delta)$ is constant for $\Delta \geq 1$. We start with Assumption 1.
According to Theorem 4, the optimal policy exists. Hence, the iterative policy evaluation algorithm [19, p. 74] can be used to solve the system of linear equations for $V^\phi(s)$. Let $V^\phi_\nu(s)$ be the estimated value function at iteration $\nu$ of the iterative policy evaluation algorithm. Without loss of generality, we assume $V^\phi_0(\Delta) = 0$ for all $\Delta$. Then, the value function is updated in the following way:
\[
V^\phi_{\nu+1}(\Delta) = C(\Delta,1) - \mathbb{E}[T]\,\theta_\phi + \sum_{t=1}^{t_{max}} p_t\left( \sum_{k=0}^{t-1} P^t_{\Delta,k}(1)V^\phi_\nu(k) + P^t_{\Delta,\Delta+t}(1)V^\phi_\nu(\Delta+t) \right), \quad \Delta \geq 1.
\]
Then, we have $\lim_{\nu\to\infty} V^\phi_\nu(\Delta) = V^\phi(\Delta)$. Hence, we can prove the desired results using mathematical induction.
The base case $\nu = 0$ is true by initialization. Then, we assume $V^\phi_\nu(\Delta+1) - V^\phi_\nu(\Delta) = \sigma_\nu$, where $\sigma_\nu$ is independent of $\Delta \geq 1$, and examine whether $V^\phi_{\nu+1}(\Delta+1) - V^\phi_{\nu+1}(\Delta)$ is independent of $\Delta \geq 1$. Leveraging the properties in Lemma 1, we have
\[
V^\phi_{\nu+1}(\Delta+1) - V^\phi_{\nu+1}(\Delta) = C(\Delta+1,1) - \mathbb{E}[T]\,\theta_\phi + \sum_{t=1}^{t_{max}} p_t\left( \sum_{k=0}^{t-1} P^t_{\Delta+1,k}(1)V^\phi_\nu(k) + P^t_{\Delta+1,\Delta+1+t}(1)V^\phi_\nu(\Delta+t+1) \right) - C(\Delta,1) + \mathbb{E}[T]\,\theta_\phi - \sum_{t=1}^{t_{max}} p_t\left( \sum_{k=0}^{t-1} P^t_{\Delta,k}(1)V^\phi_\nu(k) + P^t_{\Delta,\Delta+t}(1)V^\phi_\nu(\Delta+t) \right)
\]
\[
= C(\Delta+1,1) - C(\Delta,1) + \sum_{t=1}^{t_{max}} p_t P^t_{\Delta,\Delta+t}(1)\,\sigma_\nu.
\]
According to Lemma 3, we have
\[
C(\Delta+1,1) - C(\Delta,1) = \sum_{t=1}^{t_{max}} p_t\big( C_t(\Delta+1,1) - C_t(\Delta,1) \big).
\]
In the case of $\Delta \geq 1$, we have
\[
C_t(\Delta+1,1) - C_t(\Delta,1) = 1 + \sum_{k=1}^{t-1}\big( (k+\Delta+1)(1-p)^k - (k+\Delta)(1-p)^k \big) = \frac{1-(1-p)^t}{p}, \quad 1 \leq t \leq t_{max}.
\]
Combining together, we obtain
\[
C(\Delta+1,1) - C(\Delta,1) = \sum_{t=1}^{t_{max}} p_t\,\frac{1-(1-p)^t}{p}.
\]
(January 18, 2023, DRAFT)
Hence, we can conclude that V^φ_{ν+1}(∆+1) − V^φ_{ν+1}(∆) is independent of ∆ when ∆ ≥ 1. Then, by mathematical induction, V^φ(∆) − V^φ(∆+1) is independent of ∆ when ∆ ≥ 1. We denote the constant by σ. Then, σ satisfies the following equation:
σ = V^φ(∆) − V^φ(∆+1) = Σ_{t=1}^{t_max} [ (p_t − p_t(1−p)^t)/p + p_t p(1−p)^{t−1} σ ].
After some algebraic manipulations, we obtain
σ = [ Σ_{t=1}^{t_max} p_t (1 − (1−p)^t)/p ] / [ 1 − Σ_{t=1}^{t_max} p p_t (1−p)^{t−1} ].
Then, we show that V^φ(∆+1) − V^φ(∆) is independent of ∆ ≥ 1 under Assumption 2. Following the same steps, we can prove the desired results by mathematical induction. The base case ν = 0 is true by initialization.
Then, we assume V^φ_ν(∆+1) − V^φ_ν(∆) = σ_ν, where σ_ν is independent of ∆ ≥ 1. The estimated value function is updated in the following way:
V^φ_{ν+1}(∆) = C(∆, 1) − ETθ^φ + Σ_{t=1}^{t_max} p_t [ Σ_{k=0}^{t−1} P^t_{∆,k}(1) V^φ_ν(k) + P^t_{∆,∆+t}(1) V^φ_ν(∆+t) ] + p_{t+} [ Σ_{k=0}^{t_max−1} P^{t+}_{∆,k}(1) V^φ_ν(k) + P^{t+}_{∆,∆+t_max}(1) V^φ_ν(∆+t_max) ],  ∆ ≥ 1.
Then, we examine whether V^φ_{ν+1}(∆+1) − V^φ_{ν+1}(∆) is independent of ∆ ≥ 1. Leveraging the properties in Lemma 2, we have
V^φ_{ν+1}(∆+1) − V^φ_{ν+1}(∆) = C(∆+1, 1) − C(∆, 1) + Σ_{t=1}^{t_max} p_t P^t_{∆,∆+t}(1) σ^φ_ν + p_{t+} P^{t+}_{∆,∆+t_max}(1) σ^φ_ν.
Moreover, according to the expressions in Lemma 2, we obtain
Σ_{t=1}^{t_max} p_t P^t_{∆,∆+t}(1) + p_{t+} P^{t+}_{∆,∆+t_max}(1) = Σ_{t=1}^{t_max} p_t p(1−p)^{t−1} + p_{t+}(1−p)^{t_max},
which is independent of ∆ ≥ 1. Leveraging the expression of C(∆, 1) in Lemma 3, we obtain
C(∆, 1) − C(∆−1, 1) = Σ_{t=1}^{t_max} p_t (1 − (1−p)^t)/p + p_{t+} (1 − (1−p)^{t_max})/p.
We notice that C(∆, 1) − C(∆−1, 1) is also independent of ∆ ≥ 1. Consequently, we can conclude that V^φ_{ν+1}(∆+1) − V^φ_{ν+1}(∆) is independent of ∆ ≥ 1. Then, by mathematical induction, V^φ(∆+1) − V^φ(∆) is independent of ∆ ≥ 1. We denote the constant by σ, which satisfies the following equation:
σ = Σ_{t=1}^{t_max} p_t (1 − (1−p)^t)/p + p_{t+} (1 − (1−p)^{t_max})/p + [ Σ_{t=1}^{t_max} p_t p(1−p)^{t−1} + p_{t+}(1−p)^{t_max} ] σ.
After some algebraic manipulations, we obtain
σ = [ Σ_{t=1}^{t_max} p_t (1 − (1−p)^t)/p + p_{t+} (1 − (1−p)^{t_max})/p ] / [ 1 − ( Σ_{t=1}^{t_max} p_t p(1−p)^{t−1} + p_{t+}(1−p)^{t_max} ) ].
With Lemma 6 in mind, we can continue to the next step.
b) Policy Improvement: Here, we show that the optimal policy resulting from V^φ(∆) and θ^φ is the threshold policy with τ = 1.
To this end, we define δV^φ(∆) ≜ V^{φ,0}(∆) − V^{φ,1}(∆), where V^{φ,a}(∆) is the value function resulting from taking action a at state (∆, 0, −1). Then, the optimal action at state (∆, 0, −1) is a = 1 if δV^φ(∆) ≥ 0. Otherwise, a = 0 is optimal. Then, we investigate the expression of δV^φ(∆). We first notice that, for ∆ ≥ 1, V^φ(∆) = V^{φ,1}(∆). Then, using Lemma 6, we obtain
δV^φ(∆) = ∆ − θ^φ + (1−p)V^φ(∆+1) + pV(0) − V^{φ,1}(∆)
= ∆ − θ^φ + (1−p)V^φ(∆+1) + pV(0) − V^φ(∆)
= ∆ − θ^φ + (1−p)(V^φ(∆+1) − V^φ(∆)) + p(V^φ(0) − V^φ(∆))
= ∆ − 2θ^φ + [(1−p) − p(∆−1)]σ,
where ∆ ≥ 1. We notice that δV^φ(∆+1) − δV^φ(∆) = 1 − pσ.
For Assumption 1, plugging in the expression of σ yields
1 − pσ = 1 − [ Σ_{t=1}^{t_max} (p_t − p_t(1−p)^t) ] / [ 1 − Σ_{t=1}^{t_max} p_t p(1−p)^{t−1} ]
= [ 1 − Σ_{t=1}^{t_max} p_t p(1−p)^{t−1} − Σ_{t=1}^{t_max} (p_t − p_t(1−p)^t) ] / [ 1 − Σ_{t=1}^{t_max} p_t p(1−p)^{t−1} ]
= [ (1−2p) Σ_{t=1}^{t_max} p_t (1−p)^{t−1} ] / [ 1 − Σ_{t=1}^{t_max} p_t p(1−p)^{t−1} ] ≥ 0.
For Assumption 2, we have
1 − pσ = 1 − [ Σ_{t=1}^{t_max} p_t (1 − (1−p)^t) + p_{t+}(1 − (1−p)^{t_max}) ] / [ 1 − ( Σ_{t=1}^{t_max} p_t p(1−p)^{t−1} + p_{t+}(1−p)^{t_max} ) ]
≥ 1 − [ Σ_{t=1}^{t_max} p_t (1 − (1−p)^t) + p_{t+}(1 − (1−p)^{t_max}) ] / [ 1 − ( Σ_{t=1}^{t_max} p_t (1−p)^t + p_{t+}(1−p)^{t_max} ) ] = 0.
Consequently, when ∆ ≥ 1, δV^φ(∆+1) ≥ δV^φ(∆) under both assumptions. We notice that δV^φ(1) = 1 − 2θ^φ + (1−p)σ. According to Condition 1, θ^φ = ∆̄_1 ≤ (1 + (1−p)σ)/2. Hence, we have δV^φ(1) = 1 − 2∆̄_1 + (1−p)σ ≥ 0. Combining together, we have δV^φ(∆) ≥ δV^φ(1) ≥ 0 for ∆ ≥ 1. Hence, the optimal action at state (∆, 0, −1), where ∆ ≥ 1, is to initiate the transmission (i.e., a = 1).
Now, the only missing part is the action at state (0, 0, −1). To determine the action, we recall from Theorem 6 that the new policy will always be no worse than the old one. Meanwhile, by Condition 1, ∆̄_1 ≤ ∆̄_0. Hence, the optimal action at state (0, 0, −1) is to stay idle (i.e., a = 0). Combining with the optimal actions at other states, we can conclude that the policy improvement step yields the threshold policy with τ = 1. Consequently, the policy iteration algorithm converges.
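The iterative policy evaluation recalled at the start of this argument can be sketched numerically. The following is a minimal, self-contained illustration on a finite state space with a discount factor; the transition matrix `P`, cost vector `c`, and discount `gamma` are illustrative placeholders, not the paper's exact model (which is undiscounted and instead subtracts the average cost ETθ^φ at each update):

```python
import numpy as np

def policy_evaluation(P, c, gamma=0.9, tol=1e-10, max_iter=100000):
    """Iterate V_{nu+1} = c + gamma * P @ V_nu from V_0 = 0 until convergence.

    P : (n, n) row-stochastic transition matrix under the fixed policy.
    c : (n,) per-state stage cost.
    Returns an approximation of the fixed point V = c + gamma * P @ V.
    """
    V = np.zeros(len(c))  # base case V_0 = 0, as in the induction above
    for _ in range(max_iter):
        V_new = c + gamma * (P @ V)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new
    return V

# The fixed point solves the linear system (I - gamma * P) V = c.
P = np.array([[0.5, 0.5], [0.2, 0.8]])
c = np.array([1.0, 2.0])
V = policy_evaluation(P, c)
```

The iterates converge geometrically because the update is a contraction for gamma < 1; in the average-cost setting of the proof, that contraction argument is replaced by the induction on ν carried out above.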
Then, according to Theorem 6, the threshold policy with τ = 1 is optimal.
diff --git a/5tAyT4oBgHgl3EQf2fk_/content/tmp_files/2301.00751v1.pdf.txt b/5tAyT4oBgHgl3EQf2fk_/content/tmp_files/2301.00751v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..b33b6990c53d085fd39392c4050cf1dc819cdcaa
--- /dev/null
+++ b/5tAyT4oBgHgl3EQf2fk_/content/tmp_files/2301.00751v1.pdf.txt
@@ -0,0 +1,4198 @@
+arXiv:2301.00751v1 [math.AP] 2 Jan 2023
+FINITE ENERGY WELL-POSEDNESS FOR NONLINEAR SCHRÖDINGER EQUATIONS WITH NON-VANISHING CONDITIONS AT INFINITY
+PAOLO ANTONELLI, LARS ERIC HIENTZSCH, AND PIERANGELO MARCATI
+Abstract. The Cauchy problem for 2D and 3D nonlinear Schrödinger equations with non-vanishing conditions at infinity is investigated. Local well-posedness in the energy space for energy-subcritical nonlinearities merely satisfying Kato-type assumptions is proven. Our result provides the analogue of the well-established local H¹-theory for solutions vanishing at infinity; no further regularity assumptions on the nonlinearity are required. Global well-posedness is shown for defocusing nonlinearities provided that the nonlinear potential is non-negative. In addition, we also establish global well-posedness in 3D for a class of nonlinearities for which the Hamiltonian energy fails to be sign-definite, such as e.g. competing focusing-defocusing nonlinearities.
+1.
+Introduction
+This paper is devoted to the study of the Cauchy theory for a class of nonlinear Schrödinger equations posed on R^d with d = 2, 3, namely
+(1.1) i∂_tψ = −(1/2)∆ψ + f(|ψ|²)ψ,
+equipped with non-trivial boundary conditions at infinity, i.e.
+(1.2) |ψ(x)|² → ρ₀ as |x| → ∞,
+and where the nonlinearity satisfies f(ρ₀) = 0 together with the Assumption 1.1 of Kato type stated below. The Hamiltonian (coinciding with the total energy in many relevant physical contexts) associated to (1.1) reads
+(1.3) H(ψ) = ∫_{R^d} (1/2)|∇ψ|² + F(|ψ|²) dx, with F(ρ) = ∫_{ρ₀}^{ρ} f(r) dr.
+The finite energy assumption encodes the far-field behavior, the study of which is motivated by a variety of physical applications. The aim of this paper is to provide a well-posedness theory for (1.1) with energy-subcritical nonlinearities f, under Kato-type [38] regularity assumptions, in a suitable energy space incorporating the far-field condition (1.2). Regarding the 3D energy-critical problem, we show that global well-posedness is easily achieved by relying on the existing literature [43, 17, 59] combined with our analysis.
+Without loss of generality, we assume (1.2) to hold with ρ₀ = 1, as the general case reduces to this one by a suitable scaling of ψ.
+Date: January 3, 2023.
+2020 Mathematics Subject Classification. Primary: 35Q55; Secondary: 35B30, 37L50.
+Key words and phrases. nonlinear Schrödinger equation, Gross-Pitaevskii, well-posedness, non-vanishing conditions at infinity.
+The most prominent example for (1.1) with far-field (1.2) is the Gross-Pitaevskii (GP) equation, for which f(ρ) = ρ − 1. With this choice of f, the system (1.1) arises in the description of Bose-Einstein condensates (BEC) [30, 55, 27, 56], as a model for superfluidity in Helium II close to the λ-point [26, 55], and for quantum vortices [6, 55].
+Beyond that, system (1.1) with non-trivial far-field and general nonlinearities f is investigated in the theory of BEC, superconductivity and nonlinear optics. For instance, competing (focusing-defocusing) nonlinearities, see e.g. (1.16), as well as saturating or exponential nonlinearities emerge as models in nonlinear optics [5, 45, 48, 54]. Further physically relevant models are listed in Example 1.8 below.
+The mathematical analysis of (1.1) with far-field behavior (1.2) differs significantly from the usual H¹-theory for NLS equations with trivial far-field, due to the non-integrability of the finite-energy wave functions, which may exhibit non-trivial oscillations at spatial infinity, in particular for d = 2.
+In contrast to the defocusing nonlinear Schrödinger equation with vanishing conditions at infinity, for which scattering is known [25], system (1.1) with defocusing nonlinearity and equipped with (1.2) admits a large variety of special solutions. Concerning GP, the existence of sub-sonic traveling waves is known for d = 2 [9, 7] and d = 3 [9, 8, 14]. Non-existence in the super-sonic regime is proven in [28]. While traveling waves exist with arbitrarily small energy for d = 2, non-existence of traveling waves with small energy for d = 3 is due to [7], see also [19] for d ≥ 3.
+For general defocusing nonlinearities, including the nonlinearities considered in Assumption 1.2 below, the existence of sub-sonic traveling waves is established in [51, 16]. Non-existence in the super-sonic regime is shown in [50]. For d = 2, traveling waves exist for any, in particular arbitrarily small, energy, ruling out scattering, while for d = 3 there is an energy threshold below which no traveling waves exist. The stability of multi-dimensional traveling waves is addressed in [15, 49], that of stationary bubbles in [18].
+The GP equation admits vortex solutions of infinite energy, see [55, 10], and [60, 29] as well as references therein for their stability properties.
+Regarding large-time behavior, the existence of global dispersive solutions and small-data scattering for the 3D and 4D GP equation has been investigated in a series of papers [32, 33, 34, 31]. In [41, 42], the authors consider the final state problem for the 3D defocusing cubic-quintic equation, which is energy-critical. For general nonlinearities f, the respective problems remain open.
+To give a short overview of previous well-posedness results, we mention that local existence of solutions to the GP equation in Zhidkov spaces has been established in [61] for d = 1, see also [63], and in [20] for the multi-dimensional case. While the energy space for GP for d = 1 coincides with the set of functions in the Zhidkov space such that |ψ|² − 1 ∈ L²(R), this identification does not hold true in the multi-dimensional case, see [22] and Section 2 below. In [9], the authors show that the GP equation is well-posed in 1 + H¹(R^d) for d = 2, 3. Global well-posedness in 1 + H^s(R³) with s ∈ (5/6, 1) is proven in [53]. However, the space 1 + H¹(R^d) is strictly smaller than E(R^d): there exist traveling waves for the GP equation in the energy space that do not belong to 1 + L²(R^d), see [28]. Global well-posedness in the energy space for the multi-dimensional GP equation has been established in the seminal paper [22]. One of the major novelties of [22] consists in the precise characterization of the energy space as a complete metric space and of the action of the free propagator on the energy space. A more general class of defocusing and energy-subcritical C³-nonlinearities has been considered in [21], with a subsequent improvement to C²-nonlinearities in [52]. In [21, 52], the authors crucially rely on a smooth decomposition of wave functions in the energy space.
+The authors show global well-posedness in affine spaces determined by this decomposition, which requires the aforementioned regularity assumptions and precise growth conditions on f. The result in the affine spaces then implies well-posedness in the energy space.
+Our purpose is to prove local well-posedness assuming merely Kato-type regularity assumptions [38], under which local well-posedness is also known for (1.1) with vanishing conditions at infinity, see e.g. the monograph [13, Chapter 4]. We complement the local analysis by global results under suitable additional assumptions on the nonlinearity.
+In the paper [11], the authors prove global existence of unique mild solutions to (1.1) with a logarithmic nonlinearity.
+Let us point out that our well-posedness result will also be useful in the study of a class of quantum hydrodynamic (QHD) systems with non-trivial far-field [1], see also [3, 35] for some previous results in this direction. The analysis of the Cauchy problem for QHD systems with non-zero conditions at infinity is pivotal to initiate a rigorous study of some relevant physical phenomena described by quantum fluid models, see for instance [6, 27].
+1.1. Assumptions and Main results. Our main assumptions on the nonlinearity f are the following.
+Assumption 1.1. Let f be a real-valued function satisfying the following Kato-type assumptions, namely
+(K1) f ∈ C([0, ∞)) ∩ C¹((0, ∞)) such that f(1) = 0,
+(K2) the nonlinearity is energy-subcritical, namely there exists α > 0, with α < ∞ for d = 2 and α < 2 for d = 3, such that
+|f(ρ)|, |ρf′(ρ)| ≤ C(1 + ρ^α) for all ρ ≥ 0.
+The assumptions (K1), (K2) are commonly referred to as Kato-type assumptions, see [38, 39] and also [13, Chapter 4].
+For trivial far-field behavior, namely integrable wave functions ψ, these assumptions correspond to the state of the art for H¹-well-posedness with energy-subcritical nonlinearities f, see [13] and references therein for a detailed overview of the theory.
+In order to infer global results, we also require the nonlinearity to be defocusing in the following sense.
+Assumption 1.2. Let f be as in Assumption 1.1. Moreover, assume f′(1) > 0.
+Assuming the nonlinearity f to be defocusing yields that F achieves a local minimum at the constant solution |ψ|² = 1. In nonlinear optics, this assumption is made in the physical literature in order to ensure modulational stability of the constant equilibrium solution, i.e. the continuous-wave background [44, 54]. Due to the non-trivial far-field behavior, inferring global results turns out to be more intricate than in the respective integrable case, which leads to additional assumptions, see Theorem 1.5 and Theorem 1.6 below.
+The energy-subcritical power-type nonlinearities constitute an example of nonlinearities that satisfy Assumption 1.1 but are in general not covered by [21, 22, 52].
+Example 1.3. The energy-subcritical power-type nonlinearities read
+(1.4) f(|ψ|²) = λ(|ψ|^{2α} − 1), with λ = ±1 and α > 0 for d = 2, 0 < α < 2 for d = 3.
+These nonlinearities, included in Assumption 1.1, merely satisfy f ∈ C^{0,α}([0, ∞)). Previous results require λ = +1 and α = 1 [22], f ∈ C³([0, ∞)) [21], or f ∈ C²([0, ∞)) [52]. The corresponding nonlinear potential reads
+F(|ψ|²) = ∫_1^{|ψ|²} f(r) dr = (λ/(α+1)) ( |ψ|^{2(α+1)} − 1 − (α+1)(|ψ|² − 1) ).
+For λ = 1, we note that F : [0, ∞) → R is non-negative, convex, and attains its global minimum at |ψ|² = 1.
+For λ = α = 1, system (1.1) with nonlinearity (1.4) corresponds to the GP equation
+(1.5) i∂_tψ = −(1/2)∆ψ + (|ψ|² − 1)ψ,
+for which the associated Hamiltonian energy H(ψ) becomes the well-known Ginzburg-Landau energy functional
+(1.6) E_GL(ψ) := H(ψ) = ∫_{R^d} (1/2)|∇ψ|² + (1/2)(|ψ|² − 1)² dx.
+Global well-posedness of (1.5) in the energy space has been established in [22]. More precisely, equation (1.5) is studied in [22] in the space of states where the associated Hamiltonian is finite, namely
+(1.7) E_GL = {ψ ∈ L¹_loc(R^d) : H(ψ) < +∞} = {ψ ∈ L¹_loc(R^d) : ∇ψ ∈ L²(R^d), |ψ|² − 1 ∈ L²(R^d)}.
+In the present paper, we define the energy space in the spirit of [62, 63, 16] as
+(1.8) E(R^d) = {ψ ∈ L¹_loc(R^d) : E(ψ) < ∞}, with
+(1.9) E(ψ) = ∫_{R^d} |∇ψ|² + ||ψ| − 1|² dx.
+It is straightforward to see that E ⊂ E_GL. However, as will become clear later, see Lemmata 2.6 and 2.8, the two spaces E and E_GL turn out to be equivalent. Working in E rather than E_GL is more convenient in several respects when dealing with a general class of nonlinearities f satisfying Assumption 1.1.
+Wave functions in E(R^d) may exhibit oscillations at spatial infinity due to the non-vanishing far-field behavior, especially for d = 2. Since ψ ∉ L^p(R^d) for any finite p ≥ 1, the mass is infinite. As its properties are central to the well-posedness theory, a detailed analysis of E(R^d) is provided in Section 2. At this stage, we only mention that E(R^d) ⊂ {H(ψ) < +∞} and that E(R^d) ⊂ X¹(R^d) + H¹(R^d), where X¹ denotes the Zhidkov space [61, 63] defined by
+(1.10) X¹(R^d) = {ψ ∈ L^∞(R^d) : ∇ψ ∈ L²(R^d)}.
+While E is not a vector space, we notice that
+(1.11) d_E(ψ₁, ψ₂) = ∥ψ₁ − ψ₂∥_{X¹+H¹} + ∥|ψ₁| − |ψ₂|∥_{L²}
+defines a metric on E and (E, d_E) is a complete metric space. We recall that for a sum of Banach spaces, the norm is defined by
+∥ψ∥_{X¹+H¹} = inf { ∥ψ₁∥_{X¹} + ∥ψ₂∥_{H¹} : ψ = ψ₁ + ψ₂ }.
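As a consistency check (our computation, directly from the definition of F in (1.3)), the GP case λ = α = 1 of (1.4) indeed produces the Ginzburg-Landau density in (1.6):

```latex
F(\rho) = \int_1^{\rho} (r - 1)\, dr = \tfrac{1}{2}(\rho - 1)^2,
\qquad\Longrightarrow\qquad
H(\psi) = \int_{\mathbb{R}^d} \tfrac{1}{2}|\nabla \psi|^2 + \tfrac{1}{2}\big(|\psi|^2 - 1\big)^2\, dx = E_{GL}(\psi).
```

The same computation also verifies the Kato-type bound (K2) for this nonlinearity: |f(ρ)| = |ρ − 1| ≤ 1 + ρ and ρ|f′(ρ)| = ρ ≤ 1 + ρ.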
+Our first main result establishes local well-posedness for (1.1) in the energy space E. It suffices to consider positive existence times; local existence for negative times follows from the time-reversal symmetry of (1.1).
+Theorem 1.4. Let d = 2, 3, and let f satisfy Assumption 1.1. Then (1.1) is locally well-posed in the energy space E(R^d). More precisely,
+(1) for any ψ₀ ∈ E(R^d), there exists a maximal time of existence T* > 0 and a unique solution ψ ∈ C([0, T*); E(R^d)) with initial data ψ(0) = ψ₀. The blow-up alternative holds: either T* = ∞ or
+(1.12) lim_{t↗T*} E(ψ)(t) = +∞;
+(2) ψ − ψ₀ ∈ C([0, T*); H¹(R^d));
+(3) the solution depends continuously on the initial data with respect to the topology induced by the metric d_E;
+(4) it holds H(ψ)(t) = H(ψ₀) for all t ∈ [0, T*);
+(5) if in addition ∆ψ₀ ∈ L²(R^d), then ∆ψ ∈ C([0, T*); L²(R^d)).
+Note that (2) of Theorem 1.4 states that ψ and ψ₀ share the same far-field behavior, i.e. belong to the same connected component of E(R^d) for all t ∈ [0, T*), see Remarks 2.3 and 2.4. Moreover, it can be shown that the nonlinear flow ψ − e^{(i/2)t∆}ψ₀ belongs to the full range of Strichartz spaces, see Propositions 3.2 and 4.1 for d = 2, 3 respectively. The precise notion of continuous dependence on the initial data is given in Propositions 3.2 and 4.1. The topological structure of the metric space (E(R^d), d_E) differs between d = 2 and d = 3, see [22, 23]. For d = 3, the energy space E(R³) has an affine structure: if ψ ∈ E(R³), then ψ = c + v for some c ∈ S¹ and v ∈ Ḣ¹(R³). For d = 2, unbounded phase oscillations may occur at spatial infinity that rule out characterizing the connected components by a constant c ∈ S¹. The space (E(R²), d_E) is not separable. Given its relevance for the well-posedness theory, this question is addressed in detail in Section 2. In particular, one may introduce a weaker topology that restores separability and connectedness.
+Note that this affine structure of the energy space is also available in higher dimensions d ≥ 4, to which our approach adapts. As E(R) ⊂ X¹(R), the local well-posedness theory simplifies for d = 1. Previous results [20, 21, 24] do not cover the full generality of Assumption 1.1; we expect our approach to extend to d = 1.
+Assumption 1.1 is not sufficient to prove that the solution map is Lipschitz continuous. This is analogous to the case of NLS equations (1.1) with vanishing far-field behavior. Indeed, for instance for power-law type nonlinearities (1.4), Lipschitz continuity of the solution map can only be expected if α ≥ 1/2, for both vanishing and non-vanishing far-field, see [13, Remark 4.4.5] and Section 5, respectively. Note that while in the former case continuity is intended with respect to the H¹-topology, in the latter it is stated with respect to the topology on E induced by the metric d_E. We identify suitable additional assumptions that allow us to prove Lipschitz continuity of the solution map, see Theorem 1.7.
+The conservation of the Hamiltonian energy H turns out to be insufficient to show global well-posedness. Two main difficulties occur. First, we may not rely on the conservation of mass, which is infinite; no suitable notion of a conserved "renormalized" mass seems to be available. Second, the Hamiltonian H is not sign-definite. In the case of trivial far-field, one relies on the conservation of mass and of the Hamiltonian energy, provided the latter has a sign, to infer global existence. For sign-indefinite Hamiltonian energies, also the respective H¹-theory for (1.1) fails in general to provide global existence results without further assumptions; blow-up occurs for instance for certain focusing nonlinearities, see e.g. [13].
+In the framework of a non-trivial far-field without further assumptions on f, we lack both conservation of mass and a sign-definite Hamiltonian energy.
+A sufficient condition allowing for a control of E(ψ) in terms of H(ψ) consists in assuming the nonlinear potential F to be non-negative.
+Theorem 1.5. Let d = 2, 3. Let f satisfy Assumption 1.2 and let the nonlinear potential F defined in (1.3) be non-negative, i.e. F ≥ 0. Then (1.1) is globally well-posed in the energy space E.
+While identifying the optimal assumptions on f allowing for a global result goes beyond the scope of this work, we refer the reader to Section 2.2 for a discussion of possible generalizations. Note that the pure power-type nonlinearities (1.4) (with λ = 1) satisfy F ≥ 0.
+Furthermore, we provide a global well-posedness result for d = 3 and a class of competing (focusing-defocusing) nonlinearities f for which the nonlinear potential fails to be non-negative. Such models are of physical relevance, for instance in nonlinear optics, when self-focusing phenomena in a defocusing background are considered [5, 54].
+Theorem 1.6. Let d = 3, let f satisfy Assumption 1.2 and further be of the form
+f(r) = a(r^{α₁} − 1) + g(r)
+with a > 0, 0 < α₁ < 2, and where g ∈ C⁰([0, ∞)) ∩ C¹((0, ∞)) is such that
+|g(ρ)|, |ρg′(ρ)| ≤ C(1 + ρ^{α₂}) for all ρ ≥ 0,
+with 0 ≤ α₂ < α₁. In addition, let F be such that F(ρ) > 0 for all ρ > 1. Then (1.1) is globally well-posed in the energy space E(R³).
+The assumption on the roots of F allows for physically relevant nonlinearities to be studied. It appears from the physics literature [5, 45, 54] that in relevant applications the largest root of F corresponds to the far-field behavior ρ₀ = 1 and constitutes a local minimum of F, which is linked to the modulational stability of the continuous background wave [44, 54]. To obtain global existence, we rely on the aforementioned affine structure of the energy space E(R³) and consider the quantity M(ψ) = H(ψ) + C₀∥|ψ|² − 1∥²_{L²}, which we show to satisfy an exponential bound in time.
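A heuristic for why F ≥ 0 together with f′(1) > 0 makes H coercive (our sketch, not the precise argument carried out in the paper): Taylor expanding F at its minimum ρ = 1 and using f(1) = 0 gives

```latex
F(\rho) = \tfrac{1}{2} f'(1)(\rho - 1)^2 + o\big((\rho - 1)^2\big) \qquad (\rho \to 1),
```

so near the far-field value the potential term in H is comparable to the term ∥|ψ| − 1∥²_{L²} appearing in E(ψ), while the kinetic term is exactly (1/2)∥∇ψ∥²_{L²}. Since F ≥ 0 everywhere, no negative part of the potential can absorb the kinetic energy, and conservation of H then yields the a priori bound on E(ψ) that rules out the blow-up alternative (1.12).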
+Since for d = 2 rapid and unbounded phase oscillations at spatial infinity may occur, the problem of global existence for (1.1) with defocusing nonlinearities of low regularity and non-sign-definite Hamiltonian energies appears to be much more intricate. To the best of our knowledge, the problem of global existence for (1.1) with (1.2) and focusing nonlinearities remains open.
+We complement our analysis by identifying a sufficient condition on f for proving Lipschitz continuity of the solution map.
+Theorem 1.7. Let d = 2, 3 and let f satisfy Assumption 1.1. If in addition
+(1.13) f ∈ C¹([0, ∞)) ∩ C²((0, ∞)), |√ρ f′(ρ)|, |ρ^{3/2} f″(ρ)| ≤ C(1 + ρ^{max{0, α−1/2}}),
+then the solution map is Lipschitz continuous on bounded sets of E(R^d). Namely, for any r, R > 0 and ψ*₀ ∈ E(R^d) such that E(ψ*₀) ≤ R, let O_r := {ψ₀ ∈ E(R^d) : d(ψ₀, ψ*₀) ≤ r}. Then there exists T*(O_r) > 0 such that ψ ∈ C([0, T*); E(R^d)) for all initial data ψ(0) = ψ₀ ∈ O_r. Moreover, for any 0 < T < T*(O_r), there exists C > 0 such that for any ψ¹, ψ² ∈ C([0, T]; E(R^d)) with initial data ψ¹₀, ψ²₀ ∈ O_r it holds
+(1.14) sup_{t∈[0,T]} d_E(ψ¹(t), ψ²(t)) ≤ C d_E(ψ¹₀, ψ²₀).
+Provided that the solutions are global, the Lipschitz continuity holds for arbitrary times, see Corollary 5.1.
+The main steps of our approach are briefly sketched. First, we identify the suitable mathematical setting for our analysis, namely the energy space E, see (1.8). We crucially rely on the fact that (E, d_E) is a complete metric space, as well as on the properties of the free propagator established in [22, 23]. The Hamiltonian H is well-defined for functions in E. While wave functions in d = 3 can be decomposed as ψ = c + v with |c| = 1, c ∈ C and v ∈ Ḣ¹(R³), for d = 2 the wave functions may exhibit unbounded oscillations of the phase at spatial infinity.
+This motivates treating the well-posedness problem separately for d = 2 and d = 3. In both cases, we show local existence of a solution in the affine space ψ = ψ₀ + H¹(R^d) by a perturbative Kato-type argument, see [38] and also [13, Chapter 4]. Subsequently, uniqueness in C([0, T]; E(R^d)) is proven. The fixed-point argument only provides continuous dependence with respect to perturbations in the space ψ₀ + H¹(R^d). The proof of continuous dependence on the initial data with respect to the topology induced by the metric d_E requires additional estimates and differs in a substantial way from the H¹-well-posedness theory for NLS equations with vanishing conditions at infinity. This is due to the non-integrability of the wave functions and the intricate topological structure of the energy space, linked to the far-field behavior, including oscillations of the phase, as well as to the low regularity of the nonlinearity. Global well-posedness is shown relying on the conservation of the Hamiltonian H.
+While our method for the 3D theory exploits the particular structure of the energy space, the approach used for d = 2 can easily be adapted to sub-cubic nonlinearities for d = 3. However, for super-cubic nonlinearities, we exploit the affine structure of E(R³): it is then no longer sufficient to work in L²-based spaces as done for d = 2, but we need the gradient of the solution to belong to the full range of Strichartz spaces.
+Our approach enables us to weaken the regularity assumptions compared to previous papers. In [21, 52], the authors rely on a decomposition of the initial data as ψ = ϕ + H¹ with ϕ ∈ C^∞_b and develop a well-posedness theory in the affine space ϕ + H¹. This approach requires additional regularity assumptions on f that are not needed for our approach.
+As will become clear from the proofs, our method adapts to prove well-posedness for energy-subcritical nonlinearities for d ≥ 4.
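The perturbative Kato-type construction above is run on the Duhamel formulation of (1.1); schematically (standard, and consistent with the sign convention of (1.1)),

```latex
\psi(t) = e^{\frac{i}{2} t \Delta}\,\psi_0 \;-\; i \int_0^t e^{\frac{i}{2}(t-s)\Delta}\, f\big(|\psi(s)|^2\big)\,\psi(s)\, ds,
```

and one contracts the map given by the right-hand side, applied to the difference u = ψ − ψ₀, in C([0, T]; H¹(R^d)) intersected with suitable Strichartz spaces, the growth bound (K2) controlling the nonlinear term.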
For the energy-critical quintic equation, one may proceed as described in Section 1.2.

P. ANTONELLI, L.E. HIENTZSCH, AND P. MARCATI

We conclude this section by providing further examples of physical relevance that enter the class of nonlinearities characterised by Assumption 1.1.

Example 1.8. Beyond the mentioned power-type nonlinearities, the following are examples of physically relevant nonlinearities with far-field (1.2):

(1) competing nonlinearities $f(\rho) = a\rho^{\alpha_1} - b\rho^{\alpha_2} + c$ with $a, b, c > 0$ and $\alpha_1 \ge \alpha_2 \ge 0$, which arise in the description of self-focusing phenomena in defocusing media [48, 45, 54], see also [57, 63],

(2) saturated nonlinearities $f(\rho) = \frac{\rho}{1+\gamma\rho} - \frac{1}{1+\gamma}$ with $\gamma > 0$, see for instance [57, Chapter 9.3] and references therein,

(3) exponential nonlinearities $f(\rho) = e^{-\gamma} - e^{-\gamma\rho}$ with $\gamma > 0$ [57, Chapter 9.3],

(4) transiting nonlinearities of the form $f(\rho) = 2\rho\big(1 + \alpha\tanh(\gamma(\rho^2 - 1))\big)$ occurring in nonlinear optics [54, Section VI],

(5) logarithmic nonlinearities of type $f(\rho) = \rho\log(\rho)$, which arise in the context of dilute quantum gases, see [12] and references therein,

(6) the nonlinearity $f(\rho) = \rho^{-1}(\rho - 1)$, which arises in the study of 1D NLS-type equations as a model for nearly parallel vortex filaments, see [46] and [4, Eq. (1.5)].

The cubic-quintic equation (1.16) falls within (1) of the aforementioned list and is also recovered in the small-amplitude approximation of (2) and (3) of the above examples [57, Chapter 9.3].

1.2. The energy-critical equation. We briefly discuss the Cauchy problem for the energy-critical equation for d = 3, namely the quintic equation

(1.15) $i\partial_t\psi = -\tfrac12\Delta\psi + (|\psi|^4 - 1)\psi.$

The well-posedness of (1.15) is not addressed by Theorem 1.4. Local well-posedness for small data is established in [21, Theorem 1.3]. Furthermore, note that the cubic-quintic equation

(1.16) $i\partial_t\psi = -\tfrac12\Delta\psi + \big(\alpha_5|\psi|^4 - \alpha_3|\psi|^2 + \alpha_1\big)\psi$
+with α1, α3, α5 > 0, α2 +3 − 4α1α5 > 0 and far-field (1.2) is known to be globally well- +posed in the respective energy space due to [43]. The cubic-quintic nonlinearity +considered satisfies Assumption 1.2 and is such that F(1) = 0 and F(ρ) > 0 for all +ρ > 1. The authors rely on the affine structure of the respective energy space for +d = 3, the perturbative approach introduced in [58, 59] and the well-posedness of +the energy-critical nonlinear Schr¨odinger equation with trivial far-field [17]. This +approach can be adapted to show global well-posedness of (1.15). More precisely, +it is straightforward to update the perturbative argument, see [43, Eq. (1.14) and +(1.15)] to the respective problem for (1.15), see also (4.3). +1.3. Outline of the paper. +The remaining part of the paper is structured as +follows. Section 2 provides preliminary results on the energy space E, its structure +and the action of the Schr¨odinger group on E. Useful estimates for the nonlinearity +are collected. Section 3 introduces first local and second global well-posedness for +d = 2. More precisely, Theorem 1.4 and Theorem 1.5 are proven for d = 2. In +Section 4, we provide the respective proofs for d = 3. Further, Theorem 1.6 is + +WELL-POSEDNESS FOR NLS WITH NON-VANISHING CONDITIONS AT INFINITY +9 +proven. Finally, Section 5 is devoted to the proof of Theorem 1.7 and Corollary +5.1. +1.4. Notations. +We fix some notations. We denote by Ld the d-dimensional +Lebesgue measure. The usual Lebesgue spaces are denoted by Lp(Ω) for Ω ⊂ Rd +and Lebesgue exponent p ∈ [1, ∞]. Sobolev spaces are denoted by Hs(Rd) with +norm ∥f∥Hs(Rd) = ∥ ⟨ξ⟩s ˆf∥L2, where ˆf denotes the Fourier transform. For k ∈ +Z and r ∈ [1, ∞], we denote W k,r for the Sobolev space with norm ∥f∥W k,r = +� +|α|≤k ∥Dαf∥Lr(Rd). Mixed space-time Lebesgue or Sobolev spaces are indicated +by Lp(I; W k,r(Rd)). +To shorten notations, we write Lp +t W k,r +x +when there is no +ambiguity. 
Further, $C(I; H^s(\mathbb{R}^d))$ and $C(I; E(\mathbb{R}^d))$ denote the spaces of continuous $H^s$- and $E$-valued functions, respectively. Finally, $C > 0$ denotes any absolute constant.

2. The energy space and the linear propagator

In the present paper, we define the energy space E as in (1.8), see also [16, Section 2]. For the GP equation (1.5), being the prototype for (1.1) with non-vanishing far-field, the energy space considered in [22, 23] consists of the set of wave-functions of finite Ginzburg-Landau energy $E_{GL}(\psi)$; the definition (1.8) is more convenient when dealing with general nonlinearities f. In general, $E \subset \{H(\psi) < +\infty\}$, while the converse inclusion only holds under further assumptions on f. The energy space $(E, d_E)$, endowed with the metric (1.11), can be shown to be a complete metric space and can be thought of as the analogue of $H^1$ for NLS equations with trivial far-field. However, E is not a vector space, and wave-functions $\psi \in E(\mathbb{R}^d)$ may exhibit oscillations at spatial infinity, in particular in low dimensions. A suitable characterisation of the energy space and of the action of the Schrödinger semigroup on E is essential for the subsequent well-posedness theory. Although many of the facts proven here can be found in the literature [22, 23, 16], we provide a self-contained characterisation of the energy space E.

We start by proving that any $\psi \in E(\mathbb{R}^d)$ can be decomposed as the sum of an $X^1$-function and an $H^1$-function, where the Zhidkov space $X^1(\mathbb{R}^d)$ is defined in (1.10). Following [22, Lemma 1], let $\chi \in C^\infty_c(\mathbb{C}, \mathbb{R})$ be a smooth cut-off function such that

(2.1) $\chi(z) = 1$ for $|z| \le 2, \qquad \chi(z) \le 1$ for $z \in \mathbb{C}, \qquad \mathrm{supp}(\chi) \subset B_3(0).$

In particular, given a wave-function $\psi : \mathbb{R}^d \to \mathbb{C}$, we introduce

(2.2) $\psi_\infty := \chi(\psi)\psi, \qquad \psi_q := (1 - \chi(\psi))\psi,$

for which we have the following bounds.

Lemma 2.1. The energy space $(E(\mathbb{R}^d), d_E)$, with $d_E$ defined by (1.11), is a complete metric space and is embedded in $X^1(\mathbb{R}^d) + H^1(\mathbb{R}^d)$. In particular, for any $\psi \in E$ one has

$\|\psi_\infty\|_{X^1(\mathbb{R}^d)} \le C\big(1 + \sqrt{E(\psi)}\big), \qquad \|\psi_q\|_{H^1(\mathbb{R}^d)} \le C\sqrt{E(\psi)}.$
+Moreover, the energy space is stable under H1 perturbations, in the sense that +E(Rd) + H1(Rd) ⊂ E(Rd) with +(2.3) +E(ψ + u) ≤ 2E(ψ) + 2∥u∥2 +H1(Rd). +For d = 1, one has E(R) ⊂ X1(R) due to Sobolev embedding. + +10 +P. ANTONELLI, L.E. HIENTZSCH, AND P. MARCATI +Proof. Given the decomposition (2.2), we show that ψ∞ ∈ X1(Rd). +As ψ∞ ∈ +L∞(Rd) it suffices to check that +∥∇ψ∞∥L2(Rd) = ∥χ(ψ)∇ψ + ψχ′(ψ)∇ψ∥L2(Rd) ≤ C∥∇ψ∥L2(Rd). +The bound ψq ∈ L2(Rd) follows from the pointwise inequality |ψq| ≤ C ||ψq| − 1| +valid on the support of 1 − χ(ψ) and +∥∇ψq∥L2(Rd) ≤ C∥∇ψ∥L2(Rd). +To prove (2.3), it suffices to observe that if ψ ∈ E(Rd) and u ∈ H1(Rd), then +∥∇(ψ + u)∥2 +L2(Rd) ≤ 2∥∇ψ∥2 +L2(Rd) + 2∥∇u∥2 +L2(Rd), +∥|ψ + u| − 1∥2 +L2(Rd) ≤ 2∥|ψ| − 1∥2 +L2(Rd) + 2∥u∥2 +L2(Rd) +by means of Minkowski’s inequality. It remains to prove that (E, dE) is a complete +metric space. One readily verifies that dE defines a distance function on E(Rd). To +check that (E, dE) is complete, let {ψn}n ⊂ E be a Cauchy sequence w.r.t to dE. +Then, there exists ψ ∈ X1 + H1 such that ψn → ψ strongly in X1 + H1. By lower +semi-continuity of norms and (1.9) it follows that ψ ∈ E. +□ +2.1. The structure of the energy space depending on the dimension. +The +structure of the energy space E(Rd) is sensitive to the dimension d. To illustrate +this, we recall the following fact. Let φ ∈ D′(Rd), if ∇φ ∈ Lp(Rd) for some p < d, +then there exists c ∈ C such that φ−c ∈ Lp∗(Rd), where p∗ = +dp +d−p, see for instance +[37, Theorem 4.5.9]. Hence, if ψ ∈ E(R3), then ψ admits a decomposition ψ = c+v +where c ∈ C with |c| = 1 and v ∈ ˙H1(R3), where +(2.4) +˙H1(R3) = {v ∈ L6(R3) : ∇v ∈ L2(R3)}, +denotes the completion of C∞ +0 (R3) with respect to the L2 norm of the gradient. +This observation allows for a equivalent definition of E(R3). As in [22, Section 4], +we introduce +(2.5) +Fc = +� +v ∈ ˙H1(R3) : |v|2 + 2 Re(c−1v) ∈ L2(R3) +� +. 
+One readily checks that +˜δ(u, v) = ∥∇u − ∇v∥L2(R3) + ∥|u|2 + 2 Re(c−1u) − 2 Re(c−1v) − |v|2∥L2(R3) +defines a distance function on Fc. One has the following characterisation given by +[22, Proposition 4.1]. +Proposition 2.2 ([22]). For d = 3, the energy space E(R3) can be identified with +the set of functions +(2.6) +E(R3) = {ψ = c + v, c ∈ C, |c| = 1, v ∈ Fc} . +Moreover the metric function dE is equivalent to +(2.7) +δ(c + v, ˜c + ˜v) += |c − ˜c| + ∥∇v − ∇˜v∥L2(R3) + +��|v|2 + 2 Re(c−1v) − |˜v|2 − 2 Re(˜c−1˜v) +�� +L2(R3) . +In [22], the Proposition is stated for (EGL, dEGL). We prove below, see Lemma +2.6, that the two metric spaces can be identified and the equivalence of the metrics. + +WELL-POSEDNESS FOR NLS WITH NON-VANISHING CONDITIONS AT INFINITY +11 +Remark 2.3. We observe that the connected components of E(R3) are given by +c + Fc(R3) for c ∈ C with |c| = 1. The energy space E(R3) is an affine space and +the far-field behavior is determined by c corresponding to a phase shift. The affine +structure of the energy space allows for an alternative approach to solve the Cauchy +Problem for d = 3, as observed in [22, Remark 4.5] for (1.5) and exploited in [43] +for cubic-quintic nonlinearities and far-field behavior (1.2). +Remark 2.4. The 2D energy space E(R2) lacks an affine structure due to non- +trivial oscillations at spatial infinity. Indeed, unbounded phase oscillations at spatial +infinity may occur, e.g. ψ(x) = ei(2+log |x|)β with β < 1 +2 is such that ψ ∈ E(R2), +see [22, Remark 4.2]. Moreover, the metric space (E(R2), dE) is not separable. We +refer to Remark 2.7 for a detailed discussion and a weakened topology for which +E(R2) is connected and separable. +2.2. The Hamiltonian for wave-functions in the energy space. +We observe +that if ψ ∈ E(Rd), then it follows from the Chebychev inequality that +(2.8) +Ld({||ψ| − 1| > δ} ≤ 1 +δ2 ∥|ψ| − 1∥2 +L2(Rd), +where Ld denotes the d-dimensional Lebesgue measure. 
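Both the Chebyshev bound (2.8) and the oscillating example from Remark 2.4 admit short verifications, which we record here (our own computation, using only the definitions above):

```latex
% Chebyshev: integrate ||\psi|-1|^2 over its own superlevel set.
\mathcal{L}^d\big(\{||\psi|-1| > \delta\}\big)
  \le \frac{1}{\delta^2}\int_{\{||\psi|-1|>\delta\}} \big||\psi|-1\big|^2\,dx
  \le \frac{1}{\delta^2}\,\big\||\psi|-1\big\|_{L^2(\mathbb{R}^d)}^2.
\]
% Remark 2.4: for \psi(x)=e^{i(2+\log|x|)^\beta} on |x|\ge 1 one has |\psi|\equiv 1 and
\[
|\nabla\psi(x)| = \beta\,(2+\log|x|)^{\beta-1}\,|x|^{-1},
\qquad
\int_{|x|\ge 1}|\nabla\psi|^2\,dx
  = 2\pi\beta^2\int_1^\infty (2+\log r)^{2\beta-2}\,\frac{dr}{r}
  = 2\pi\beta^2\int_2^\infty s^{2\beta-2}\,ds,
% (substituting s = 2+\log r), which is finite if and only if 2\beta-2<-1, i.e. \beta<\tfrac12.
```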
+Consequently, if η ∈ +C∞ +c ([0, ∞)) with supp(η) ⊂ [ 1 +2, 3 +2] such that +(2.9) +1[ 3 +4 , 5 +4 ](r) ≤ η(r) ≤ 1[ 1 +2 , 3 +2 ](r), +then for all ψ ∈ E(Rd) the support of (1 − η(|ψ|)) is of finite Lebesgue measure +(2.10) +Ld(supp(1 − η(|ψ|))) ≤ 1 +4E(ψ). +The following inequality turns out to be handy for applications in the sequel. For +any q ∈ [1, ∞) there exists Cq > 0 such that for all φ ∈ L1 +loc(R2) with L2(supp(φ)) < ++∞ and ∇φ ∈ L2(R2) it holds +(2.11) +∥φ∥Lq(R2) ≤ Cq∥∇φ∥L2(R2) +� +L2(supp(φ) +� 1 +q , +see for instance [16, Proof of Lemma 2.1]. +For ψ ∈ E(R2), applying (2.11) to +φ = ψ(1 − η(|ψ|)) yields ψ(1 − η(|ψ|)) ∈ Lq(R2) for any q ∈ [1, ∞). Indeed, it +suffices to check that +∇ (ψ(1 − η(|ψ|))) = (1 − η(ψ))∇ψ − η′(ψ)ψ∇|ψ| ∈ L2(R2) +since (1 − η(ψ)) ∈ L∞(R2), ψη′(ψ) ∈ L∞(R2) as well as |∇|ψ|| ≤ |∇ψ| a.e. on R2. +Under Assumption 1.1, the functional H(ψ), introduced in (1.9), is bounded for +all ψ ∈ E(Rd). +Lemma 2.5. For d = 2, 3 and f satisfying Assumption 1.1 one has +E(Rd) ⊂ {ψ : |H(ψ)| < +∞} . +Proof. In view of (K1) Assumption 1.1, it suffices to use a Taylor expansion of F +in a small neighborhood O of 1 to show that there exist C, C′ > 0 such that +F(|ψ|2) ≤ C′(|ψ|2 − 1)2 ≤ C(|ψ| − 1)2, +for all x ∈ Rd such that |ψ|2 ∈ O. Let δ > 0 such that B(1, δ) ⊂ O and ηδ(r) := +η( r +δ ) with η as in (2.9) and ψ ∈ E(Rd), then +� +Rd F(|ψ|2)dx = +� +Rd F(|ψ|2)ηδ(|ψ|)dx + +� +Rd F(|ψ|2)(1 − ηδ(|ψ|))dx + +12 +P. ANTONELLI, L.E. HIENTZSCH, AND P. MARCATI +≤ C +� +Rd ||ψ| − 1|2 dx + C +� +Rd +� +1 + |ψ|2α� ��|ψ|2 − 1 +�� (1 − ηδ(|ψ|))dx, +where we used (K2) Assumption 1.1 in the last inequality. To control the second +term, we consider separately the cases d = 2, 3. 
For d = 3, Proposition 2.2 yields +that there exists c ∈ C with |c| = 1 and v ∈ Fc(R3) such that ψ = c + v and +� +R3 +� +1 + |ψ|2α� ��|ψ|2 − 1 +�� (1 − ηδ(|ψ|))dx +≤ C +� +Rd(1 − ηδ(|ψ|))χ(ψ)dx + +� +R3 |c + v|2(α+1)(1 − χ(ψ))dx +≤ CE(ψ) + ∥v∥2(1+α) +L6 +E(ψ) +2−α +3 +≤ C +� +E(ψ) + E(ψ) +5+2α +3 +� +, +where we used (2.10) in the second last inequality and that 0 < α < 2 for d = 3. +For d = 2, one has that +� +R2 +� +1 + |ψ|2α� ��|ψ|2 − 1 +�� (1 − ηδ(|ψ|))dx +≤ C +� +Rd(1 − ηδ(|ψ|))χ(ψ)dx + +� +Rd +� +1 + |ψ|2(α+1)� +(1 − χ(ψ))dx +The first integral is bounded by CE(ψ) and for the second it follows from (2.11) +that +∥ψ(1 − χ(|ψ|))∥2(α+1) +L2(α+1)(R2) ≤ E(ψ)1+αL2(supp(ψ(1 − χ(ψ)))) ≤ E(ψ)2+α. +This allows one to bound +� +R2(1 + |ψ|2α) +��|ψ|2 − 1 +�� (1 − ηδ(ψ))dx ≤ E(ψ) + E(ψ)2+α. +□ +Next, we identify suitable conditions on f under which the converse inclusion, +namely {ψ : |H(ψ)| < +∞} ⊂ E(Rd), holds true. First, we treat the particular +case of the Gross-Pitaevskii equation (1.5) for which H(ψ) = EGL(ψ) and thus +EGL(Rd) = {H(ψ) < +∞}, see (1.6) and (1.7) respectively. It has been shown in +[22], see also [23], that (EGL, dEGL) with +(2.12) +dEGL(ψ1, ψ2) = ∥ψ1 − ψ2∥X1+H1 + ∥|ψ1|2 − |ψ2|2∥L2. +is a complete metric space. +It is pointed out in [16, p.13] without proof that +E = EGL with equivalence of the respective metrics. We provide a proof for the +sake of completeness. +Lemma 2.6. Let d ≥ 1, then E(Rd) = EGL(Rd). Moreover, for d = 2, 3 and any +R > 0, there exists C = C(R) > 0 such that for any ψ1, ψ2 with E(ψi) ≤ R for +i = 1, 2 it holds +(2.13) +1 +C dEGL(ψ1, ψ2) ≤ dE(ψ1, ψ2) ≤ CdEGL(ψ1, ψ2). +Moreover, there exists C > 0 such that for ψ1, ψ2 ∈ E(Rd)) and u, v ∈ H1(Rd) it +holds +(2.14) +dE(ψ1 + u, ψ2 + v) + +WELL-POSEDNESS FOR NLS WITH NON-VANISHING CONDITIONS AT INFINITY +13 +≤ C +� +1 + +� +E(ψ1) + +� +E(ψ2) + ∥u∥H1 + ∥v∥H1 +� +(dE(ψ1, ψ2) + ∥u − v∥H1) . +Remark 2.7. 
Lemma 2.6 allows to infer the topological properties of (E, dE) from +the results for (EGL(Rd), dEGL) in [22, 23]. For instance, the functional E measures +the distance to the circle of constants S1 = {ψ ∈ E : E(ψ) = 0} for d = 3 but not +for d = 2. Indeed, it follows from Lemma 2.6 and [22, Proposition 4.3] that there +exists A > 0 such that for every ψ ∈ E(R3), +1 +AdE(ψ, S1)2 ≤ EGL(ψ) ≤ CdE(ψ, S1)2. +If d = 2, there exists a sequence {ψn} in E(R2) such that E(ψn) → 0 but dE(ψn, S1) ≥ +c0 > 0. Note that the complete metric space (EGL(R2), dEGL) lacks an affine struc- +ture and to be separable. In [23] a detailed characterisation of EGL(Rd) including +a manifold structure for EGL(Rd) is provided. +The connected components are +characterised by [23, Theorem 1.8] and [23, Proposition 1.10]. A (strictly) weaker +topology [23, p. 140] induced by the metric +d′ +E(ψ1, ψ2) := ∥ψ1 − ψ2∥L2(B(1,0)) + ∥∇ψ1 − ∇ψ2∥L2(R2) + ∥|ψ1|2 − |ψ2|2∥L2(R2) +is introduced. It follows that (E, d′ +E) is connected. Relying on the decomposition of +elements of E provided by [23, Theorem 1.8], one can show that (E, d′ +E) is separable. +If one only requires continuity of the solution map with respect to this weakened +topology, the proof of Proposition 3.2 can be simplified. This metric has widely +been used in the study of the stability of special solutions for d = 1. We refer to +[47], where the authors introduce new energy spaces for (1.5) and d = 1 in order to +tackle global well-posedness in the energy space at Hs-regularity. +Proof. We start by showing that there exists C > 0 such that +∥|ψ1| − |ψ2|∥L2(Rd) ≤ C +� +∥|ψ1|2 − |ψ|2∥L2(Rd) + ∥∇ψ1 − ∇ψ2∥L2(Rd) +� +. +Indeed, let χ6(z) = χ(6z) with χ defined in (2.1), then +∥|ψ1| − |ψ2|∥L2(Rd) +≤ ∥|ψ1|χ6(ψ1)−|ψ2|χ6(ψ2)∥L2(Rd)+∥|ψ1|(1−χ6(ψ1))−|ψ2|(1−χ6(|ψ2|))∥L2(Rd). +The second contribution can be bounded by +∥|ψ1|(1 − χ6(ψ1)) − |ψ2|(1 − χ6(ψ2))∥L2(Rd) ≤ C∥|ψ1|2 − |ψ2|2∥L2(Rd). 
+Next, we notice that for i = 1, 2, the support of χ6(ψi) is of finite measure as +ψi ∈ E(Rd), see (2.8). For d = 2, by invoking (2.11) applied to φ = |ψ1|χ6(|ψ1|) − +|ψ2|χ6(|ψ2|), we conclude that +∥|ψ1|χ6(|ψ1|) − |ψ2|χ6(|ψ2|)∥L2(R2) +≤ C +�� +E(ψ1) + +� +E(ψ2) +� � +∥ψ1 − ψ2∥X1+H1(R2) + ∥|ψ1|2 − |ψ2|2∥L2(R2) +� +. +For d = 3, one proceeds similarly exploiting the decomposition ψi = ci + vi, vi ∈ +Fc(R3) and Proposition 2.2. It holds +∥|ψ1|χ6(|ψ1|) − |ψ2|χ6(|ψ2|)∥L2(R3) +≤ C +� +1 + +� +E(ψ1) + +� +E(ψ2) +� � +|c1 − c2| + ∥∇v1 − ∇v2∥L2(R3) +� +≤ C(R)dEGL(ψ1, ψ2). + +14 +P. ANTONELLI, L.E. HIENTZSCH, AND P. MARCATI +Next, we show that there exists C = C(R) > 0 such that +∥|ψ1|2 − |ψ2|2∥L2(Rd) ≤ C1 +� +∥|ψ1| − |ψ2|∥L2(Rd) + ∥ψ1 − ψ2∥X1+H1(Rd) +� +. +It suffices to notice that +∥|ψ1|2χ(ψ1) − |ψ2|2χ(ψ2)∥L2(Rd) ≤ C1∥|ψ1| − |ψ2|∥L2(Rd), +while +∥|ψ1|2(1 − χ(ψ1)) − |ψ2|2(1 − χ(ψ2))∥L2(Rd) +≤ C +� +1 + +� +E(ψ1) + +� +E(ψ2) + ∥ψ1,q∥L4(Rd) + ∥ψ2,q∥L4(Rd) +� +∥ψ1,q − ψ2,q∥L4(Rd) +≤ 2C +� +1 + +� +E(ψ1) + +� +E(ψ2) +� +∥ψ1,q − ψ2,q∥L4(Rd). +In the second last inequality, we used that +(2.15) +|ψ|4� +1 − χ(ψ) ≤ C |ψq|4 , +with ψq defined in (2.2) which is only valid provided (1 − χ(ψ)) > θ for some small +θ > 0. However, this is harmless as +L2 ({x ∈ supp(1 − χ(ψ)) : 0 < 1 − χ(ψ) ≤ θ}) ≤ +� +E(ψ) +and |ψ| ≤ 3 on the respective set. The error can be controlled at the expense of a +factor +� +E(ψ) in the estimate. One has that +∥ψ1,q − ψ2,q∥L4(Rd) ≤ C +�� +E(ψ1) + +� +E(ψ2) +� +∥ψ1 − ψ2∥X1+H1(Rd) +by means of (2.11) for d = 2 and the decomposition provided by Proposition 2.2 +for d = 3. Finally, +∥|ψ1|2 − |ψ2|2∥L2(Rd) +≤ C +� +1 + +� +E(ψ1) + +� +E(ψ2) +� � +∥|ψ1| − |ψ2|)∥L2(Rd) + ∥ψ1 − ψ2∥X1+H1(Rd) +� +. +It remains to show (2.14). +The respective property is known for dEGL, see [22, +Lemma 2], and hence follows from the equivalence of metrics. However, we provide +a proof to track constants explicitly. 
Note that +∥|ψ1+u|−|ψ2+v|∥L2 ≤ ∥|ψ1+u|χ6(ψ1+u)−|ψ2+v|χ6(ψ2+v)∥L2+∥|ψ1+u|2−|ψ2+v|2∥L2, +by arguing as in the first part of the proof. By invoking (2.11), one has +∥|ψ1 + u|χ6(ψ1 + u) − |ψ2 + v|χ6(ψ2 + v)∥L2 +≤ C +�� +E(ψ1) + +� +E(ψ2) + ∥u∥H1 + ∥v∥H1 +� � +∥ψ1 − ψ2∥X1+H1(Rd) + ∥u − v∥H1� +. +For the second term, one has +∥|ψ1 + u|2 − |ψ2 + v|2∥L2 +≤ ∥|ψ1|2 − |ψ2|2∥L2 + ∥|u|2 − |v|2∥L2 + ∥2 Re(ψ1u) − 2 Re(ψ2v)∥L2 +≤ ∥|ψ1|2 − |ψ2|2∥L2 + (∥u∥H1 + ∥v∥H1) ∥u − v∥H1 ++ 2∥ Re +� +(ψ1,∞ + ψ1,q)(u − v) +� +∥L2 + 2∥ Re +�� +ψ1,q − ψ2,q + ψ1,∞ − ψ2,∞ +� +v +� +∥L2 +≤ ∥|ψ1|2 − |ψ2|2∥L2 + (∥u∥H1 + ∥v∥H1 + 1 + E(ψ1)) ∥u − v∥H1 + ∥v∥H1dE(ψ1, ψ2) +≤ C +� +1 + +� +E(ψ1 + +� +E(ψ2) + ∥u∥H1 + ∥v∥H1 +� +(dE(ψ1, ψ2) + ∥u − v∥H1) . + +WELL-POSEDNESS FOR NLS WITH NON-VANISHING CONDITIONS AT INFINITY +15 +□ +Next, we provide a sufficient condition on f under which the space of functions +with finite Hamiltonian energy is included in E. To that end, we require Assumption +1.2 to be satisfied. From F(1) = F ′(1) = f(1) = 0 and Taylor expansion it follows +(2.16) +F(r) ≃ 1 +2f ′(1)(r − 1)2 +in a small neighborhood of 1. +Hence, there exists δ > 0 such that for all r ∈ +(1 − δ, 1 + δ) there exists C1, C2 > 0 such that +(2.17) +1 +C2 +(|ψ| − 1)2 ≤ 1 +C1 +(|ψ|2 − 1)2 ≤ F(|ψ|2) ≤ C1(|ψ|2 − 1)2 ≤ C2(|ψ| − 1)2 +provided that ||ψ|2 − 1| < δ. +The nonlinear potential F is locally convex in a +neighborhood of 1. It was shown in [16, Lemma 4.8] that requiring in addition that +the nonlinear potential is non-negative, namely F ≥ 0 and hence the Hamiltonian +energy is sign-definite, implies that E = {H(ψ) < ∞}. Note that the condition +F ≥ 0 is for instance satisfied for the pure power-type nonlinearities in (1.4). +Lemma 2.8. Let d = 2, 3 and Assumptions 1.2 be satisfied. If in addition F ≥ 0, +then +E = {H(ψ) < ∞}. +In particular, there exists an increasing function g : (0, ∞) → [0, ∞) with lim +r→0 g(r) = +0 such that +(2.18) +E(ψ) ≤ g (H(ψ)) . 
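The two-sided comparison (2.17) behind Lemma 2.8 follows from a second-order Taylor expansion; we record the short computation (our own, using only $F' = f$, Assumption 1.2, and the assumption $f'(1) > 0$, which is implicit in the two-sided bound):

```latex
% Taylor expansion of F at r=1: F(1)=F'(1)=f(1)=0 and F''=f', hence
F(r) = F(1) + F'(1)(r-1) + \tfrac12 F''(1)(r-1)^2 + o\big((r-1)^2\big)
     = \tfrac12 f'(1)(r-1)^2 + o\big((r-1)^2\big).
\]
% Moreover, near |\psi|=1,
\[
\big(|\psi|^2-1\big)^2 = \big(|\psi|-1\big)^2\big(|\psi|+1\big)^2,
\qquad 1 \le (|\psi|+1)^2 \le (2+\delta)^2,
% so (|\psi|-1)^2, (|\psi|^2-1)^2 and F(|\psi|^2) are pairwise comparable
% on the set {||\psi|^2-1|<\delta}, which is precisely (2.17).
```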
+By exploiting Lemma 2.6 and the conservation of the Hamiltonian along solutions +to (1.1), it is then possible to extend the local solutions globally in time. Notice +that when in the framework of NLS equations with trivial far-field, the blow-up +alternative is given in terms of the H1-norm, whereas here it involves E(ψ). In the +classical, integrable case, it is possible to infer the analogue of (2.18) under less +restrictive assumptions on F; for instance it is possible to consider mass-subcritical +focusing nonlinearities. In this case indeed the analogue of (2.18) is derived by +exploiting Gagliardo-Nirenberg inequalities. However, the lack of a suitable control +of the mass in our case prevents us from considering more general nonlinearities. +Proof. We sketch of the proof, see [16] for full details. First, we borrow from [16, +Equation (1.18)] the following equivalent definition of EGL(Rd) = E(Rd). +Let +ϕ ∈ C∞(R) be such that ϕ(r) = r for r ∈ [0, 2], 0 ≤ ϕ′ ≤ 1 on R and ϕ(r) = 3 for +r ≥ 4. We define the modified Ginzburg-Landau energy +EmGL(ψ) = +� +Rd |∇ψ|2 + 1 +2 +� +ϕ(|ψ|)2 − 1 +�2 dx. +The functional EGL is well-approximated by EmGL. +Indeed, it is shown in [16, +Section 2] that +EGL(Rd) = {ψ ∈ L1 +loc(Rd) : ∇ψ ∈ L2(Rd), ϕ(|ψ|)2 − 1 ∈ L2(Rd)}. +Since |ϕ(|ψ|)2 − 1| ≤ 4||ψ| − 1|, one has ϕ(|ψ|)2 − 1 ∈ L2(Rd) if ψ ∈ E(Rd). +For the converse, see [16, Lemma 2.1]. +We sketch the main idea. +On the set +where |ψ(x)| ≤ 2, one has ϕ(|ψ|)2 = |ψ|2 and hence the desired bound follows. +Further, Ld({x : ||ψ(x)| − 1| > 3 +2}) < +∞ from the Chebychev inequality (2.8) + +16 +P. ANTONELLI, L.E. HIENTZSCH, AND P. MARCATI +if ϕ(|ψ|)2 − 1 ∈ L2(Rd). By means of (2.11) for d = 2 and Sobolev embedding +for d = 3 one concludes. Finally, there exists C > 0 and an increasing function +m : R+ → R+ with lim +r→0 m(r) = 0 such that +1 +4EmGL(ψ) ≤ E(ψ) ≤ Cm (EmGL(ψ)) , +see [16, Corollary 4.3]. 
Second, we note that it suffices to establish inequality (2.18) with E replaced by $E_{mGL}$. In virtue of (2.17), it suffices to consider the region $\{x : ||\psi| - 1| \ge \delta\}$. If $\inf F > 0$ on $\{x : ||\psi| - 1| \ge \delta\}$, then it is clear that

$\int_{\{||\psi|-1|\ge\delta\}} \big(\phi(|\psi|)^2 - 1\big)^2\,dx \le C \int_{\{||\psi|-1|\ge\delta\}} F(|\psi|^2)\,dx.$

It follows that $E(\psi)$ can be controlled in terms of $H(\psi)$. More generally, provided that $F \ge 0$, it follows from [16, Lemma 4.8] that for all $\psi$ with $|H(\psi)| < \infty$ there exist $C_1 = C_1(H(\psi)) > 0$ and $C_2 = C_2(H(\psi)) > 0$ such that

$C_1(H(\psi)) \le E_{mGL}(\psi) \le C_2(H(\psi)).$

The statement of Lemma 2.8 follows. □

Remark 2.9. System (1.1) is closely related to the QHD system with non-trivial far-field. In a related analysis, the regularity and integrability properties of its unknowns $(\rho, J)$, corresponding to the mass density $\rho = |\psi|^2$ and the momentum density $J = \mathrm{Im}(\bar\psi\nabla\psi)$, are captured in terms of Orlicz spaces, see [3] and [35, Chapter 2], as well as [2, 36] for the respective uniform bounds for solutions to the quantum Navier-Stokes equations, a viscous regularization of the QHD system.

2.3. Smooth approximation. Elements of the energy space can be approximated by smooth functions via convolution with a smooth mollifier.

Lemma 2.10. Let $\psi \in E(\mathbb{R}^d)$. Then there exists $\{\psi_n\}_{n\in\mathbb{N}} \subset C^\infty(\mathbb{R}^d) \cap E(\mathbb{R}^d)$ such that

$d_E(\psi, \psi_n) \to 0 \quad \text{as } n \to \infty.$

Moreover, for any $\psi \in E(\mathbb{R}^d)$, there exists $\phi \in C^\infty_b(\mathbb{R}^d) \cap E(\mathbb{R}^d)$ such that $\nabla\phi \in H^\infty(\mathbb{R}^d)$ and

(2.19) $\psi - \phi \in H^1(\mathbb{R}^d).$

The first statement is proven in [22, Lemma 6] by considering the convolution with a standard mollification kernel, and the second statement follows from [21, Proposition 1.1]. In [22, 21], the statements are given for $(E_{GL}, d_{E_{GL}})$, which is equivalent to $(E, d_E)$ by virtue of Lemma 2.6.

2.4. Action of the linear propagator on the energy space. The action of the linear Schrödinger group on the space $X^k(\mathbb{R}^d) + H^k(\mathbb{R}^d)$ is well-defined, see [22, Lemma 3] and also [23].
While the results in [22, 23] are stated for (EGL, dEGL), +we state them (E, dE) which by Lemma 2.6 is equivalent. +Lemma 2.11 ([22]). Let d be a positive integer. For every k, for every t ∈ R, the +operator e +i +2 t∆ maps Xk(Rd) + Hk(Rd) into itself and it satisfies +(2.20) +∥e +i +2 t∆f∥Xk+Hk ≤ C (1 + t) +1 +2 ∥f∥Xk+Hk, +and +(2.21) +∥e +i +2 t∆f − f∥L2 ≤ C|t| +1 +2 ∥∇f∥L2. + +WELL-POSEDNESS FOR NLS WITH NON-VANISHING CONDITIONS AT INFINITY +17 +Moreover, if f ∈ Xk(Rd) + Hk(Rd), the map t ∈ R �→ e +i +2 t∆f ∈ Xk(Rd) + Hk(Rd) +is continuous. +For d = 1, we notice that Xk(R) + Hk(R) ⊂ Xk(R) for any k positive integer. +The action of e +i +2 t∆ on X1(R) has been studied in [61, 63], see also [20] for the +action of the linear propagator on Zhidkov spaces Xk(Rd) with d > 1. +The action of the linear Schr¨odinger group on the space E(Rd) is described by +[22, Proposition 2.3]. +Proposition 2.12 ([22]). Let d = 2, 3. For every t ∈ R, the linear propagator e +i +2 t∆ +maps E(Rd) to itself and for every ψ ∈ E(Rd) the map t ∈ R �→ e +i +2 t∆ψ0 ∈ E(Rd) +is continuous. Moreover, given R > 0, T > 0 there exists C > 0 such that for every +ψ1 +0, ψ2 +0 ∈ E(Rd) with E(ψ1 +0) ≤ R, E(ψ2 +0) ≤ R one has +(2.22) +sup +|t|≤T +dE(e +i +2 t∆ψ1 +0, e +i +2 t∆ψ2 +0) ≤ CdE(ψ1 +0, ψ2 +0). +Further, given R > 0, there exists T (R) > 0 such that, for every ψ0 ∈ E(Rd) with +E(ψ0) ≤ R, we have +(2.23) +sup +|t|≤T (R) +E(e +i +2 t∆ψ0) ≤ 2R. +Corollary 2.13. Let d = 2, 3 and ψ0 ∈ E(Rd), then +(2.24) +lim +t→0 +e +i +2 t∆ψ0 − ψ0 +t += − i +2∆ψ0 +in H−1(Rd). +In particular, e +i +2 t∆ψ0 ∈ C(R; E(Rd)) ∩ C1(R, H−1(Rd)). +Proof. Note that e +i +2 t∆ψ0 − ψ0 ∈ L2(Rd) for any finite time t ∈ R by virtue of +(2.21). 
For any $\varphi \in H^1(\mathbb{R}^d)$, it follows from Plancherel's identity and the dominated convergence theorem that

$\lim_{t\to 0}\int_{\mathbb{R}^d} \frac{e^{\frac i2 t\Delta}\psi_0-\psi_0}{t}\,\varphi(x)\,dx = \lim_{t\to 0}\int_{\mathbb{R}^d} \frac{e^{\frac i2 t|\xi|^2}\hat\psi_0-\hat\psi_0}{t}\,\hat\varphi(\xi)\,d\xi = \lim_{t\to 0}\int_{\mathbb{R}^d} \frac i2|\xi|^2\Big(\int_0^1 e^{\frac i2 ts|\xi|^2}\,ds\Big)\hat\psi_0(\xi)\hat\varphi(\xi)\,d\xi = \int_{\mathbb{R}^d}\big(-\tfrac i2\Delta\psi_0(x)\big)\varphi(x)\,dx.$

The identity (2.24) follows. □

2.5. Strichartz estimates. We say that a pair $(q, r)$ is (Schrödinger) admissible if $q, r \ge 2$ with

$\frac{2}{q} + \frac{d}{r} = \frac{d}{2}, \qquad (q, r, d) \neq (2, \infty, 2),$

and we recall the well-known Strichartz estimates, see [40] and references therein.

Lemma 2.14. Let d = 2, 3 and $(q, r)$ be an admissible pair. Then the linear propagator satisfies

$\|e^{\frac i2 t\Delta}u\|_{L^q([0,T];L^r(\mathbb{R}^d))} \le C\|u\|_{L^2(\mathbb{R}^d)},$

and for any admissible pair $(q_1, r_1)$ one has

(2.25) $\Big\|\int_0^t e^{\frac i2(t-s)\Delta}f(s)\,ds\Big\|_{L^q([0,T];L^r(\mathbb{R}^d))} \le C\|f\|_{L^{q_1'}([0,T];L^{r_1'}(\mathbb{R}^d))}.$

Given a time interval $I = [0, T]$, it is convenient to introduce the Strichartz space $S^0(I \times \mathbb{R}^d)$ characterised by the norm

$\|u\|_{S^0} := \sup_{(q,r)\ \mathrm{admissible}} \|u\|_{L^q(I;L^r(\mathbb{R}^d))}.$

We notice that since $(q, r) = (\infty, 2)$ is admissible, one has

(2.26) $\|u\|_{C(I;L^2(\mathbb{R}^d))} \lesssim \|u\|_{S^0}.$

Moreover, we introduce the dual space $N^0 = (S^0(I \times \mathbb{R}^d))^*$ satisfying the estimate

(2.27) $\|f\|_{N^0} \lesssim \|f\|_{L^{q_1'}(I;L^{r_1'}(\mathbb{R}^d))}$

for any admissible pair $(q_1, r_1)$. Further, in order to discuss the well-posedness theory for (1.1) in the energy space, we also work with the function spaces $S^1(I\times\mathbb{R}^d)$ and $N^1(I \times \mathbb{R}^d)$ defined by the norms

(2.28) $\|u\|_{S^1} = \|u\|_{S^0} + \|\nabla u\|_{S^0}, \qquad \|G\|_{N^1} = \|G\|_{N^0} + \|\nabla G\|_{N^0}.$

While $\psi$ itself does not belong to any Strichartz space $S^0$ for solutions to (1.1), it will turn out that the nonlinear part of the flow belongs to $S^1$.

Remark 2.15. Let $T > 0$ and $\psi_0 \in E(\mathbb{R}^d)$; then Lemma 2.14 states that for any admissible pair $(q, r)$ it holds

(2.29) $\big\|e^{\frac i2 t\Delta}\nabla\psi_0\big\|_{L^q([0,T];L^r(\mathbb{R}^d))} \le C\|\nabla\psi_0\|_{L^2(\mathbb{R}^d)}.$
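As a quick sanity check of the admissibility condition in d = 2 (our own arithmetic), note that $\tfrac2q + \tfrac2r = 1$ is equivalent to $\tfrac1q + \tfrac1r = \tfrac12$, so for instance:

```latex
(q,r)=(\infty,2):\ \tfrac{1}{\infty}+\tfrac12=\tfrac12;
\qquad
(q,r)=(4,4):\ \tfrac14+\tfrac14=\tfrac12;
\]
\[
(q_1,r_1)=\Big(\tfrac{2(\alpha+1)}{\alpha},\,2(\alpha+1)\Big):\
\tfrac{\alpha}{2(\alpha+1)}+\tfrac{1}{2(\alpha+1)}=\tfrac{\alpha+1}{2(\alpha+1)}=\tfrac12,
\]
% with dual (Hoelder-conjugate) exponents
\[
q_1' = \tfrac{2(\alpha+1)}{\alpha+2}, \qquad r_1' = \tfrac{2(\alpha+1)}{2\alpha+1}.
```

The last pair is the one used for the nonlinear estimates in Section 3, and its dual exponents match (3.1) there.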
In virtue of Lemma 2.11, one has $e^{\frac i2 t\Delta}\psi_0 - \psi_0 \in C([0, T]; H^1(\mathbb{R}^d))$ and $\nabla e^{\frac i2 t\Delta}\psi_0 \in C([0, T]; L^2(\mathbb{R}^d)) \cap S^0([0, T] \times \mathbb{R}^d)$.

2.6. The nonlinearity. We collect some properties of the nonlinearity $N(\psi) = f(|\psi|^2)\psi$, with f satisfying Assumption 1.1, that will be used in the sequel. By applying smooth cut-off functions, we separate the behavior close to and away from $|\psi| = 1$. Let $\eta \in C^\infty_c(\mathbb{R}_+)$ be given by (2.9); we define

(2.30) $N_1(\psi) := N(\psi)\eta(|\psi|), \qquad N_2(\psi) := N(\psi)(1 - \eta(|\psi|)).$

By means of the cut-off $\chi$ defined in (2.1), we further split $N_2$ as

(2.31) $N_{2,\infty}(\psi) = N_2(\psi)\chi(2\psi), \qquad N_{2,q}(\psi) = N_2(\psi)(1 - \chi(2\psi)),$

and notice that

(2.32) $|N_1(\psi)| \le C\,||\psi| - 1|, \qquad |N_{2,\infty}(\psi)| \le C(1 - \eta(|\psi|)), \qquad |N_{2,q}(\psi)| \le C|\psi|^{2\alpha+1}(1 - \chi(\psi)).$

In the case of vanishing boundary conditions at infinity, the strategy developed in [38], see also [13, Chapter 4], relies on similar pointwise bounds on N. However, here we need to consider the additional cut-off function $\eta$ isolating the behavior close to 1, in view of the far-field and the related support properties. Note that (2.8) yields that the measure of $\mathrm{supp}(N_2(\psi))$ is bounded by $E(\psi)$. The quantity $\nabla N$ can be rigorously defined by means of Nemytskii operators, see [38, Appendix A] and also [39, 13]. It reads

(2.33) $\nabla N(\psi) = \big(f(|\psi|^2) + f'(|\psi|^2)|\psi|^2\big)\nabla\psi + f'(|\psi|^2)\psi^2\nabla\bar\psi,$

so that we have

(2.34) $|\nabla N(\psi)| \lesssim \big(|f(\rho) + \rho f'(\rho)| + |\rho f'(\rho)|\big)|\nabla\psi|.$
This motivates the following estimates, +(2.36) +|N1(ψ1) − N1(ψ2)| ≤ C|ψ1| ||ψ1| − |ψ2|| + ||ψ2| − 1| η(|ψ2|)|ψ1 − ψ2|, +|N2,∞(ψ1) − N2,∞(ψ2)| ≤ C |ψ1 − ψ2| , +|N2,q(ψ1) − N2,q(ψ2)| ≤ C +� +|ψ1|2α + |ψ2|2α� +|ψ1 − ψ2| . +Inequalities (2.36) will then lead to respective bounds in Strichartz space N 0. +Similarly, we introduce the following estimates for ∇N(ψ). One has +∇N(ψ) = DN(ψ) · +�∇ψ +∇ψ +� += +�G1(ψ) +G2(ψ) +�T +· +�∇ψ +∇ψ +� +, +where +(2.37) +G1(ψ) = f(|ψ|2) + f ′(|ψ|2)|ψ|2, +G2(ψ) = f ′(|ψ|2)ψ2. +We define +Gi,∞(ψ) := Gi(ψ)χ(ψ), +Gi,q(ψ) := Gi(ψ)(1 − χ(ψ)), +for i = 1, 2. For the sake of a shorter notation we introduce +(2.38) +G∞ := G1,∞ + G2,∞, +Gq := G1,q + G2,q. +In particular we observe that Assumption 1.1 yields that +(2.39) +|G∞(ψ)| ≤ C, +|Gq(ψ)| ≤ C(1 + |ψq|2α)(1 − χ(ψ)). +3. 2D well-posedness +Local well-posedness for energy sub-critical nonlinearities is proven by a pertur- +bative method in the spirit of Kato [38] adapted to the non-trivial farfield behavior. +Subsequently, we prove global well-posedness in Section 3.2. +3.1. Local well-posedness. +First, we provide necessary a priori bounds on the +nonlinearity N(ψ) in the Strichartz norms for ψ ∈ E(R2) that will follow from (2.32) +and (2.34). We notice that (q1, r1) = ( 2(α+1) +α +, 2(α+ 1)) is Strichartz admissible and +one has +(3.1) +(q′ +1, r′ +1) = +�2(α + 1) +α + 2 , 2(α + 1) +2α + 1 +� +. +We recall that the space N 0 is defined in (2.27). It suffices to consider positive +times of existence as the analogue statements for negative times follow from the +time reversal symmetry of (1.1). For ψ ∈ L∞([0, T ]; E(Rd)) we denote +(3.2) +ZT := ∥∇ψ∥L∞([0,T ];L2(R2)) + ∥|ψ| − 1∥L∞([0,T ];L2(R2)) +and note that ZT (ψ) ≤ 2 supt∈[0,T ] +� +E(ψ)(t). The quantity ZT (ψ) can be thought +of as analogue of the L∞ +t H1 +x−norm for nonlinear Schr¨odinger equations with van- +ishing conditions at infinity. + +20 +P. ANTONELLI, L.E. HIENTZSCH, AND P. MARCATI +Lemma 3.1. 
Let the nonlinearity f be such that Assumption 1.1 is satisfied, T > 0, +the pair (q′ +1, r′ +1) as in (3.1) and ψ ∈ L∞([0, T ]; E(R2)), then the following hold +(3.3) +∥N(ψ)∥L1([0,T ];L2(R2)) ≤ CT +� +ZT (ψ) + ZT (ψ)1+2α� +, +and +(3.4) +∥∇N(ψ)∥N 0([0,T ]×R2) ≤ C +� +T + T +1 +q′ +1 ZT (ψ)2α +� +∥∇ψ∥L∞([0,T ];L2(R2). +Furthermore, given ψ ∈ L∞([0, T ]; E(R2)) and u, v ∈ L∞([0, T ]; H1(R2)), one has +that +(3.5) +∥N(ψ + u) − N(ψ + v)∥N 0([0,T ]×R2) +≤ C +� +T + T +1 +q′ +1 � +ZT (ψ + u)2α + ZT (ψ + v)2α� � +∥u − v∥L∞([0,T ];L2(R2). +Proof. Let ψ ∈ E(R2). To infer (3.3), we observe that (2.32) implies +∥N1(ψ)∥L1 +tL2 +x ≤ CT ∥|ψ| − 1∥L∞ +t L2x ≤ CT ZT(ψ). +To obtain the bound of N2(ψ), we note that the Chebychev inequality (2.8) yields +that supp(1 − η(ψ)) is of finite Lebesgue measure for all ψ ∈ E(R2). It follows then +from Lemma 2.1 and (2.32) that +∥N2,∞(ψ)∥L1 +tL2x ≤ CT L2 (supp(1 − η(|ψ|)) +1 +2 ≤ CT ZT(ψ). +By exploiting that supp(1 − η(ψ)) ⊂ supp(1 − χ(ψ)) for ψ ∈ E(R2) and by (2.8), +we bound the third contribution as +∥N2,q(ψ)∥L1 +tL2x ≤ C∥|ψ|2α|ψ|(1 − χ(ψ))∥L1 +tL2x ≤ CT ZT(ψ) + CT ∥ψq∥1+2α +L∞ +t L2(1+2α) +x +≤ CT +� +ZT (ψ) + ZT (ψ)1+2α� +, +where ψq is defined in (2.2), with χ given in (2.1). In the second last inequality, we +used that +(3.6) +|ψ|2α+1(1 − χ(ψ)) ≤ C +� +1{0<1−χ(ψ)≤1/4} + |ψq|2α+1� +, +and +L2 ({x ∈ supp(1 − χ(ψ)) : 0 < 1 − χ(ψ) ≤ 1/4}) ≤ ZT (ψ)2. +To control ∇N(ψ), we observe that by using (2.34) and decomposing ψ = ψ∞ +ψq, +see (2.2), it follows +∥∇N(ψ)∥ +L1 +tL2 +x+L +q′ +1 +t L +r′ +1 +x ≤ CT ∥∇ψ∥L∞ +t L2x + ∥|ψq|2α∇ψ∥ +L +q′ +1 +t L +r′ +1 +x +≤ C +� +T + T +1 +q′ +1 ZT (ψ)2α +� +∥∇ψ∥L∞ +t L2 +x. +It remains to show (3.5). Let ψ ∈ L∞([0, T ]; E(R2)) and u, v ∈ L∞([0, T ]; H1(R2)). +Then, (2.35) implies the pointwise bound +|N(ψ + u) − N(ψ + v)| ≤ C +� +1 + |ψ + u|2α + |ψ + v|2α� +|u − v|. +Exploiting that E(R2) + H1(R2) ⊂ E(R2) from Lemma 2.1, we proceed as before +to infer that for a.e. 
t ∈ [0, T ] it holds +∥|ψ + u|2α∥L∞ +x +Lq1 +x + ∥|ψ + v|2α∥L∞ +x +Lq1 +x ≤ C +� +1 + ZT (ψ + u)2α + ZT (ψ + v)2α� +. +It follows that + +WELL-POSEDNESS FOR NLS WITH NON-VANISHING CONDITIONS AT INFINITY +21 +∥N(ψ + u) − N(ψ + v)∥ +L1 +tL2x+L +q′ +1 +t L +r′ +1 +x +≤ C +� +T + T +1 +q1 +′ � +ZT(ψ + u)2α + ZT (ψ + v)2α�� +∥u − v∥L∞ +t L2x, +yielding (3.5). +□ +With the bounds of Lemma 3.1 and the Strichartz estimates of Lemma 2.14 at +hand, we are able to prove existence and uniqueness of solutions to (1.1). To that +end, we consider the equivalent Duhamel formula +(3.7) +ψ(t) = e +i +2 t∆ψ0 − i +� t +0 +e +i +2 (t−s)∆N(ψ)(s)ds +which is justified as identity in E(R3) in virtue of the properties of the free solutions +from Proposition 2.12 and the fact the non-homogeneous terms is bounded in L∞ +t H1 +x +by means of the Strichartz estimates (2.25) and Lemma 3.1. +We anticipate that the continuous dependence on the initial data differs sig- +nificantly from the classical approach as consequence of the low regularity of the +nonlinearity N combined with the lack of integrability of ψ. The constructed solu- +tions are such that ψ(t)−ψ0 ∈ H1(R2) for all t and hence (3.5) suffices to show local +existence. Note that in order to show the continuous dependence on the initial data +(3.5) is not sufficient as in general different initial data possesses different far-field +behavior, namely belongs to different connected components of E, see also Remark +2.4. Lemma 3.3 upgrades (3.5) to the respective inequality for general initial data. +The following Proposition is stated for positive existence times, the analogous +statement for negative times follows by exploiting the time reversal symmetry of +(1.1). +Proposition 3.2. Let d = 2 and f be such that Assumption 1.1 is satisfied. Then, +(1) for any ψ0 ∈ E(R2), there exists T = T (E(ψ0)) > 0 and a unique strong +solution ψ ∈ C([0, T ]; E(R2)) to (1.1) with ψ(0) = ψ0. 
In particular, ψ − ψ0 ∈ C([0, T ]; H1(R2));
(2) there exists a maximal existence time T∗ = T∗(ψ0) > 0 such that ψ ∈ C([0, T∗); E(R2)) and the blow-up alternative holds, namely if T∗ < ∞ then
lim_{t↗T∗} E(ψ)(t) = +∞;
(3) for any ψ^∗_0 ∈ E(R2) there exists an open neighborhood O ⊂ E(R2) of ψ^∗_0 such that
T∗(O) = inf_{ψ0∈O} T∗(ψ0) > 0,
and the map ψ0 ∈ O ↦ ψ ∈ C([0, T ]; E(R2)) is continuous for all 0 < T < T∗(O). Moreover, let Or = {ψ0 ∈ E(R2) : dE(ψ^∗_0, ψ0) < r}; then
lim inf_{r→0} T∗(Or) ≥ T∗(ψ^∗_0).
Point (1) of Proposition 3.2 is included in (2). Nevertheless, it is stated separately as it proves useful for the proof of the continuous dependence property in (3).
Proof. Local existence. We note that ψ ∈ C([0, T ]; E(R2)) is a strong solution to (1.1) with initial data ψ0 ∈ E(R2) iff
ψ(t) = e^{(i/2)t∆}ψ0 − i ∫_0^t e^{(i/2)(t−s)∆} N(ψ)(s) ds
for all t ∈ [0, T ]. To show existence of a solution ψ it suffices to implement a fixed-point argument for the solution map
(3.8) Φ(u)(t) = −i ∫_0^t e^{(i/2)(t−s)∆} N(e^{(i/2)s∆}ψ0 + u(s)) ds.
Indeed, ψ(t) = e^{(i/2)t∆}ψ0 + u(t) satisfies ψ ∈ C([0, T ]; E(R2)) if u ∈ XT and ψ0 ∈ E(R2). It follows from Proposition 2.12 that e^{(i/2)t∆}ψ0 ∈ C([0, T ]; E(R2)) and Lemma 2.1 yields that e^{(i/2)t∆}ψ0 + u ∈ C([0, T ]; E(R2)). If u is a fixed point of (3.8), then ψ = e^{(i/2)t∆}ψ0 + u is a local strong solution of (1.1).
Let ψ0 ∈ E(R2) and R > 0 be such that E(ψ0) ≤ R. Given M > 0 and T > 0, we consider the solution map (3.8) defined on the function space
XT = {u ∈ C([0, T ]; H1(R2)) : u(0) = 0, ∥u∥_{XT} ≤ M}.
For u, v ∈ XT, we introduce the distance function dX as
dX(u, v) = ∥u − v∥_{L∞([0,T ];L2(R2))}.
It is straightforward to verify that the space (XT, dX) is a complete metric space.
If E(ψ0) ≤ R and u ∈ XT, then thanks to the Minkowski inequality and (2.23) we obtain
(3.9) ZT(e^{(i/2)t∆}ψ0 + u) ≤ ZT(e^{(i/2)t∆}ψ0) + ∥u∥_{L∞([0,T ];H1(R2))} ≤ 2√(2R) + M,
provided that T > 0 is sufficiently small. Next, we show that Φ defined in (3.8) maps XT into XT. Let u ∈ XT and denote ψ = e^{(i/2)t∆}ψ0 + u; then by virtue of the Strichartz estimate (2.25), (3.3) and (3.9) we obtain
(3.10) ∥Φ(u)∥_{L∞([0,T ];L2(R2))} ≤ ∥N(ψ)∥_{L1([0,T ];L2(R2))} ≤ CT (ZT(ψ) + ZT(ψ)^{1+2α}) ≤ CT (1 + (2√(2R) + M)^{2α})(2√(2R) + M).
To bound ∇Φ(u), we apply the Strichartz estimates (2.25) concatenated with (3.4) to obtain
(3.11) ∥∇Φ(u)∥_{L∞([0,T ];L2(R2))} ≤ C∥∇N(ψ)∥_{N0([0,T ]×R2)} ≤ C(T + T^{1/q′_1} ZT(ψ)^{2α})∥∇ψ∥_{L∞_t L2_x} ≤ C(T + T^{1/q′_1}(2√(2R) + M)^{2α})(2√(2R) + M).
We conclude that Φ(u) ∈ C([0, T ]; H1(R2)), and summing up (3.10) and (3.11), we obtain that
∥Φ(u)∥_{XT} ≤ C(T + T^{1/q′_1}(2√(2R) + M)^{2α})(2√(2R) + M).
Next, we check that the map Φ defines a contraction on (XT, dX). Let u1, u2 ∈ XT and denote
ψ1 = e^{(i/2)t∆}ψ0 + u1, ψ2 = e^{(i/2)t∆}ψ0 + u2.
Upon applying (2.25) followed by (3.5) one has
dX(Φ(u1), Φ(u2)) = ∥−i ∫_0^t e^{(i/2)(t−s)∆} (N(ψ1) − N(ψ2))(s) ds∥_{L∞([0,T ];L2(R2))} ≤ C∥N(ψ1) − N(ψ2)∥_{N0([0,T ]×R2)} ≤ C(T + T^{1/q′_1}(2√(2R) + M)^{2α}) dX(u1, u2).
We fix M = √(2R) and notice that there exists 0 < T ≤ 1 sufficiently small such that
C(T + T^{1/q′_1}(3√(2R))^{2α}) ≤ 1/3.
Hence, Φ maps XT into XT and defines a contraction on XT. The Banach fixed-point Theorem yields a unique u ∈ XT such that e^{(i/2)t∆}ψ0 + u is a solution to (3.7). It follows from Lemma 2.1 and (2.23) that e^{(i/2)t∆}ψ0 + u ∈ C([0, T ]; E(R2)). In particular, ψ − ψ0 ∈ C([0, T ]; H1(R2)) from (2.21) and u ∈ XT.
Uniqueness. Let ψ1, ψ2 ∈ C([0, T ]; E(R2)) be two solutions to (1.1) with initial data ψ1(0) = ψ2(0) = ψ0 ∈ E(R2).
One has that
(3.12) ψ1(t) − ψ2(t) = −i ∫_0^t e^{(i/2)(t−s)∆} (N(ψ1) − N(ψ2))(s) ds.
In particular, as the nonlinear terms are bounded in L∞_t H1_x(R2), one has ψ1 − ψ2 ∈ L∞([0, T ]; H1(R2)). For (q′_1, r′_1) given by (3.1), the Strichartz estimate (2.25) together with (3.5) then yields
∥ψ1 − ψ2∥_{L∞_t L2_x} ≤ C∥N(ψ1) − N(ψ2)∥_{N0([0,T ]×R2)} ≤ C(T + T^{1/q′_1}(ZT(ψ1)^{2α} + ZT(ψ2)^{2α}))∥ψ1 − ψ2∥_{L∞_t L2_x}.
Hence, we deduce that there exists T1 > 0 such that ψ1 = ψ2 a.e. on [0, T1] × R2. As T1 only depends on ZT(ψ1) and ZT(ψ2), one may iterate the argument to obtain uniqueness of the solution on the interval [0, T ].
Blow-up alternative. Let ψ0 ∈ E(R2) and define
T∗(ψ0) = sup {T > 0 : there exists a solution to (1.1) on [0, T ]}.
Let T∗(ψ0) < +∞ and assume that there exist R > 0 and a sequence {tn}_{n∈N} such that tn → T∗(ψ0) and E(ψ(tn)) ≤ R for all n ∈ N. Then, there exists n sufficiently large such that the local existence statement allows us to uniquely extend the solution to [0, tn + T (R)] with tn + T (R) > T∗(ψ0). This violates the maximality assumption and we conclude that
E(ψ(tn)) → ∞ as tn → T∗(ψ0),
if T∗(ψ0) < +∞.
The proof of the continuous dependence of the solution on the initial data requires some auxiliary statements and is postponed until after Lemma 3.4. □
We introduce estimates on the nonlinear flow in Strichartz norms that are required for the proof of the continuous dependence on the initial data. The estimates used for the local existence and uniqueness in the proof of Proposition 3.2 are not sufficient, since they only allow one to control the difference of solutions ψ1, ψ2 provided that ψ1 − ψ2 ∈ L∞([0, T ]; L2(R2)). In addition, as the regularity properties of N do not suffice to control ∥∇Φ(ψ1) − ∇Φ(ψ2)∥_{L∞_t L2_x} for ψ1, ψ2 ∈ C([0, T ]; E(R2)), we need to rely on an auxiliary metric.
Lemma 3.3.
Let f satisfy Assumption 1.1, T > 0, (q′ +1, r′ +1) as defined in (3.1) and +ψ1, ψ2 ∈ C([0, T ]; E(R2)). Then, there exists θ ∈ (0, 1] such that +∥N(ψ1) − N(ψ2)∥N 0([0,T ]×R2) +≤ CT θ � +1 + ZT (ψ1) + ZT (ψ2) + ZT (ψ1)2α + ZT (ψ2)2α� +× +� +∥|ψ1| − |ψ2|∥L2([0,T ]];L2(R2)) + ∥ψ1 − ψ2∥L2([0,T ];L∞+L2(R2)) +� +. +Proof. First, we notice that it follows from the first inequality of (2.36) and the +decomposition provided by Lemma 2.1 that +∥N1(ψ1) − N1(ψ2)∥ +L1 +tL2 +x+L +4 +3 +t L +4 +3 +x ≤ C +� +T +1 +2 + T +1 +4 ZT (ψ1) +� +∥|ψ1| − |ψ2|∥L2 +tL2x ++ CT +1 +2 (1 + ZT (ψ2))∥ψ1 − ψ2∥L∞ +t (L∞ +x +L2x), +where we used that ||ψ2| − 1|η(|ψ2|) ∈ L∞([0, T ]; L∞(R2) ∩ L2(R2)). Indeed, let +Ω ⊂ R2 of finite Lebesgue measure and f ∈ L∞(Ω) + Lp(Ω), then +∥f∥Lp(Ω) ≤ C +� +1 + L2(Ω) +1 +p +� +∥f∥Lp(Ω)+L∞(Ω). +Second, we observe that L2(supp(N2(ψi))) ≤ E(ψi) for i = 1, 2 from (2.8). From +(2.36), we conclude +∥N2,∞(ψ1) − N2,∞(ψ2)∥L1 +tL2x ≤ CT (1 + ZT (ψ1) + ZT (ψ2)) ∥ψ1 − ψ2∥L∞ +t (L∞ +x +L2 +x) . +Third, arguing as in the proof of Lemma 3.1 and exploiting that L2(supp(N2(ψi))) ≤ +E(ψi) we obtain +∥N2,q(ψ1)−N2,q(ψ2)∥ +L1 +tL2x+L +q′ +1 +t L +r′ +1 +x ≤ +��1supp(1−χ(ψ1))∪supp(1−χ(ψ2))|ψ1 − ψ2| +�� +L1 +tL2 +x ++ +��� +|ψ1,q|2α + |ψ2,q|2α� +|ψ1 − ψ2| +�� +L +q′ +1 +t L +r′ +1 +x +≤ C(T +T +1 +q′ +1 ) +� +ZT (ψ1) + ZT (ψ2) + ZT (ψ1)2α + ZT (ψ2)2α� +∥ψ1−ψ2∥L∞ +t (L∞ +x +L2x). +□ +Concatenating the Strichartz estimates (2.25) and Lemma 3.3 gives the following. +Lemma 3.4. Given ψ1, ψ2 ∈ C([0, T ]; E(R2)) such that ZT (ψi) ≤ M for i = 1, 2, +there exist C = C(M) > 0 and θ ∈ (0, 1] such that +(3.13) +∥Φ(ψ1)−Φ(ψ2)∥S0([0,T ]×R2) ≤ CMT θ � +∥ψ1 − ψ2∥L∞ +t (L∞ +x +L2x) + ∥|ψ1| − |ψ2|∥L2 +tL2x +� +. +We are now in position to complete the proof of Proposition 3.2. Note that the +metric space (E, dE) is not separable, see also Remark 2.7. In particular, it is not +sufficient to show sequential continuity of the solution map. +Proof of Proposition 3.2 continued. 
We prove continuous dependence on the initial data. Given ψ^∗_0 ∈ E(R2), let R := E(ψ^∗_0) and r ∈ (0, √R]. Denote
(3.14) Or := {ψ0 ∈ E(R2) : dE(ψ^∗_0, ψ0) < r}.
It follows that E(ψ0) ≤ 4E(ψ^∗_0) for all ψ0 ∈ Or. The first statement of Proposition 3.2 then yields that there exists T = T (4E(ψ^∗_0)) > 0 such that for all ψ0 ∈ Or there exists a unique strong solution ψ ∈ C([0, T ]; E(R2)). In particular, for ψ0 ∈ Or the maximal time satisfies
T∗(ψ0) ≥ T (4E(ψ^∗_0)) > 0
by virtue of the blow-up alternative. Hence,
T∗(Or) = inf_{ψ0∈Or} T∗(ψ0) ≥ T (4E(ψ^∗_0)) > 0.
Given δ > 0 to be chosen later, let Oδ be defined as in (3.14). Let us remark again that, for any ψ0 ∈ Oδ, we have E(ψ0) ≤ 2(R + δ2). In particular, T∗(ψ0) ≥ T∗(Oδ) > 0.
Let ψ^1_0, ψ^2_0 ∈ Oδ and denote by ψ1, ψ2 the respective solutions, defined at least up to time T∗(Oδ). For any 0 < T < T∗(Oδ) there exists M = M(T ) > 0 such that
ZT(ψ1) + ZT(ψ2) ≤ M,
by virtue of the blow-up alternative. From (2.22), we have that there exists C = C(R, δ, T ) > 0 such that
(3.15) sup_{t∈[0,T ]} dE(e^{(i/2)t∆}ψ^1_0, e^{(i/2)t∆}ψ^2_0) ≤ C dE(ψ^1_0, ψ^2_0) ≤ 2Cδ.
To prove continuous dependence of the solution, we proceed in the following four steps, which are necessary in order to compensate for the lack of Lipschitz regularity of ∇N.
(1) There exist C > 0 and 0 < T1 < T∗(Oδ), only depending on M, such that
(3.16) ∥ψ1 − ψ2∥_{L∞([0,T1];L∞+L2(R2))} + ∥|ψ1| − |ψ2|∥_{L2([0,T1];L2(R2))} ≤ C dE(ψ^1_0, ψ^2_0).
(2) Provided (3.16) holds, for all ε > 0 there exist T2 = T2(M) > 0 and δ > 0 such that dE(ψ^1_0, ψ^2_0) < δ implies
(3.17) ∥∇ψ1 − ∇ψ2∥_{L∞([0,T2];L2(R2))} < ε.
(3) Provided (3.16) and (3.17) hold, for all ε > 0 there exists δ > 0 such that dE(ψ^1_0, ψ^2_0) < δ implies
(3.18) sup_{t∈[0,T2]} dE(ψ1(t), ψ2(t)) < ε.
+(4) The estimate (3.18) implies that for all 0 < T < T ∗(Oδ) and ε > 0, there +exists δ > 0 such that dE(ψ1 +0, ψ2 +0) < δ yields +(3.19) +sup +t∈[0,T ] +dE(ψ1(t), ψ2(t)) < ε, +Step 1 We show (3.16). Let us consider the first term on the left hand side of +(3.16), by using (3.15) and from Lemma 3.4, we know there exists θ > 0 such that +(3.20) +∥ψ1 − ψ2∥L∞([0,T ];L∞+L2(R2)) +≤ ∥e +i +2 t∆ψ1 +0 − e +i +2 t∆ψ2 +0∥L∞([0,T ];L∞+L2(R2)) + ∥Φ(ψ1) − Φ(ψ2)∥L∞([0,T ],L2(R2)) +≤ CdE(ψ1 +0, ψ2 +0) + CMT θ � +∥ψ1 − ψ2∥L∞([0,T ];(L∞+L2(R2)) + ∥|ψ1| − |ψ2|∥L2([0,T ];L2(R2) +� +. + +26 +P. ANTONELLI, L.E. HIENTZSCH, AND P. MARCATI +Given χ defined in (2.1), we define χ6(z) := χ(6z). Arguing as in the proof of +Lemma 2.6 we notice that +(3.21) +∥|ψ1| − |ψ2|∥L2([0,T ];L2(R2)) +≤ +��|ψ1|2 − |ψ2|2�� +L2([0,T ];L2(R2)) + ∥ψ1χ6(ψ1) − ψ2χ6(ψ2)∥L2([0,T ];L2(R2)) +To deal with the first contribution on the right-hand side, we notice that +��|ψ1|2 − |ψ2|2�� ≤ +���|e +i +2 t∆ψ1 +0|2 − |e +i +2 t∆ψ2 +0|2��� + +���2 Re +� +e− i +2 t∆ψ2 +0 (Φ(ψ2) − Φ(ψ1)) +���� ++ +���2 Re +� +e− i +2 t∆(ψ2 +0 − ψ1 +0)Φ(ψ1) +���� + (|Φ(ψ1)| + |Φ(ψ2)|) |Φ(ψ1) − Φ(ψ2)| . +We control these four terms separately. First, from (3.15), one has that +���|e +i +2 t∆ψ1 +0|2 − |e +i +2 t∆ψ2 +0|2��� +L2 +tL2x +≤ CT +1 +2 dE(ψ1 +0, ψ2 +0). +Second, upon splitting e +i +2 t∆ψi +0 ∈ E(R2) as in (2.2) we have +���2 Re +� +e− i +2 t∆ψ2 +0 (Φ(ψ2) − Φ(ψ1)) +���� +L2 +tL2x +≤ T +1 +2 ∥Φ(ψ2) − Φ(ψ1)∥L∞ +t L2x + T +1 +4 ZT (e +i +2 t∆ψ2 +0)∥Φ(ψ2) − Φ(ψ1)∥L4 +tL4x +≤ CM +� +T +1 +2 ∥Φ(ψ2) − Φ(ψ1)∥L∞ +t L2x + T +1 +4 ∥Φ(ψ1) − Φ(ψ2)∥L4 +tL4 +x +� +. 
+Third, proceeding similarly and exploiting (3.15) we have +���2 Re +� +e− i +2 t∆(ψ2 +0 − ψ1 +0)Φ(ψ1) +���� +L2 +tL2x +≤ C +� +T +1 +2 ∥Φ(ψ1)∥L∞ +t L2x + T +1 +4 ∥Φ(ψ2)∥L4 +tL4 +x +� +dE(e +i +2 t∆ψ1 +0, e +i +2 t∆ψ2 +0) +≤ C +� +T +1 +2 ∥Φ(ψ1)∥L∞ +t L2x + T +1 +4 ∥Φ(ψ1)∥L4 +tL4x +� +dE(ψ1 +0, ψ2 +0) +≤ C(T +1 +2 + T +1 +4 )(M + M 1+2α)dE(ψ1 +0, ψ2 +0), +where we used that Φ(ψ1) ∈ L∞([0, T ]; L2(R2)) ∩ L4([0, T ]; L4(R2)) from (2.25). +Fourth, one has +∥ (|Φ(ψ1)| + |Φ(ψ2)|) |Φ(ψ2) − Φ(ψ1)| ∥L2 +tL2x +≤ +� +∥Φ(ψ1)∥L4 +tL4x + ∥Φ(ψ2)∥L4 +tL4x +� +∥Φ(ψ1) − Φ(ψ2)∥L4 +tL4x +≤ CT +� +M + M 1+2α� +∥Φ(ψ1) − Φ(ψ2)∥L4 +tL4x, +where we used (3.3) in the last inequality. Combining the previous inequalities, we +infer that there exists θ1 > 0 +(3.22) +��|ψ1|2 − |ψ2|2�� +L2 +tL2x ≤ CT θ1 � +1 + M + M 1+2α� +× +� +dE(ψ1 +0, ψ2 +0) + ∥Φ(ψ1) − Φ(ψ2)∥L∞ +t L2x + ∥Φ(ψ1) − Φ(ψ2)∥L4 +tL4 +x +� +. +The second contribution in (3.21) is bounded as follows +(3.23) +∥ψ1χ6(ψ1) − ψ2χ6(ψ2)∥L2 +tL2x +≤ CT +1 +2 (1 + M) dE(e +i +2 t∆ψ1 +0, e +i +2 t∆ψ2 +0) + CT +1 +2 ∥Φ(ψ1) − Φ(ψ2)∥L∞ +t L2 +x +≤ CT +1 +2 (1 + M) +� +dE(ψ1 +0, ψ2 +0) + ∥Φ(ψ1) − Φ(ψ2)∥L∞ +t L2x +� +, + +WELL-POSEDNESS FOR NLS WITH NON-VANISHING CONDITIONS AT INFINITY +27 +where we exploited that for ψ ∈ E(R2) the measure of the support of χ6(ψ) is +bounded by E(ψ), see (2.8). It follows from (3.21), (3.22) and (3.23) that there +exists θ2 > 0 such that +(3.24) +∥|ψ1| − |ψ2|∥L2([0,T ];L2(R2)) ≤ CT θ2 � +1 + M + M 1+2α� +× +� +dE(ψ1 +0, ψ2 +0) + ∥Φ(ψ1) − Φ(ψ2)∥L∞L2 + ∥Φ(ψ1) − Φ(ψ2)∥L4 +tL4 +x +� +. +Summing up (3.20) and (3.24) and applying (3.13) yields that there exists θ > 0 +such that +∥ψ1 − ψ2∥L∞ +t (L∞ +x +L2 +x) + ∥|ψ1| − |ψ2|∥L2 +tL2 +x ≤ CMT θ +× +� +dE(ψ1 +0, ψ2 +0) + CMT θ � +∥ψ1 − ψ2∥L∞ +t (L∞ +x +L2x) + ∥|ψ1| − |ψ2|∥L2 +tL2x +�� +. +For T1 > 0 sufficiently small, only depending on M, inequality (3.16) follows. +Step 2. 
Note that +∇ψ1 − ∇ψ2 = e +i +2 t∆ � +∇ψ1 +0 − ∇ψ2 +0 +� +− i +� t +0 +e +i +2 (t−s)∆ (∇N(ψ1) − ∇N(ψ2)) (s)ds. +We estimate the difference of the free solutions by +(3.25) +���e +i +2 t∆ � +∇ψ1 +0 − ∇ψ2 +0 +���� +L∞([0,T ],L2(R2)) ≤ dE(ψ1 +0, ψ2 +0), +exploiting that e +i +2 t∆ is an isometry on L2(R2). We recall from (2.33) that +∇N(ψ) = +� +f(|ψ|2) + f ′(|ψ|2)|ψ|2� +∇ψ + f ′(|ψ|2)ψ2∇ψ, +which can be bounded by means of (2.34) as +|∇N(ψ)| ≤ C(1 + |ψ|2α)|∇ψ| ≤ C(1 + |ψq|2α)|∇ψ|. +We apply estimate (2.25) to the non-homogeneous term, where (q1, r1)) = ( 2(α+1) +α +, 2(α+ +1)), see also (3.1). We decompose ∇N(ψ1) − ∇N(ψ2) by means of the functions +G∞, Gq defined in (2.38) leading to +(3.26) +����i +� t +0 +e +i +2 (t−s)∆ (∇N(ψ1) − ∇N(ψ2)) (s)ds +���� +L∞([0,T ];L2(R2)) +≤ ∥(G∞ + Gq)(ψ2) |∇ψ1 − ∇ψ2|∥N 0 ++ ∥((G∞ + Gq)(ψ1) − (G∞ + Gq)(ψ2)) |∇ψ1|∥N 0([0,T ]×R2)) +≤ ∥∇ψ1 − ∇ψ2∥L1 +tL2x + ∥|ψ2,q|2α |∇ψ1 − ∇ψ2| ∥ +L +q′ +1 +t L +r′ +1 +x ++ ∥(G∞(ψ1) − G∞(ψ2)) |∇ψ1|∥L1 +tL2 +x + ∥(Gq(ψ1) − Gq(ψ2)) |∇ψ1|∥ +L +q′ +1 +t L +r′ +1 +x +≤ C +� +T + T +1 +q′ +1 ZT (ψ1)2α) +� +∥∇ψ1 − ∇ψ2∥L∞ +t L2x ++ ∥(G∞(ψ1) − G∞(ψ2)) |∇ψ1|∥L1 +tL2 +x + ∥(Gq(ψ1) − Gq(ψ2)) |∇ψ1|∥ +L +q′ +1 +t L +r′ +1 +x +Thus, for T2 = T2(M) > 0 sufficiently small so that +C +� +T2 + T +1 +q′ +2 ZT (ψ2)2α +� +≤ C +� +T2 + T +1 +q′ +2 M 2α +� +≤ 1 +2, +we conclude by combining (3.25) and (3.26) that + +28 +P. ANTONELLI, L.E. HIENTZSCH, AND P. MARCATI +∥∇ψ1 − ∇ψ2∥L∞([0,T2],L2(R2)) ≤ dE(ψ1 +0, ψ2 +0) ++ ∥(G∞(ψ1) − G∞(ψ2)) |∇ψ1|∥L1 +tL2x + ∥(Gq(ψ1) − Gq(ψ2)) |∇ψ1|∥ +L +q′ +1 +t L +r′ +1 +x . +In order to conclude Step 2, we need to show that the second line above can be +made arbitrarily small by choosing a sufficiently small δ > 0. 
+We proceed by +contradiction, assuming that there exist ε > 0, a sequence {δn}n∈N and {ψn +0 }n∈N ⊂ +E(R2) such that dE(ψ1 +0, ψn +0 ) < δn → 0 and for all n sufficiently large, +(3.27) ∥(G∞(ψ1) − G∞(ψn)) |∇ψ1|∥L1 +tL2x+∥(Gq(ψ1) − Gq(ψn)) |∇ψ1|∥ +L +q′ +1 +t L +r′ +1 +x ≥ ε, +where ψn ∈ C([0, T ]; E(R2)) denotes the unique maximal solution with ψn(0) = ψn +0 . +Inequality (3.16) implies that, up to extracting a subsequence, not relabeled, ψn +converges to ψ1 a.e. on [0, T1] × R2. If 0 < T1 < T2, then set T2 := T1. By virtue +of Assumption 1.1 on f, it follows that G∞, Gq are continuous and thus +|(G∞(ψ1) − G∞(ψn))| |∇ψ1| → 0 +a.e. in +[0, T2] × R2, +|Gq(ψ1) − Gq(ψn)| |∇ψ1| → 0 +a.e. in +[0, T2] × R2. +Since in addition one has +∥Gq(ψn)∥L∞ +t Lq1 +x (R2) ≤ C +��(1 + |ψq,n|2α)(1 − χ(ψn) +�� +L∞ +t Lq1 +x (R2) +≤ C +� +ZT (ψn) + ZT (ψn)2α� +≤ C +� +M + M 2α� +for all n ∈ N, we obtain from (3.16) that there exists φ ∈ L∞([0, T ]; Lr1(R2)) such +that |ψq,n| ≤ φ a.e. on [0, T2) × R2. Therefore, +|(G∞(ψ1) − G∞(ψn))| |∇ψ1| ≤ C|∇ψ1| ∈ L1([0, T ); L2(R2)), +|(Gq(ψ1) − Gq(ψn))| |∇ψ1| ≤ C +� +|ψ1|2α + |φ|2α� +|∇ψ1| ∈ Lq′ +1([0, T ); Lr′ +1(R2)), +so that the dominated convergence Theorem then implies that (3.27) is violated. +The inequality (3.17) follows for the time interval [0, T2] where we stress that T2 > 0 +only depends on M. +Step 3. Given that (3.16) and (3.17) are satisfied, it suffices to prove that, for any +ε > 0, there exists δ > 0 such that dE(ψ1 +0, ψ2 +0) < δ implies +∥|ψ1| − |ψ2|∥L∞([0,T2];L2(R2)) < ε. +Note that (3.16) only yields +∥|ψ1| − |ψ2|∥L2([0,T2];L2(R2)) < Cδ. +We recall that ψi(t) = e +i +2 t∆ψi +0 + Φ(ψi), where e +i +2 t∆ψi +0 ∈ C([0, T ]; E(R2)) and +Φ(ψi) ∈ C([0, T ]; H1(R2)) for i = 1, 2. More precisely, ZT (e +i +2 t∆ψi +0) ≤ 2 +√ +2 +� +E(ψ0 +i ). 
It follows from (2.14) that
∥|ψ1| − |ψ2|∥_{L∞_t L2_x} ≤ C(1 + √(E(ψ^1_0)) + √(E(ψ^2_0)) + ∥Φ(ψ1)∥_{L∞_t H1_x} + ∥Φ(ψ2)∥_{L∞_t H1_x}) × (dE(e^{(i/2)t∆}ψ^1_0, e^{(i/2)t∆}ψ^2_0) + ∥Φ(ψ1) − Φ(ψ2)∥_{L∞_t H1_x}) ≤ C(1 + 2√R + δ + 2M + 2M^{1+2α})(dE(ψ^1_0, ψ^2_0) + ∥Φ(ψ1) − Φ(ψ2)∥_{L∞_t H1_x}),
where we used (2.22) in the last inequality. We are left to show that for all ε > 0 there exists δ > 0 such that dE(ψ^∗_0, ψ0) < δ yields
∥Φ(ψ1) − Φ(ψ2)∥_{L∞_t H1_x} < ε.
The statement follows by combining (3.13) and (3.16), and observing that
∥∇Φ(ψ1) − ∇Φ(ψ2)∥_{L∞_t L2_x} ≤ ∥∇ψ1 − ∇ψ2∥_{L∞_t L2_x} + sup_{t∈[0,T2]} dE(e^{(i/2)t∆}ψ^1_0, e^{(i/2)t∆}ψ^2_0),
followed by (3.17) and (3.15). This completes Step 3.
Step 4. Note that Step 3 yields continuous dependence on the initial data w.r.t. the topology of E induced by the metric dE on a time interval [0, T2], where T2 only depends on M. One may hence cover [0, T ] by the union of intervals [tk, tk+1] with tk = kT2 for k ∈ {0, ..., N − 1} and N = ⌈T/T2⌉ finite. For all ε > 0, there exists δN > 0 such that dE(ψ1(tN−1), ψ2(tN−1)) < δN yields sup_{t∈[tN−1,T ]} dE(ψ1(t), ψ2(t)) < ε. Next, there exists δN−1 > 0 such that dE(ψ1(tN−2), ψ2(tN−2)) < δN−1 yields sup_{t∈[tN−2,tN−1]} dE(ψ1(t), ψ2(t)) < δN. One may then iterate the scheme finitely many times in order to recover δ = δ1 > 0 such that dE(ψ^1_0, ψ^2_0) < δ implies sup_{t∈[0,T ]} dE(ψ1(t), ψ2(t)) < ε.
It remains to show that for Or = {ψ0 ∈ E(R2) : dE(ψ^∗_0, ψ0) < r} it holds
lim inf_{r→0} T∗(Or) ≥ T∗(ψ^∗_0).
This property is an immediate consequence of Step 4. The proof of Proposition 3.2 is complete. □
We proceed to show a persistence of regularity property for (1.1) under the general Assumption 1.1. Subsequently, we prove the conservation of the Hamiltonian energy H.
Lemma 3.5. Let f be as in Assumption 1.1 and ψ0 ∈ E(R2) such that ∆ψ0 ∈ L2(R2).
Then, the unique maximal solution ψ ∈ C([0, T ∗); E(R2)) to (1.1) satisfies +∆ψ ∈ C([0, T ∗); L2(R2)), +∂tψ ∈ C([0, T ∗); L2(R2)). +Furthermore, the Hamiltonian is conserved, namely +H(ψ(t)) = H(ψ0), +for all t ∈ [0, T ∗). +Proof. Let ψ0 ∈ E(R2) such that ∆ψ0 ∈ L2(R2). Proposition 3.2 provides a T ∗ > 0 +such that there exists a unique maximal strong solution ψ ∈ C([0, T ∗); E(R2)) to +(1.1) with initial data ψ(0) = ψ0. +The blow-up alternative yields that for any +T ∈ [0, T ∗) there exists M > 0 such that ZT ≤ M, defined in (3.2). +First, we show that there exists T1 ∈ (0, T ] only depending on ZT (ψ) such that +∂tψ ∈ C([0, T1]; L2(R2)). Exploiting that ψ ∈ C([0, T ]; E(R2)) we obtain +i∂tψ(0) = −1 +2∆ψ0 + N(ψ0). +We claim that ∂tψ(0) ∈ L2(R2). We note that ∆ψ0 ∈ L2(R2) by assumption yields +ψ0 ∈ X2 + H2(R2) ⊂ X2(R2) ⊂ L∞(R2). It follows from (3.3) that +∥N(ψ0)∥L2(R2) ≤ C +�� +E(ψ0) + E(ψ0) +1 +2 +α� +. +By differentiating the Duhamel formula (3.7) in time and applying Corollary 2.13 +one has +∂tψ(t) = e +i +2 t∆ +� i +2∆ψ(0) − iN(ψ)(0) +� +− i +� t +0 +e +i +2 s∆∂tN(ψ)(t − s)ds + +30 +P. ANTONELLI, L.E. HIENTZSCH, AND P. MARCATI += e +i +2 t∆(∂tψ(0)) + +� t +0 +e +i +2 (t−s)∆ � +G1(ψ)∂tψ + G2(ψ)∂tψ +� +(s)ds, +where G1, G2 are as defined in (2.37). Hence, +∥∂tψ∥L∞([0,T ];L2(R2)) ≤ ∥∂tψ(0)∥L2(R2)+∥G1(ψ)∂tψ+G2(ψ)∂tψ(∂tψ)∥N 0([0,T ]×R2). +Upon exploiting the estimates (2.39) on G1, G2 and following the lines of the proof +of Lemma 3.1, we conclude that +��G1(ψ)∂tψ + G2(ψ)∂tψ +�� +N 0([0,T ]×R2) ≤ C∥G∞(ψ)|∂tψ|∥L1 +tL2x + ∥Gq(ψ)|∂tψ|∥N 0 +≤ C∥∂tψ∥L1 +tL2x+∥ +� +1 + |ψ|2α� +|∂tψ|∥N 0 ≤ CT ∥∂tψ∥L∞ +t L2x+T +1 +q′ +1 ZT (ψ)2α∥∂tψ∥L∞ +t L2x. +Thus, there exists 0 < T1 < T only depending on ZT (ψ) such that +� +T1 + T +1 +q′ +1 +� � +1 + ZT (ψ)2α� +< 1 +2, +and +∥∂tψ∥L∞([0,T1];L2(R2)) ≤ 2∥∂tψ(0)∥L2(R2). +Second, we deduce a space-time bound for ∆ψ. 
More precisely,
∥∆ψ∥_{L∞([0,T1];L2(R2))} ≤ ∥∂tψ∥_{L∞([0,T1];L2(R2))} + ∥N(ψ)∥_{L∞([0,T1];L2(R2))} ≤ ∥∂tψ∥_{L∞([0,T1];L2(R2))} + (T1 + T1^{1/q′_1})(ZT(ψ) + ZT(ψ)^{2α}),
by virtue of (3.3). As ∂tψ ∈ C([0, T1]; L2(R2)), it then follows that ∆ψ ∈ C([0, T1]; L2(R2)).
Third, we show that H(ψ(t)) = H(ψ0) for all t ∈ [0, T1]. To that end, we compute the L2-scalar product of (1.1) with ∂tψ and take the real part to infer
0 = Re ⟨i∂tψ, ∂tψ⟩ = Re ⟨−(1/2)∆ψ + N(ψ), ∂tψ⟩,
for any t ∈ [0, T1]. We notice that all terms are well-defined and conclude that for all t ∈ [0, T1] the Hamiltonian energy is conserved, namely
0 = d/dt ∫_{R^d} (1/2)|∇ψ|^2 + F(|ψ|^2) dx.
As T1 > 0 only depends on ZT(ψ), the procedure above may be implemented starting from any t0 ∈ [0, T − T1], covering the time interval [0, T ] by finitely many sub-intervals. It follows that H(ψ) is constant in time on each of them. Since ψ ∈ C([0, T ]; E(R2)), by continuity one concludes that H(ψ)(t) = H(ψ0) for all t ∈ [0, T ]. □
The results of this Section then yield the proof of Theorem 1.4 for d = 2.
Proof of Theorem 1.4 in 2D. For d = 2, the first three statements follow from Proposition 3.2, while the fourth and fifth are provided by Lemma 3.5. □
3.2. Global well-posedness. Assuming the nonlinear potential in (1.3) to be non-negative, we show that the Cauchy problem associated to (1.1) is globally well-posed in the space E(R2), which completes the proof of Theorem 1.5 for d = 2.
First, we show that the regular solutions provided by Lemma 3.5 are global.
Corollary 3.6. Under the same assumptions of Lemma 3.5, let in addition the nonlinear potential F, defined in (1.3), be non-negative, namely F ≥ 0. Then, the solution constructed in Lemma 3.5 is global, i.e. T∗ = +∞.
Proof. Let ψ ∈ C([0, T∗); E(R2)) denote the unique maximal solution to (1.1) with initial data ψ(0) = ψ0 ∈ E(R2).
Since H(ψ)(t) = H(ψ0) for all t ∈ [0, T∗), it follows from Lemma 2.8 that there exists an increasing function g : (0, ∞) → (0, ∞) with lim_{r→0} g(r) = 0 such that
(3.28) E(ψ)(t) ≤ g(H(ψ)(t)) = g(H(ψ)(0)) = g(H(ψ0)) < +∞
for all t ∈ [0, T∗). The blow-up alternative then yields that T∗ = +∞. In addition, ψ enjoys the bounds ∂tψ ∈ C([0, T ]; L2(R2)) and ∆ψ ∈ C([0, T ]; L2(R2)) for any T > 0, as well as H(ψ(t)) = H(ψ0) for all t ∈ [0, ∞). □
Second, we prove Theorem 1.5 for d = 2. More precisely, by exploiting continuous dependence on the initial data, we show that the Hamiltonian energy is conserved for solutions in the energy space and deduce global existence.
Proof of Theorem 1.5. Note that to complete the proof of the theorem it suffices to prove that the Hamiltonian energy is conserved for all solutions ψ ∈ C([0, T∗); E(R2)). Global existence then follows by arguing as in the proof of Corollary 3.6. To that end, given initial data ψ0 ∈ E(R2) and the unique solution ψ ∈ C([0, T∗); E(R2)) to (1.1) such that ψ(0) = ψ0, we observe that thanks to Lemma 2.10 there exists {ψ^n_0} ⊂ E(R2) ∩ C∞(R2) such that ∆ψ^n_0 ∈ L2(R2) and dE(ψ0, ψ^n_0) converges to 0 as n goes to infinity. Lemma 3.5 and Corollary 3.6 provide a sequence of unique global solutions ψn ∈ C(R; E(R2)) such that H(ψn)(t) = H(ψ^n_0) for all n. Relying on the continuous dependence on the initial data, we conclude that for any 0 < T < T∗ one has
sup_{t∈[0,T ]} dE(ψ(t), ψn(t)) → 0 as n → ∞.
Hence, E(ψn)(t) → E(ψ(t)) for all t ∈ [0, T ]. Similarly, conservation of the Hamiltonian energy H(ψ) follows from H(ψn)(t) → H(ψ)(t) for all t ∈ [0, T ]. In particular, Lemma 2.8 yields an increasing function g : (0, ∞) → (0, ∞) with lim_{r→0} g(r) = 0 such that
E(ψ)(t) ≤ 2E(ψn)(t) ≤ 2g(H(ψn)(t)) = 2g(H(ψ^n_0)) ≤ C,
for all t ∈ [0, T ] and n sufficiently large. By means of the blow-up alternative we conclude that the solution is global, namely ψ ∈ C(R; E(R2)). □
4.
3D well-posedness +The approach to prove well-posedness for d = 3 differs from the one for d = 2 +in two aspects. First, we need to exploit that the nonlinear flow belongs to the +full range of Strichartz spaces S1([0, T ] × R3)), defined in (2.28). In particular, +exploiting also (2.29) we use that ∇ψ ∈ Lq([0, T ]; Lr(R3)) for some r > 2. For + +32 +P. ANTONELLI, L.E. HIENTZSCH, AND P. MARCATI +d = 3, it is not sufficient to work in L2-based function spaces - at least for super- +cubic nonlinearities. Second, Proposition 2.2 yields an affine structure for the energy +space E(R3). This allows for several simplifications of the well-posedness proofs +compared to Proposition 3.2. In this section, let +(4.1) +(q, r) = +�4(α + 1) +3α +, 2(α + 1) +� +and note that (q, r) is Schr¨odinger admissible. We recall that the Strichartz spaces +N 0 and N 1 are defined in (2.27) and (2.28) respectively and the quantity ZT (ψ) in +(3.2). +Proposition 4.1. Let d = 3 and f be such that Assumption 1.1 is satisfied. Then, +(1) for any ψ0 ∈ E(R3) there exists a maximal existence time T ∗ = T ∗(ψ0) > 0 +and a unique maximal solution ψ ∈ C([0, T ∗); E(R3)) of (1.1). The blow-up +alternative holds, namely if T ∗ < ∞ then +lim +tրT ∗ E(ψ)(t) = +∞. +(2) for any 0 < T < T ∗(ψ0) it holds +ψ − ψ0 ∈ C([0, T ]; H1(R3)), +∇ψ ∈ S0([0, T ] × R3)). +Moreover, the nonlinear flow satisfies +ψ(t) − e +i +2 t∆ψ0 ∈ C([0, T ]; H1(R3)) ∩ S1([0, T ] × R3). +(3) the solution depends continuously on the initial data, namely if {ψn +0 }n∈N ⊂ +E(R3) is such that dE(ψn +0 , ψ0) → 0 then for any 0 < T < T ∗(ψ0) it holds +that supt∈[0,T ∗) dE(ψn(t), ψ(t)) → 0, where ψn denotes the unique local so- +lution such that ψn(0) = ψn +0 . +The affine structure of the energy space, see Proposition 2.2 allows one to reduce +the wellposedness of Cauchy Problem for (1.1) to the wellposedness of an affine +problem in Fc(R3), see Lemma 4.2 and Remark 4.3 below. 
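As a quick cross-check of the exponent bookkeeping (not part of the paper), the following stdlib-Python snippet verifies, for a few sample values of α, that the pair in (4.1) satisfies the standard 3D Schrödinger admissibility condition 2/q + 3/r = 3/2, that the 2D pair in (3.1) satisfies 2/q + 2/r = 1, and that the dual exponents of (3.1) match the closed form stated there; the function names are ours.

```python
from fractions import Fraction

def admissible(q, r, d):
    """Schrödinger admissibility in dimension d: 2/q + d/r = d/2, with q, r >= 2."""
    return 2 / Fraction(q) + d / Fraction(r) == Fraction(d, 2) and q >= 2 and r >= 2

def conjugate(p):
    """Hölder conjugate p' with 1/p + 1/p' = 1."""
    return 1 / (1 - 1 / Fraction(p))

# Sample values of α; for (4.1) we stay in the range α <= 2, where q >= 2 holds.
for alpha in map(Fraction, ("1/2", "1", "3/2", "2")):
    q1, r1 = 2 * (alpha + 1) / alpha, 2 * (alpha + 1)      # the pair (3.1), d = 2
    q, r = 4 * (alpha + 1) / (3 * alpha), 2 * (alpha + 1)  # the pair (4.1), d = 3
    assert admissible(q1, r1, d=2)
    assert admissible(q, r, d=3)
    # Dual pair of (3.1) as stated there: (2(α+1)/(α+2), 2(α+1)/(2α+1)).
    assert conjugate(q1) == 2 * (alpha + 1) / (alpha + 2)
    assert conjugate(r1) == 2 * (alpha + 1) / (2 * alpha + 1)
```

For α > 2 the exponent q in (4.1) drops below 2, consistent with the restriction on α in dimension three.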
However, we only exploit this property for the proof of the continuous dependence on the initial data. Note that due to the affine structure it suffices to show sequential continuity.
Proof. To show existence of a local strong solution ψ, it suffices to implement a fixed-point argument for the map
(4.2) Φ(u)(t) = −i ∫_0^t e^{(i/2)(t−s)∆} N(e^{(i/2)s∆}ψ0 + u(s)) ds.
Indeed, if u ∈ C([0, T ]; H1(R3)) is a fixed point of (4.2), then ψ(t) = e^{(i/2)t∆}ψ0 + u(t) is such that ψ ∈ C([0, T ]; E(R3)) due to Lemma 2.1, and ψ is a local strong solution of (1.1).
Local existence. Fix (q, r) as in (4.1). We implement a fixed-point argument for (4.2) in
XT = {u ∈ C([0, T ]; H1(R3)) ∩ Lq([0, T ]; W1,r(R3)) : u(0) = 0, ∥u∥_{XT} ≤ M}
with
∥·∥_{XT} = ∥·∥_{L∞([0,T ];H1(R3))} + ∥·∥_{Lq([0,T ];W1,r(R3))}.
Equipped with the distance function
dX(u, v) = ∥u − v∥_{L∞([0,T ];L2(R3))} + ∥u − v∥_{Lq([0,T ];Lr(R3))},
the space (XT, dX) is a complete metric space. Let ψ0 ∈ E(R3) with E(ψ0) ≤ R, where M > 0 and 0 < T ≤ 1 are to be fixed later. First, we verify that Φ : XT → XT. To that end, we recall that for T = T (R) > 0 sufficiently small
ZT(e^{(i/2)t∆}ψ0 + u) ≤ ZT(e^{(i/2)t∆}ψ0) + ∥u∥_{L∞([0,T ];H1(R3))} ≤ 2√(2E(ψ0)) + M ≤ 2√(2R) + M,
where ZT is defined in (3.2), and (2.3) and (2.23) have been applied in the first and second inequality respectively. It follows from (2.25) that
∥Φ(u)(t)∥_{L∞_t L2_x} + ∥Φ(u)(t)∥_{Lq_t Lr_x} ≤ 2∥N(e^{(i/2)t∆}ψ0 + u)∥_{N0}.
Defining N1, N2 as in (2.30) and exploiting the pointwise bounds (2.32), we infer
∥N1(e^{(i/2)t∆}ψ0 + u)∥_{L1_t L2_x} ≤ CT ZT(e^{(i/2)t∆}ψ0 + u) ≤ CT (2√(2R) + M).
+Next, using again (2.32) and the Chebychev inequality (2.8) one has +���N2,∞(e +i +2 t∆ψ0 + u) +��� +L1 +tL2x +≤ CT L3 � +supp(1 − η(e +i +2 t∆ψ0 + u)) +� 1 +2 ≤ CT +� +2 +√ +2R + M +� +and +���N2,q(e +i +2 t∆ψ0 + u) +��� +L1 +tL2x+Lq′ +t Lr′ +x +≤ +���(1 + |e +i +2 t∆ψ0 + u|2α)|e +i +2 t∆ψ0 + u|(1 − χ(e +i +2 t∆ψ0 + u)) +��� +L1 +tL2x+Lq′ +t Lr′ +x +≤ CT (2 +√ +2R + M) + +���� +��� +� +e +i +2 t∆ψ0 + u +� +(1 − χ(e +i +2 t∆ψ0 + u)) +��� +2α+1���� +Lq′ +t Lr′ +x +≤ CT (2 +√ +2R + M) + CT +q−q′ +qq′ ���(e +i +2 t∆ψ0 + u)q +��� +2α +L∞Lr +���(e +i +2 t∆ψ0 + u)q +��� +Lq +t Lrx +≤ C +� +T + T +q−q′ +qq′ � +2 +√ +2R + M +�2α� � +2 +√ +2R + M +� +. +Moreover, Assumption 1.1, see also (2.33), imply the bound +|∇N(ψ)| ≤ C(1 + |ψ|2α)|∇ψ|, +which allows one to infer that +���∇N1(e +i +2 t∆ψ0 + u) + ∇N2,∞(e +i +2 t∆ψ0 + u) +��� +L1 +tL2x +≤ CT +� +∥∇ψ0∥L∞ +t L2 +x + ∥∇u∥L∞ +t L2 +x +� +≤ CT +� +2 +√ +2R + M +� +. +To control ∇N2,q, note that e +i +2 t∆∇ψ0 ∈ Lq([0, T ]; Lr(R3)) for any admissible pair +(q, r) from Lemma 2.14 and E(ψ0) ≤ R. Therefore, +∥∇N2,q(e +i +2 t∆ψ0 + u)∥L1 +tL2x+Lq′ +t Lr′ +x ≤ CT (∥∇ψ0∥L2 + ∥u∥XT ) ++ C +����|(e +i +2 t∆ψ0 + u)q|2α∇e +i +2 t∆ψ0 +��� +Lq′ +t Lr′ +x ++ +���|(e +i +2 t∆ψ0 + u)q|2α∇u +��� +Lq′ +t Lr′ +x +� +≤ CT (2 +√ +2R + M) + CT +q−q′ +qq′ (2 +√ +2R + M)2α � +∥∇ψ0∥L2x + ∥∇u∥Lq +tLrx +� +≤ C +� +T + T +q−q′ +qq′ (2 +√ +2R + M)2α +� � +2 +√ +2R + M +� +. + +34 +P. ANTONELLI, L.E. HIENTZSCH, AND P. MARCATI +Finally, +∥Φ(u)∥XT ≤ C +� +T + T +q−q′ +qq′ (2 +√ +2R + M)2α +� � +2 +√ +2R + M +� +. +We proceed to show that Φ defines a contraction on XT . Let ψ0 ∈ E(R3) such that +E(ψ0) ≤ R and u, v ∈ XT . Then, +dX (Φ(u), Φ(v)) ≤ +���N +� +e +i +2 t∆ψ0 + u +� +− N +� +e +i +2 t∆ψ0 + v +���� +N 0 +Inequality (2.35) implies that +���N1 +� +e +i +2 t∆ψ0 + u +� +− N1 +� +e +i +2 t∆ψ0 + v +���� +L1 +tL2 +x +≤ CT dX(u, v). +and +���N2,∞ +� +e +i +2 t∆ψ0 + u +� +− N2,∞ +� +e +i +2 t∆ψ0 + v +���� +L1 +tL2x +≤ CT dX(u, v). 
+Again inequality (2.35) allows us to control the remaining term as follows +���N2,q +� +e +i +2 t∆ψ0 + u +� +− N2,q +� +e +i +2 t∆ψ0 + v +���� +L1L2+Lr′ +t Lq′ +x +≤ CT ∥u − v∥L∞ +t L2x ++ CT +q−q′ +qq′ +� +ZT +� +e +i +2 t∆ψ0 + u +�2α ++ ZT +� +e +i +2 t∆ψ0 + v +�2α� +∥u − v∥Lq +t Lrx +≤ C +� +T + T +q−q′ +qq′ (2 +√ +2R + M)2α +� +dX(u, v). +Finally, +dX ((Φ(u), Φ(v)) ≤ C +� +T + T +q−q′ +qq′ (2 +√ +2R + M)2α +� +dX(u, v). +Therefore, it suffices to set M = +√ +R and to choose T = T (M) > 0 sufficiently +small in order to conclude that Φ : XT → XT and Φ defines a contraction on XT . +The Banach fixed-point Theorem yields a unique solution u ∈ XT to (4.2). In +particular, ψ(t) = e +i +2 t∆ψ0 + u(t) solves (1.1) with ψ ∈ C([0, T ]; E(R3)). +Uniqueness For R > 0 fixed, let ψ0 ∈ E(R3) with E(ψ0) ≤ R and ψ1, ψ2 ∈ +C([0, T ]; E(R3)) two solutions to (1.1) such that ψ1(0) = ψ2(0) = ψ0. We note that +ψ1 − ψ2 ∈ S1([0, T ] × R3). In particular, from the Strichartz estimate (2.25) and +arguing as for the local existence we obtain that +dX(ψ1, ψ2) ≤ ∥N(ψ1) − N(ψ2)∥N 0([0,T ]×R3) +≤ C +� +T + T +q−q′ +qq′ (ZT (ψ1)2α + ZT (ψ2)2α +� +dX(ψ1, ψ2). +Thus, there exists T1 > 0 sufficiently small such that ψ1 = ψ2 a.e. on [0, T1] × R3. +As T1 only depends on ZT (ψi) with i = 1, 2 one may iterate the argument. This +yields uniqueness in C([0, T ]; E(R3)). +Blow up alternative The proof of the blow-up alternative follows verbatim the +proof of the respective statement for d = 2, see Proposition 3.2 and is omitted. +Membership in Strichartz spaces Statement (2) of Proposition 4.1 follows +directly from the local existence argument and the properties of the free solution, +see (2.21) and (2.29). + +WELL-POSEDNESS FOR NLS WITH NON-VANISHING CONDITIONS AT INFINITY +35 +The proof of the continuous dependence on the initial data requires some pre- +liminary properties and is postponed after Lemma 4.4. 
□

In view of the equivalent characterisation of the energy space $E(\mathbb{R}^3)$ provided by Proposition 2.2, the well-posedness for (1.1) can be reduced to the well-posedness of the following "affine" problem.

Lemma 4.2. Given $\psi_0 \in E(\mathbb{R}^3)$, let $\psi \in C([0,T^*);E(\mathbb{R}^3))$ be the unique maximal solution to (1.1) with initial data $\psi_0$. Then, there exist $c$ with $|c|=1$ and $v \in C([0,T^*);F_c)$ such that $\psi(t) = c + v(t)$ for all $t \in [0,T^*)$, where $v$ is a solution to
\[
(4.3)\qquad i\partial_t v = -\frac12\Delta v + f(|c+v|^2)(c+v), \qquad v(0)=v_0.
\]

Proof. The unique maximal solution exists in virtue of Proposition 4.1; Proposition 2.2 yields the decomposition $\psi(t) = c(t) + v(t)$ for some $|c(t)|=1$ and $v(t) \in F_c$ for all $t \in [0,T^*)$. In particular, $c(0)=c$ and $v(0)=v_0$. It suffices to show that $c(t)=c$ for all $t \in [0,T^*)$. From statement (2) of Proposition 4.1 we infer $\psi - \psi_0 \in C([0,T];H^1(\mathbb{R}^3))$ for all $0<T<T^*$, namely $\psi(t)=c(t)+v(t)$ and $\psi_0=c+v_0$ share the same far-field behavior for all $t\in[0,T]$. It follows that $c(t)=c$ for all $t\in[0,T]$ with $0<T<T^*$. □

Given initial data $\psi_0 = c+v_0$, the solution $\psi$ satisfies $\psi = e^{\frac{i}{2}t\Delta}\psi_0 + \Phi(\psi) \in \{c\} + F_c(\mathbb{R}^3) + H^1(\mathbb{R}^3)$. The connected component of $E(\mathbb{R}^3)$ to which the solution $\psi$ belongs is determined by the constant $c$, see Remark 2.3. Moreover, if $\psi = c+v \in C([0,T);E(\mathbb{R}^3))$ solves (1.1), then $\widetilde\psi = \overline{c}\psi = 1+\overline{c}v$ solves (1.1) and $\widetilde v = \overline{c}v$ solves
\[
(4.4)\qquad i\partial_t \widetilde v = -\frac12\Delta\widetilde v + f(|1+\widetilde v|^2)(1+\widetilde v), \qquad \widetilde v(0)=\overline{c}v_0.
\]
It therefore suffices to consider $c=1$.

Remark 4.3. Note that Lemma 4.2 reduces the well-posedness of (1.1) in $E(\mathbb{R}^3)$ to solving the affine problem (4.3) in $F_c$, where the constant $c$ is determined by the choice of the initial data. In particular, the continuous dependence on the initial data can be stated equivalently in terms of the metric (2.7) with the constants $c_1,c_2$ determined by the initial data.
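The gauge invariance used in the reduction to $c=1$ is elementary but worth recording explicitly; the following one-line computation is ours, assuming (1.1) has the form $i\partial_t\psi = -\frac12\Delta\psi + f(|\psi|^2)\psi$, as (4.3) with $\psi = c+v$ suggests.

```latex
i\partial_t(\overline{c}\psi)
= \overline{c}\, i\partial_t\psi
= \overline{c}\Big(-\tfrac12\Delta\psi + f(|\psi|^2)\psi\Big)
= -\tfrac12\Delta(\overline{c}\psi) + f\big(|\overline{c}\psi|^2\big)(\overline{c}\psi),
```

since $|c|=1$ gives $|\overline{c}\psi| = |\psi|$; thus $\widetilde\psi = \overline{c}\psi$ solves (1.1) whenever $\psi$ does, which justifies normalising the far-field constant to $1$.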
If the nonlinearity is such that $f$ satisfies (1.13), then it is convenient to implement the well-posedness result in homogeneous spaces by exploiting Strichartz estimates on the gradient, see also [22, Remark 4.5] for (1.5) and [35, Proposition 1.1.18] for (1.1) with nonlinearity (1.4). Indeed, Assumption (1.13) ensures that $\nabla N$ is locally Lipschitz. A suitable choice of the functional spaces for the local well-posedness is given by
\[
X_T = C([0,T];F_c(\mathbb{R}^3)) \cap L^q([0,T];\dot W^{1,r}(\mathbb{R}^3)),
\]
where the Strichartz admissible pair is for instance $(q,r) = \big(10,\tfrac{30}{13}\big)$, see [35, Proposition 1.1.18].

However, in the framework of Assumption 1.1, this is ruled out by the lack of regularity of the nonlinearity $f$. More precisely, for $\nabla N$ to be locally Lipschitz we require (1.13).

We proceed to the proof of continuous dependence on the initial data, for which we exploit the decomposition of $\psi$ given by Lemma 4.2.

Lemma 4.4. Let $f$ satisfy Assumption 1.1, $T>0$, $(q,r)$ as defined in (4.1) and $\psi_1,\psi_2 \in C([0,T];E(\mathbb{R}^3))$ such that $\psi_i = c_i + v_i$ with $c_i \in \mathbb{C}$, $|c_i|=1$ and $v_i \in C([0,T];F_c)$ for $i=1,2$. Then, there exists $\theta \in (0,1]$ such that
\[
\|N(\psi_1)-N(\psi_2)\|_{N^0([0,T]\times\mathbb{R}^3)} \leq CT^\theta\big(1+Z_T(\psi_1)+Z_T(\psi_2)+Z_T(\psi_1)^{2\alpha}+Z_T(\psi_2)^{2\alpha}\big) \times \big( |c_1-c_2| + \|v_1-v_2\|_{L^2_t L^6_x} + \big\||\psi_1|-|\psi_2|\big\|_{L^2_t L^2_x} \big).
\]

Proof. First, we notice that for $N_1, N_2$ defined in (2.30), it follows from the first inequality of (2.36) and the decomposition $\psi_i = c_i+v_i$ provided by Lemma 4.2 that
\[
\begin{aligned}
\|N_1(\psi_1)-N_1(\psi_2)\|_{L^1_t L^2_x + L^{4/3}_t L^{3/2}_x}
&\leq \big\| |c_1+v_1|\, \big||\psi_1|-|\psi_2|\big| \big\|_{L^1_t L^2_x + L^{4/3}_t L^{3/2}_x} + \big\| \big||\psi_2|-1\big|\, |c_1-c_2+v_1-v_2| \big\|_{L^1_t L^2_x + L^{4/3}_t L^{3/2}_x} \\
&\leq C\big(T^{\frac12} + T^{\frac14} Z_T(\psi_1)\big) \big\||\psi_1|-|\psi_2|\big\|_{L^2_t L^2_x} + CT^{\frac14} Z_T(\psi_2)\,|c_1-c_2| + CT^{\frac12} Z_T(\psi_2)\, \|v_1-v_2\|_{L^2_t L^6_x}.
\end{aligned}
\]
Second, we observe that $\mathcal{L}^3(\operatorname{supp}(N_2(\psi_i))) \leq Z_T(\psi_i)^2$ for $i=1,2$ from (2.8).
From (2.36), we conclude
\[
\|N_{2,\infty}(\psi_1)-N_{2,\infty}(\psi_2)\|_{L^1_t L^2_x} \leq CT\big(Z_T(\psi_1)+Z_T(\psi_2)\big)|c_1-c_2| + CT^{\frac12}\big(Z_T(\psi_1)^{\frac23}+Z_T(\psi_2)^{\frac23}\big)\|v_1-v_2\|_{L^2_t L^6_x}.
\]
Third, we show the desired bound for $N_{2,q}(\psi_1)-N_{2,q}(\psi_2)$. As $|\psi_i| \geq \frac32$ on $\operatorname{supp}(N_{2,q}(\psi_i))$, it follows from (2.36) that
\[
|N_{2,q}(\psi_1)-N_{2,q}(\psi_2)| \leq C\big(1+|\psi_1|^{2\alpha}+|\psi_2|^{2\alpha}\big)|\psi_1-\psi_2| \leq C\big(|\psi_1|^\beta+|\psi_2|^\beta\big)|\psi_1-\psi_2|,
\]
with $\beta = \max\{2,2\alpha\}$. Hence, it suffices to consider $\alpha \in [1,2)$. We observe that
\[
|N_{2,q}(\psi_1)-N_{2,q}(\psi_2)| \leq C\big(1+|\psi_{1,q}|^{2\alpha}+|\psi_{2,q}|^{2\alpha}\big)|\psi_1-\psi_2|,
\]
see also (3.6). Using again that $\mathcal{L}^3(\operatorname{supp}(N_2(\psi_i))) \leq Z_T(\psi_i)^2$, one recovers
\[
\begin{aligned}
\|N_{2,q}(\psi_1)-N_{2,q}(\psi_2)\|_{N^0}
&\leq \|\psi_1-\psi_2\|_{L^1_t L^2_x} + \big\|\big(|\psi_{1,q}|^{2\alpha}+|\psi_{2,q}|^{2\alpha}\big)|c_1-c_2|\big\|_{L^{4/3}_t L^{3/2}_x} + \big\|\big(|\psi_{1,q}|^{2\alpha}+|\psi_{2,q}|^{2\alpha}\big)|v_1-v_2|\big\|_{L^{\frac{2}{3-\alpha}}_t L^{\frac{6}{2\alpha+1}}_x} \\
&\leq CT\big(Z_T(\psi_1)+Z_T(\psi_2)\big)|c_1-c_2| + CT^{\frac12}\big(Z_T(\psi_1)^{\frac23}+Z_T(\psi_2)^{\frac23}\big)\|v_1-v_2\|_{L^2_t L^6_x} \\
&\quad + C\big(Z_T(\psi_1)^{2\alpha}+Z_T(\psi_2)^{2\alpha}\big)\Big( T^{\frac34}|c_1-c_2| + T^{\frac{2-\alpha}{2}}\|v_1-v_2\|_{L^2_t L^6_x} \Big).
\end{aligned}
\]
Combining the previous estimates, one concludes that there exists $\theta \in (0,1]$ such that
\[
\|N(\psi_1)-N(\psi_2)\|_{N^0} \leq CT^\theta\big(1+Z_T(\psi_1)+Z_T(\psi_2)+Z_T(\psi_1)^{2\alpha}+Z_T(\psi_2)^{2\alpha}\big) \times \big( |c_1-c_2| + \|v_1-v_2\|_{L^2_t L^6_x} + \big\||\psi_1|-|\psi_2|\big\|_{L^2_t L^2_x} \big). \qquad\square
\]

We now prove continuous dependence on the initial data. As in the proof of Proposition 3.2, we rely on an auxiliary metric to compensate for the lack of regularity of the nonlinearity $f$ and to deal with the non-integrability of the wave functions. However, by virtue of Lemma 4.2, it suffices to consider the affine problem (4.3). This decomposition enables us to implement an argument in $L^2([0,T];L^6(\mathbb{R}^3))$. In particular, it is sufficient to prove sequential continuity.

Proof of Proposition 4.1, continued. Let $R>0$, $\psi_0 \in E(\mathbb{R}^3)$ with $E(\psi_0)\leq R$ and $\psi_0^n \in E(\mathbb{R}^3)$ such that $E(\psi_0^n)\leq R$ and $d_E(\psi_0,\psi_0^n)\to 0$.
In particular, there exist complex constants $|c|=1$, $|c_n|=1$ and $v_0, v_0^n \in F_c$ such that
\[
\psi_0 = c + v_0, \qquad \psi_0^n = c_n + v_0^n.
\]
It follows from the equivalence of metrics, see Proposition 2.2, that
\[
\delta(c+v_0, c_n+v_0^n) \to 0,
\]
where $\delta$ is defined in (2.7). There exists $T = T(2E(\psi_0)) > 0$ such that the unique solutions $\psi, \psi^n \in C([0,T];E(\mathbb{R}^3))$ to (1.1) with initial data $\psi_0, \psi_0^n$ respectively satisfy
\[
Z_T(\psi) + Z_T(\psi^n) \leq M
\]
for sufficiently large $n$. Then, Lemma 4.2 implies that there exist $v, v^n \in C([0,T];F_c)$ such that
\[
\psi = c + v, \qquad \psi^n = c_n + v^n.
\]
The proof follows the same lines as the proof of Proposition 3.2. We proceed in three steps, corresponding to (3.16), (3.17) and (3.18) respectively.

Step 1: We show that there exists $T_1 = T_1(M) > 0$ such that
\[
(4.5)\qquad \|v-v^n\|_{L^2([0,T_1];L^6(\mathbb{R}^3))} + \big\||\psi|-|\psi^n|\big\|_{L^2([0,T_1];L^2(\mathbb{R}^3))} \leq C\,\delta(c+v_0, c_n+v_0^n).
\]
For the first contribution, we observe that
\[
\begin{aligned}
\|v-v^n\|_{L^2([0,T];L^6(\mathbb{R}^3))}
&= \big\| e^{\frac{i}{2}t\Delta}\psi_0 - c + \Phi(\psi) - e^{\frac{i}{2}t\Delta}\psi_0^n + c_n - \Phi(\psi^n) \big\|_{L^2_t L^6_x} \\
&\leq \big\| e^{\frac{i}{2}t\Delta}(\psi_0-\psi_0^n) - (\psi_0-\psi_0^n) \big\|_{L^2_t L^6_x} + \|v_0 - v_0^n\|_{L^2_t L^6_x} + \|N(\psi)-N(\psi^n)\|_{N^0} \\
&\leq C(T+T^{\frac12})\,\delta(c+v_0, c_n+v_0^n) + \|N(\psi)-N(\psi^n)\|_{N^0},
\end{aligned}
\]
where we used (2.25) in the second-to-last inequality and (2.21) to control the difference of the free solutions in the last inequality. More precisely,
\[
\big\| e^{\frac{i}{2}t\Delta}(\psi_0-\psi_0^n) - (\psi_0-\psi_0^n) \big\|_{L^2_t L^6_x} \leq T^{\frac12} \big\| e^{\frac{i}{2}t\Delta}(\nabla\psi_0-\nabla\psi_0^n) - (\nabla\psi_0-\nabla\psi_0^n) \big\|_{L^\infty_t L^2_x} \leq CT \|\nabla\psi_0-\nabla\psi_0^n\|_{L^2_x} \leq CT\,\delta(c+v_0, c_n+v_0^n).
\]
To bound the second contribution in (4.5), we proceed as in (3.21). More precisely, we observe that (3.24) remains valid upon replacing the admissible Strichartz pair $(4,4)$ for $d=2$ by $\big(\frac83,4\big)$ for $d=3$. Hence, the respective version of (3.24) reads: there exists $\theta_2 \in (0,1]$ such that
\[
(4.6)\qquad \big\||\psi|-|\psi^n|\big\|_{L^2([0,T];L^2(\mathbb{R}^3))} \leq CT^{\theta_2}\big(1+M+M^{1+2\alpha}\big) \times \big( \delta(c+v_0, c_n+v_0^n) + \|\Phi(\psi)-\Phi(\psi^n)\|_{S^0} \big).
\]
Summing up and applying the Strichartz estimate (2.25), we conclude from Lemma 4.4 that there exist $C = C(M) > 0$ and $\theta > 0$ such that
\[
\|v-v^n\|_{L^2([0,T_1];L^6(\mathbb{R}^3))} + \big\||\psi^n|-|\psi|\big\|_{L^2([0,T_1];L^2(\mathbb{R}^3))} \leq C_M T^\theta \Big( \delta(c+v_0, c_n+v_0^n) + C_M T^\theta\big( \|v-v^n\|_{L^2_t L^6_x} + \big\||\psi^n|-|\psi|\big\|_{L^2_t L^2_x} \big) \Big).
\]
For $T_1 > 0$ sufficiently small, depending only on $M$, inequality (4.5) follows and Step 1 is complete.

Step 2: We show that (4.5) implies that there exists $T_2 = T_2(M) > 0$ such that
\[
(4.7)\qquad \|\nabla v - \nabla v^n\|_{L^\infty([0,T_2];L^2(\mathbb{R}^3))} + \|\nabla v - \nabla v^n\|_{L^q([0,T_2];L^r(\mathbb{R}^3))} \to 0
\]
as $n \to \infty$, where $(q,r)$ is as in (4.1). The proof follows closely the one of (3.17), to which we refer for full details. In view of the Strichartz estimates of Lemma 2.14, it follows that
\[
(4.8)\qquad \big\|\nabla e^{\frac{i}{2}t\Delta}(c+v_0) - \nabla e^{\frac{i}{2}t\Delta}(c_n+v_0^n)\big\|_{L^\infty_t L^2_x} + \big\|\nabla e^{\frac{i}{2}t\Delta}(c+v_0) - \nabla e^{\frac{i}{2}t\Delta}(c_n+v_0^n)\big\|_{L^q_t L^r_x} \leq C\|\nabla v_0 - \nabla v_0^n\|_{L^2_x}.
\]
To control the non-homogeneous term, we recall that (2.34) yields
\[
|\nabla N(\psi)| \leq C(1+|\psi|^{2\alpha})|\nabla\psi| \leq C(1+|\psi_q|^{2\alpha})|\nabla\psi|.
\]
More precisely, for $G_\infty, G_q$ defined in (2.38) and upon applying (2.25), we split the non-homogeneous term as
\[
(4.9)\qquad
\begin{aligned}
\Big\| -i\int_0^t e^{\frac{i}{2}(t-s)\Delta}\big(\nabla N(\psi) - \nabla N(\psi^n)\big)(s)\,ds \Big\|_{S^0([0,T]\times\mathbb{R}^3)}
&\leq \big\|G_\infty(\psi)|\nabla v - \nabla v^n|\big\|_{L^1_t L^2_x} + \big\|G_q(\psi)|\nabla v - \nabla v^n|\big\|_{L^{q'}_t L^{r'}_x} \\
&\quad + \big\|(G_\infty(\psi)-G_\infty(\psi^n))|\nabla v|\big\|_{L^1_t L^2_x} + \big\|(G_q(\psi)-G_q(\psi^n))|\nabla v|\big\|_{L^{q'}_t L^{r'}_x} \\
&\leq CT\|\nabla v - \nabla v^n\|_{L^\infty_t L^2_x} + CT^{\frac{q-q'}{qq'}} Z_T(\psi)^{2\alpha}\|\nabla v - \nabla v^n\|_{L^q_t L^r_x} \\
&\quad + \big\|(G_\infty(\psi)-G_\infty(\psi^n))|\nabla v|\big\|_{L^1_t L^2_x} + \big\|(G_q(\psi)-G_q(\psi^n))|\nabla v|\big\|_{L^{q'}_t L^{r'}_x}.
\end{aligned}
\]
Thus, for $T_2 > 0$ sufficiently small so that
\[
C\Big( T_2 + T_2^{\frac{q-q'}{qq'}} Z_T(\psi)^{2\alpha} \Big) \leq \frac12,
\]
we conclude from (4.8) and (4.9) that
\[
\|\nabla v - \nabla v^n\|_{L^\infty([0,T_2];L^2(\mathbb{R}^3))} + \|\nabla v - \nabla v^n\|_{L^q([0,T_2];L^r(\mathbb{R}^3))} \leq C\,\delta(c+v_0, c_n+v_0^n) + \big\|(G_\infty(\psi)-G_\infty(\psi^n))|\nabla v|\big\|_{L^1_t L^2_x} + \big\|(G_q(\psi)-G_q(\psi^n))|\nabla v|\big\|_{L^{q'}_t L^{r'}_x}.
\]
To conclude that (4.7) holds, it suffices to show that the second line of the right-hand side converges to $0$ as $n$ goes to infinity.
We proceed by contradiction, assuming that there exist a subsequence, still denoted $\psi^n$, and $\varepsilon > 0$ such that for all $n$ sufficiently large,
\[
(4.10)\qquad \big\|(G_\infty(\psi)-G_\infty(\psi^n))|\nabla v|\big\|_{L^1_t L^2_x} + \big\|(G_q(\psi)-G_q(\psi^n))|\nabla v|\big\|_{L^{q'}_t L^{r'}_x} \geq \varepsilon.
\]
Inequality (4.5) implies, up to extracting a further subsequence, still denoted $\psi^n$, that $\psi^n = c_n + v^n$ converges to $\psi = c+v$ a.e. on $[0,T)\times\mathbb{R}^3$. By virtue of Assumption 1.1, $G_\infty, G_q$ are continuous. Therefore,
\[
\big|G_\infty(\psi)-G_\infty(\psi^n)\big|\,|\nabla v| \to 0 \quad \text{a.e. in } [0,T)\times\mathbb{R}^3, \qquad \big|G_q(\psi)-G_q(\psi^n)\big|\,|\nabla v| \to 0 \quad \text{a.e. in } [0,T)\times\mathbb{R}^3.
\]
Further,
\[
\|G_q(\psi^n)\|_{L^\infty_t L^{\frac{2(\alpha+1)}{2\alpha}}_x(\mathbb{R}^3)} \leq C\big\||\psi^n|^{2\alpha}(1-\chi(\psi^n))\big\|_{L^\infty_t L^{\frac{2(\alpha+1)}{2\alpha}}_x(\mathbb{R}^3)} \leq \mathcal{L}^3\big(\operatorname{supp}(1-\chi(\psi^n))\big)^{\frac{\alpha}{\alpha+1}} + \|\psi_{q,n}\|^{2\alpha}_{L^\infty L^{2(\alpha+1)}} \leq C\big( Z_T(\psi^n)^{\frac{2\alpha}{1+\alpha}} + Z_T(\psi^n)^{2\alpha} \big) \leq C\big( M^{\frac{2\alpha}{\alpha+1}} + M^{2\alpha} \big)
\]
for all $n \in \mathbb{N}$, where we exploited (2.8), namely that the measure of $\operatorname{supp}(1-\chi(\psi^n))$ is finite. We obtain that there exists $\varphi \in L^\infty([0,T];L^{2(\alpha+1)}(\mathbb{R}^3))$ such that $|\psi_{q,n}| \leq \varphi$ a.e. on $[0,T)\times\mathbb{R}^3$. Therefore, we control
\[
\big|G_\infty(\psi)-G_\infty(\psi^n)\big|\,|\nabla v| \leq C|\nabla\psi| \in L^1([0,T);L^2(\mathbb{R}^3)), \qquad \big|G_q(\psi)-G_q(\psi^n)\big|\,|\nabla v| \leq C\big(|\psi|^{2\alpha}+|\varphi|^{2\alpha}\big)|\nabla\psi| \in L^{q'}([0,T);L^{r'}(\mathbb{R}^3)).
\]
The dominated convergence Theorem then implies that (4.10) is violated, (4.7) follows and Step 2 is complete.

Step 3. It remains to show that
\[
(4.11)\qquad \big\||\psi|-|\psi^n|\big\|_{L^\infty([0,T];L^2(\mathbb{R}^3))} \to 0.
\]
More precisely, we need to upgrade
\[
\big\||\psi|-|\psi^n|\big\|_{L^2([0,T];L^2(\mathbb{R}^3))} \to 0,
\]
so that the convergence holds for almost all times $t \in [0,T]$. The proof follows closely the respective proof for $d=2$, namely the proof of (3.18). We omit the details. □

Next, we show a persistence of regularity property and that the Hamiltonian energy $H$ is conserved for regular solutions. The proof is completely analogous to the one for $d=2$, except that here we can exploit the affine structure of the energy space $E$ and that Sobolev embeddings depend on the dimension.
For the sake of clarity, we provide the proof of this lemma.

Lemma 4.5. Let $d=3$, $f$ as in Assumption 1.1 and $\psi_0 \in E(\mathbb{R}^3)$ such that $\Delta\psi_0 \in L^2(\mathbb{R}^3)$. Then, the unique maximal solution $\psi \in C([0,T^*);E(\mathbb{R}^3))$ satisfies
\[
\Delta\psi \in C([0,T];L^2(\mathbb{R}^3)), \qquad \partial_t\psi \in C([0,T];L^2(\mathbb{R}^3))
\]
for all $T \in [0,T^*)$. Moreover, $H(\psi)(t) = H(\psi_0)$ for all $t \in [0,T^*)$.

Proof. In view of Lemma 4.2, one has $\psi(t) = c + v(t)$ for all $t \in [0,T^*)$ and it suffices to consider $v \in C([0,T^*);F_c(\mathbb{R}^3))$ solution to (4.3). The assumption $v_0 \in F_c(\mathbb{R}^3)\cap\dot H^2(\mathbb{R}^3)$ yields that $\partial_t v(0) \in L^2(\mathbb{R}^3)$. Indeed, by continuity in time one has
\[
i\partial_t v(0) = -\frac12\Delta v(0) + N(c+v)(0).
\]
As $v(0) = v_0 \in F_c(\mathbb{R}^3)\cap\dot H^2(\mathbb{R}^3) \subset L^\infty(\mathbb{R}^3)$, it follows that $N_1(c+v_0) \in L^2(\mathbb{R}^3)$ from (2.32) and that $N_2(c+v_0) \in L^\infty(\mathbb{R}^3)$, hence in $L^2(\mathbb{R}^3)$, by means of (2.8). By differentiating the Duhamel formula in time and applying Corollary 2.13, it follows that
\[
\partial_t v(t) = e^{\frac{i}{2}t\Delta}\Big(\frac{i}{2}\Delta v(0) - iN(c+v)(0)\Big) - i\int_0^t e^{\frac{i}{2}s\Delta}\,\partial_t\big(N(c+v)\big)(t-s)\,ds = e^{\frac{i}{2}t\Delta}\,\partial_t v(0) - i\int_0^t e^{\frac{i}{2}(t-s)\Delta}\big(G_1(c+v)\,\partial_t v + G_2(c+v)\,\overline{\partial_t v}\big)(s)\,ds.
\]
By means of the Strichartz estimates of Lemma 2.14, it follows for the admissible pair $(q,r)$ as in (4.1) and any $0<T<T^*$ that
\[
\|\partial_t v\|_{L^\infty([0,T];L^2(\mathbb{R}^3))} + \|\partial_t v\|_{L^q([0,T];L^r(\mathbb{R}^3))} \leq 2\|\partial_t v(0)\|_{L^2(\mathbb{R}^3)} + \big\| G_1(c+v)\partial_t v + G_2(c+v)\overline{\partial_t v} \big\|_{N^0([0,T]\times\mathbb{R}^3)},
\]
with $G_1, G_2$ defined in (2.37). Upon splitting $G_i$ into $G_{i,\infty}$ and $G_{i,q}$, as in (2.38), it follows that
\[
\begin{aligned}
\|G_i(c+v)|\partial_t v|\|_{N^0([0,T]\times\mathbb{R}^3)}
&\leq CT\|\partial_t v\|_{L^\infty([0,T];L^2(\mathbb{R}^3))} + \big\| |c+v|^{2\alpha}(1-\chi(c+v))|\partial_t v| \big\|_{N^0([0,T]\times\mathbb{R}^3)} \\
&\leq CT\|\partial_t v\|_{L^\infty([0,T];L^2(\mathbb{R}^3))} + \big\| |(c+v)_q|^{2\alpha}|\partial_t v| \big\|_{L^{q'}([0,T];L^{r'}(\mathbb{R}^3))} \\
&\leq CT\|\partial_t v\|_{L^\infty([0,T];L^2(\mathbb{R}^3))} + T^{\frac{q-q'}{qq'}} Z_T(c+v)^{2\alpha}\|\partial_t v\|_{L^q([0,T];L^r(\mathbb{R}^3))}.
\end{aligned}
\]
Therefore,
\[
\|\partial_t v\|_{L^\infty([0,T];L^2(\mathbb{R}^3))} + \|\partial_t v\|_{L^q([0,T];L^r(\mathbb{R}^3))} \leq 2\|\partial_t v(0)\|_{L^2(\mathbb{R}^3)} + CT\|\partial_t v\|_{L^\infty([0,T];L^2(\mathbb{R}^3))} + T^{\frac{q-q'}{qq'}} Z_T(c+v)^{2\alpha}\|\partial_t v\|_{L^q([0,T];L^r(\mathbb{R}^3))}.
\]
For $0 < T_1 < T^*$ sufficiently small, it holds
\[
\|\partial_t v\|_{L^\infty([0,T_1];L^2(\mathbb{R}^3))} + \|\partial_t v\|_{L^q([0,T_1];L^r(\mathbb{R}^3))} \leq 4\|\partial_t v(0)\|_{L^2(\mathbb{R}^3)}.
\]
Further,
\[
\|\Delta v\|_{L^\infty([0,T_1];L^2(\mathbb{R}^3))} \leq 2\|\partial_t v\|_{L^\infty([0,T_1];L^2(\mathbb{R}^3))} + 2\|N(c+v)\|_{L^\infty([0,T_1];L^2(\mathbb{R}^3))} \leq 2\|\partial_t v\|_{L^\infty([0,T_1];L^2(\mathbb{R}^3))} + 4Z_T(c+v) + \big\||(c+v)_q|^{2\alpha+1}\big\|_{L^\infty([0,T_1];L^2(\mathbb{R}^3))}.
\]
Note that $|(c+v)_q| \geq 2$ and $|v| \geq 1$ on $\operatorname{supp}(1-\chi(c+v))$. If $\alpha \in (0,1]$, then
\[
\big\||(c+v)_q|^{2\alpha+1}\big\|_{L^\infty([0,T_1];L^2(\mathbb{R}^3))} \leq C\|v\|^{1+2\alpha}_{L^\infty([0,T_1];L^6(\mathbb{R}^3))} \leq CZ_T(c+v)^{1+2\alpha}.
\]
If $\alpha \in (1,2)$, then we apply the Gagliardo-Nirenberg inequality to obtain
\[
\big\||(c+v)_q|^{2\alpha+1}\big\|_{L^\infty([0,T_1];L^2(\mathbb{R}^3))} \leq C\|v\|^{2-\alpha}_{L^\infty([0,T_1];L^6(\mathbb{R}^3))}\|\Delta v\|^{\alpha-1}_{L^\infty([0,T_1];L^2(\mathbb{R}^3))},
\]
where we note that $0 < \alpha-1 < 1$. It follows that
\[
\Delta v \in C([0,T_1];L^2(\mathbb{R}^3)).
\]
Finally, we conclude that $H(c+v)(t) = H(c+v_0)$ by performing the analogous argument as in the proof of Lemma 3.5 for $d=2$. □

Proof of Theorem 1.4 in 3D. It only remains to show that the Hamiltonian energy is conserved for all solutions $\psi \in C([0,T^*);E(\mathbb{R}^3))$, which follows from Proposition 4.1, approximation by smooth solutions by means of Lemma 2.10, together with Lemma 4.5. □

4.1. Global well-posedness. As in the 2D case, the lack of a suitable notion of (renormalized) mass and the lack of sign-definiteness of the Hamiltonian energy $H$ constitute the main obstacles to proving global well-posedness.

Assuming that $F \geq 0$ allows one to control the functional $E(\cdot)$, in terms of which the blow-up alternative in Proposition 4.1 is stated, by $H(\cdot)$, see Lemma 2.8. Global existence is proven following closely the method detailed in Section 3.2 for $d=2$.

Corollary 4.6. Let Assumption 1.2 be satisfied and, in addition, let the nonlinear potential $F$ defined in (1.3) be non-negative, namely $F \geq 0$. Then, the solution constructed in Proposition 4.1 is global, i.e. $T^* = +\infty$.

This proves Theorem 1.5 for $d=3$.
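The mechanism behind Corollary 4.6 can be condensed into one chain of implications; this is our summary of the argument, with $C$ denoting the constant from Lemma 2.8.

```latex
F \ge 0
\;\Longrightarrow\;
\sup_{t \in [0,T^*)} E(\psi(t)) \le C\,H(\psi(t)) = C\,H(\psi_0) < \infty
\;\Longrightarrow\;
T^* = +\infty,
```

where the first implication combines Lemma 2.8 with the conservation of $H$ just established, and the second is the contrapositive of the blow-up alternative in Proposition 4.1, which is stated in terms of $E(\cdot)$.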
Exploiting the affine structure of the energy space $E(\mathbb{R}^3)$, we also prove global well-posedness for a class of equations for which the associated nonlinear potential $F(|\psi|^2)$ fails to be non-negative. More precisely, we consider nonlinearities that are defocusing at leading order, such as e.g. competing power-type nonlinearities of the form
\[
f(r) = a_1(r^{\alpha_1}-1) - a_2(r^{\alpha_2}-1),
\]
where $a_1,a_2 > 0$ and $0 < \alpha_2 < \alpha_1 < 2$. Such equations arise for instance in nonlinear optics to investigate self-focusing phenomena in a defocusing medium, see [5, 45, 54]. We assume the defocusing nonlinearity to be dominant for large intensities $|\psi|^2 \gg \rho_0$ and focusing phenomena to occur for small intensities $|\psi|^2 \leq \rho_0$, where $\rho_0$ is determined by the far-field. Upon a suitable scaling we may assume that $\rho_0 = 1$.

Assumption 4.7. Let $f$ be a real-valued function satisfying Assumption 1.2 and further of the form
\[
f(r) = a_1(r^{\alpha_1}-1) + g(r)
\]
with $a_1 > 0$ and $0 < \alpha_1 < 2$, and where $g \in C^0([0,\infty)) \cap C^1(0,\infty)$ is such that $g(1) = 0$ and
\[
|g(\rho)|,\ |\rho g'(\rho)| \leq C(1+\rho^{\alpha_2})
\]
for all $\rho \geq 0$ and with $0 \leq \alpha_2 < \alpha_1$. In addition, $F(\rho) > 0$ for all $\rho > 1$.

Local well-posedness for (4.3) with $f$ satisfying Assumption 4.7 is provided by Theorem 1.4. We recall from Lemma 4.2 that any $\psi \in E(\mathbb{R}^3)$ admits the decomposition $\psi = c + v \in E(\mathbb{R}^3)$ with $|c| = 1$ and $v \in F_c$. In view of (4.4), it suffices to consider $c = 1$. Following [43], for any $\psi = 1 + v \in E(\mathbb{R}^3)$, we define
\[
M(\psi) = H(\psi) + C_0\int_{\mathbb{R}^3} |\operatorname{Re}(v)|^2\,dx
\]
for a suitable $C_0 > 0$. The functional $M(\cdot)$ is well-defined. Further, $M(\psi)$ allows one to control $E(\psi)$.

Lemma 4.8. Let $f$ satisfy Assumption 4.7 and $v \in F_1$. For all $C_0 > 0$, there exists $C_1 = C_1(E(1+v)) > 0$ such that
\[
M(1+v) \leq C_1(E(1+v)).
\]
Furthermore, there exist $C_0, C_2 > 0$ such that
\[
E(1+v) \leq C_2 M(1+v).
\]

Proof.
To prove the first inequality, it suffices to observe that
\[
\|\operatorname{Re}(v)\|^2_{L^2(\mathbb{R}^3)} \leq \big\| |v|^2 + 2\operatorname{Re}(v) \big\|^2_{L^2(\mathbb{R}^3)} = \big\| |1+v|^2 - 1 \big\|^2_{L^2(\mathbb{R}^3)} \leq 2E_{GL}(1+v),
\]
with $E_{GL}(1+v)$ defined in (1.6). The claim then follows by arguing as in the proof of Lemma 2.5. To show the second inequality, it suffices to prove that
\[
E(1+v) + C\int_{\mathbb{R}^3} F_-(|1+v|^2)\,dx \leq C\Big( \frac12\|\nabla v\|^2_{L^2(\mathbb{R}^3)} + \int_{\mathbb{R}^3} F_+(|1+v|^2)\,dx + C_0\|\operatorname{Re}(v)\|^2_{L^2(\mathbb{R}^3)} \Big).
\]
Let $\delta \in (0,1)$ be such that the expansion (2.16) of $F$ yields
\[
\big\| (|1+v|-1)\mathbf{1}_{\{||1+v|^2-1|<\delta\}} \big\|^2_{L^2(\mathbb{R}^3)} \leq C_l \int_{\mathbb{R}^3} F(|1+v|^2)\,\mathbf{1}_{\{||1+v|^2-1|<\delta\}}\,dx
\]
for some $C_l > 0$. On the other hand, there exists $C_h > 0$ such that
\[
\int_{\mathbb{R}^3} \big||1+v|-1\big|^2\, \mathbf{1}_{\{|1+v|^2\geq 1+\delta\}}\,dx \leq C\int_{\mathbb{R}^3} \big(|1+v|^2-1\big)\mathbf{1}_{\{|1+v|^2\geq 1+\delta\}}\,dx \leq C_h\int_{\mathbb{R}^3} F(|1+v|^2)\,\mathbf{1}_{\{|1+v|^2\geq 1+\delta\}}\,dx
\]
by Assumption 4.7. Let $C := \max\{C_l,C_h\}$. Note that $\operatorname{supp}(F_-(|1+v|^2)) \subset \{|1+v|^2 < 1-\delta\}$ and if $|1+v|^2 \leq 1-\delta$, then necessarily $\operatorname{Re}(v) \in (-1-\sqrt{1-\delta},\, -1+\sqrt{1-\delta})$. In particular,
\[
\{|1+v|^2 < 1-\delta\} \subset \{|\operatorname{Re}(v)| > \eta\}, \qquad \eta := 1-\sqrt{1-\delta},
\]
from which we conclude
\[
\int_{\mathbb{R}^3} \Big( \big||1+v|-1\big|^2 + CF_-(|1+v|^2) \Big)\mathbf{1}_{\{|1+v|^2\leq 1-\delta\}}\,dx \leq \frac{1+C}{\eta^2}\int_{\mathbb{R}^3} |\operatorname{Re}(v)|^2\,dx.
\]
Hence, there exists $C_0 > 0$ such that the claim follows. This completes the proof. □

While $M(1+v)(t)$ is not conserved for solutions to (4.4), it enjoys an exponential bound in time.

Lemma 4.9. Let $f$ satisfy Assumption 4.7, $v_0 \in F_1$ and let $v \in C([0,T^*);F_1)$ be the unique maximal solution to (4.4) with initial data $v_0$. Then there exists $C > 0$ such that
\[
M(1+v)(t) \leq e^{Ct} C_1(E(1+v_0))
\]
for all $t \in [0,T^*)$, where $C_1 = C_1(E(1+v_0)) > 0$ is as in Lemma 4.8. In particular, there exists $C_3 = C_3(E(1+v_0)) > 0$ such that
\[
E(1+v)(t) \leq e^{Ct} C_3(E(1+v_0))
\]
for all $t \in [0,T^*)$.

Proof. In a first step, let $v_0 \in F_1$, i.e. $1+v_0 \in E(\mathbb{R}^3)$, be such that $\Delta v_0 \in L^2(\mathbb{R}^3)$; then $1+v \in C([0,T^*);E(\mathbb{R}^3))$ and $\Delta v \in C([0,T];L^2(\mathbb{R}^3))$ for all $0 < T < T^*$ by virtue of Theorem 1.4.
It follows that
\[
\frac{d}{dt}M(\psi)(t) = C_0\frac{d}{dt}\int_{\mathbb{R}^3} |\operatorname{Re}(v)|^2\,dx,
\]
where we exploited that $H(\psi)(t) = H(\psi_0)$ for all $t \in [0,T]$ from statement (4) of Theorem 1.4. Therefore,
\[
\frac{d}{dt}\int_{\mathbb{R}^3} |\operatorname{Re}(v)|^2\,dx = -2\int_{\mathbb{R}^3} \operatorname{Re}(v)\operatorname{Im}(\Delta v)\,dx + 2\int_{\mathbb{R}^3} f(|1+v|^2)\operatorname{Re}(v)\operatorname{Im}(1+v)\,dx \leq \int_{\mathbb{R}^3} |\nabla v|^2\,dx + 2\int_{\mathbb{R}^3} f(|1+v|^2)\operatorname{Re}(v)\operatorname{Im}(v)\,dx,
\]
upon integrating by parts and using Young's inequality. The second term is bounded as
\[
\begin{aligned}
2\int_{\mathbb{R}^3} f(|1+v|^2)\operatorname{Re}(v)\operatorname{Im}(1+v)\,dx
&= 2\int_{\mathbb{R}^3} f(|1+v|^2)\operatorname{Im}(v)\operatorname{Re}(v)\,\mathbf{1}_{\{|1+v|^2\leq 1-\delta\}}\,dx + 2\int_{\mathbb{R}^3} f(|1+v|^2)\operatorname{Im}(v)\operatorname{Re}(v)\,\mathbf{1}_{\{||1+v|^2-1|<\delta\}}\,dx \\
&\quad + 2\int_{\mathbb{R}^3} f(|1+v|^2)\operatorname{Im}(v)\operatorname{Re}(v)\,\mathbf{1}_{\{|1+v|^2\geq 1+\delta\}}\,dx =: I_1 + I_2 + I_3,
\end{aligned}
\]
with $\delta \in (0,1)$ to be chosen later. We estimate the terms separately and note that if $|1+v|^2 = |v|^2 + 2\operatorname{Re}(v) + 1 < 1-\delta$, then necessarily $\operatorname{Re}(v) \in (-1-\sqrt{1-\delta},\, -1+\sqrt{1-\delta})$. Hence, for $\eta = 1-\sqrt{1-\delta}$ we obtain
\[
|I_1| \leq \frac{C}{\eta^2}\int_{\mathbb{R}^3} |\operatorname{Re}(v)|^2\,dx.
\]
In order to bound $I_2$, we rely on the expansion (2.16), valid for all $\rho \in (1-\delta,1+\delta)$. Upon using the local Lipschitz property of $f$ and $f(1) = 0$, one has
\[
|I_2| \leq C\int_{\mathbb{R}^3} (|1+v|^2-1)^2\,\mathbf{1}_{\{||1+v|^2-1|<\delta\}}\,dx \leq C\int_{\mathbb{R}^3} F(|1+v|^2)\,\mathbf{1}_{\{||1+v|^2-1|<\delta\}}\,dx.
\]
It remains to control $I_3$. In virtue of Assumption 4.7, it holds that $F(\rho) > 0$ for all $\rho > 1$ and there exist $C > 0$, $R_0 > 1$ such that $F(\rho) \geq C\rho^{1+\alpha_1}$ for all $\rho \geq R_0$. It follows that
\[
|I_3| \leq \frac{CR_0^{1+\alpha_1}}{m}\int_{\mathbb{R}^3} F(|\psi|^2)\,\mathbf{1}_{\{1+\delta\leq|\psi|^2\leq R_0\}}\,dx + C\int_{\mathbb{R}^3} F(|\psi|^2)\,\mathbf{1}_{\{|\psi|^2\geq R_0\}}\,dx,
\]
where $m = \min_{\rho\in[1+\delta,R_0]} F(\rho) > 0$. We conclude that there exists $C > 0$ such that
\[
\frac{d}{dt}M(t) \leq C\Big( H(1+v)(t) + \int_{\mathbb{R}^3} F_-(|1+v|^2)\,dx \Big) + \frac{C}{\eta^2}\|\operatorname{Re}(v)\|^2_{L^2}.
\]
Further, using that $\operatorname{supp}(F_-) \subset \{|1+v|^2 < 1-\delta\} \subset \{|\operatorname{Re}(v)| > \eta\}$, we infer
\[
\int_{\mathbb{R}^3} F_-(|1+v|^2)\,dx \leq \frac{C}{\eta^2}\|\operatorname{Re}(v)\|^2_{L^2}.
\]
Finally, upon increasing $C_0$ if necessary, there exists $C > 0$ such that
\[
\frac{d}{dt}M(t) \leq CM(t).
\]
In virtue of Lemma 4.8 and Gronwall's Lemma, one has
\[
M(1+v)(t) \leq e^{Ct} C_1\big(E(1+v_0)\big),
\]
where $C_1$ is as given in Lemma 4.8. The desired bound on $E(1+v)(t)$ then follows from Lemma 4.8.
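For completeness, the Gronwall step can be written out; this is a routine computation added here only to make the constants explicit.

```latex
\frac{d}{dt}\Big(e^{-Ct}M(1+v)(t)\Big)
= e^{-Ct}\Big(\frac{d}{dt}M(1+v)(t) - C\,M(1+v)(t)\Big) \le 0,
\qquad\text{hence}\qquad
M(1+v)(t) \le e^{Ct}M(1+v_0) \le e^{Ct}C_1\big(E(1+v_0)\big),
```

where the last inequality is the first estimate of Lemma 4.8 applied at time $t=0$; the bound on $E(1+v)(t)$ then follows from $E(1+v) \le C_2\,M(1+v)$.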
The statement follows for any initial data of finite energy by approximation, persistence of regularity and the continuous dependence on the initial data, provided by Lemma 2.10 and Theorem 1.4 respectively. □

Global existence then follows from Lemma 4.9 and Theorem 1.4 by means of the blow-up alternative. In particular, this completes the proof of Theorem 1.6.

5. Lipschitz continuity of the solution map

In this section, we provide the proof of Theorem 1.7. Namely, we show that provided $f$ satisfies (1.13) in addition to Assumption 1.1, the solution map is Lipschitz continuous on bounded sets of $E(\mathbb{R}^d)$.

Proof of Theorem 1.7. Let $R > 0$ and $\psi_0^1, \psi_0^2 \in E(\mathbb{R}^d)$ such that $E(\psi_0^i) \leq R$ for $i=1,2$. Then, for all $0 < T < T^*(\mathcal{O}_R)$ there exists $M > 0$ such that the unique maximal solutions $\psi_1,\psi_2 \in C([0,T];E(\mathbb{R}^d))$ satisfy
\[
Z_T(\psi_1) + Z_T(\psi_2) \leq M,
\]
with $Z_T$ defined in (3.2). By virtue of (2.14), it follows that
\[
(5.1)\qquad
\begin{aligned}
d_E(\psi_1(t),\psi_2(t)) &\leq C(1+M)\,d_E\big(e^{\frac{i}{2}t\Delta}\psi_0^1, e^{\frac{i}{2}t\Delta}\psi_0^2\big) + C(1+M)\Big\| -i\int_0^t e^{\frac{i}{2}(t-s)\Delta}\big(N(\psi_1(s))-N(\psi_2(s))\big)\,ds \Big\|_{L^\infty([0,T];H^1(\mathbb{R}^d))} \\
&\leq C(1+M)\,d_E(\psi_0^1,\psi_0^2) + C(1+M)\,\|N(\psi_1)-N(\psi_2)\|_{N^1([0,T]\times\mathbb{R}^d)},
\end{aligned}
\]
where we used (2.22) to control the distance of the free solutions and the Strichartz estimate (2.25) to control the nonlinear flow. Lemma 3.4 and Lemma 4.4, for $d = 2,3$ respectively, yield that
\[
(5.2)\qquad \|N(\psi_1)-N(\psi_2)\|_{N^0([0,T]\times\mathbb{R}^d)} \leq C(1+M+M^{2\alpha})\,T^\theta \sup_{t\in[0,T]} d_E(\psi_1(t),\psi_2(t)).
\]
It remains to control $\nabla N(\psi_1) - \nabla N(\psi_2)$ in $N^0([0,T]\times\mathbb{R}^d)$. To that end, we recall that $\nabla N(\psi_i)$ can be decomposed by means of the functions $G_\infty(\psi_i), G_q(\psi_i)$ defined in (2.38). One has that
\[
(5.3)\qquad
\begin{aligned}
\|\nabla N(\psi_1)-\nabla N(\psi_2)\|_{N^0([0,T]\times\mathbb{R}^d)}
&\leq \big\||G_\infty(\psi_1)||\nabla\psi_1-\nabla\psi_2|\big\|_{L^\infty([0,T];L^2(\mathbb{R}^d))} + \big\||G_q(\psi_1)||\nabla\psi_1-\nabla\psi_2|\big\|_{N^0([0,T]\times\mathbb{R}^d)} \\
&\quad + \big\||G_\infty(\psi_1)-G_\infty(\psi_2)||\nabla\psi_2|\big\|_{N^0([0,T]\times\mathbb{R}^d)} + \big\||G_q(\psi_1)-G_q(\psi_2)||\nabla\psi_2|\big\|_{N^0([0,T]\times\mathbb{R}^d)}.
\end{aligned}
\]
Note that (2.39) yields
\[
|G_\infty(\psi_1)| \leq C, \qquad |G_q(\psi_1)| \leq C(1+|\psi_1|^{2\alpha}).
\]
Further, (1.13) yields that $G_\infty, G_q$ are locally Lipschitz, namely
\[
(5.4)\qquad |G_\infty(\psi_1)-G_\infty(\psi_2)| \leq C\big||\psi_1|-|\psi_2|\big|, \qquad |G_q(\psi_1)-G_q(\psi_2)| \leq C\big(1+|\psi_1|^{2\beta}+|\psi_2|^{2\beta}\big)\big||\psi_1|-|\psi_2|\big|,
\]
with $\beta = \max\{0,\alpha-\frac12\}$. As $|\psi_i| \geq 1$ on the support of $G_q(\psi_i)$, we may assume in the following that $\beta \geq 1$.

In the following, we distinguish two cases.

Case 1: $d=2$. Let the admissible pair be $(q_1,r_1) = \big(\frac{2(\alpha+1)}{\alpha},\, 2(\alpha+1)\big)$, see also (3.1). To bound the first line on the right-hand side of (5.3), we observe that
\[
\big\||G_\infty(\psi_1)||\nabla\psi_1-\nabla\psi_2|\big\|_{L^1([0,T];L^2(\mathbb{R}^2))} \leq CT\|\nabla\psi_1-\nabla\psi_2\|_{L^\infty([0,T];L^2(\mathbb{R}^2))}
\]
and
\[
\big\||G_q(\psi_1)||\nabla\psi_1-\nabla\psi_2|\big\|_{N^0([0,T]\times\mathbb{R}^2)} \leq T^{\frac{1}{q_1'}} Z_T(\psi_1)^{2\alpha}\|\nabla\psi_1-\nabla\psi_2\|_{L^\infty([0,T];L^2(\mathbb{R}^2))}.
\]
To bound the first term of the second line on the right-hand side of (5.3), one has
\[
\begin{aligned}
\big\||G_\infty(\psi_1)-G_\infty(\psi_2)||\nabla\psi_2|\big\|_{N^0([0,T]\times\mathbb{R}^2)}
&\leq C\big\| \big(|\psi_1|-|\psi_2|\big)|\nabla\psi_2| \big\|_{L^{4/3}([0,T];L^{4/3}(\mathbb{R}^2))} \\
&\leq T^{\frac12}\big\||\psi_1|-|\psi_2|\big\|_{L^\infty([0,T];L^2(\mathbb{R}^2))}\|\nabla\psi_2\|_{L^4([0,T];L^4(\mathbb{R}^2))} \\
&\leq T^{\frac12}\Big(1+T+T^{\frac{1}{q_1'}}Z_T(\psi_1)^{2\alpha}\Big)Z_T(\psi_2)\,\big\||\psi_1|-|\psi_2|\big\|_{L^\infty([0,T];L^2(\mathbb{R}^2))},
\end{aligned}
\]
where we used the Strichartz estimates (2.29), (2.25) and (3.4) in the last inequality. To bound the second term of the second line on the right-hand side of (5.3), we have
\[
\begin{aligned}
\big\||G_q(\psi_1)-G_q(\psi_2)||\nabla\psi_2|\big\|_{N^0([0,T]\times\mathbb{R}^2)}
&\leq C\big\| \big(1+|\psi_{1,q}|^{2\beta}+|\psi_{2,q}|^{2\beta}\big)\big||\psi_1|-|\psi_2|\big|\,|\nabla\psi_2| \big\|_{N^0([0,T]\times\mathbb{R}^2)} \\
&\leq \Big( T^{\frac12}\|\nabla\psi_2\|_{L^4 L^4} + T^{\frac13}\big\||\psi_{1,q}|^{2\beta}+|\psi_{2,q}|^{2\beta}\big\|_{L^\infty_t L^6_x}\|\nabla\psi_2\|_{L^3_t L^6_x} \Big)\big\||\psi_1|-|\psi_2|\big\|_{L^\infty_t L^2_x} \\
&\leq \Big( T^{\frac12} + T^{\frac13}\big(Z_T(\psi_1)^{2\beta}+Z_T(\psi_2)^{2\beta}\big) \Big)\Big(1+T+T^{\frac{1}{q_1'}}Z_T(\psi_1)^{2\alpha}\Big)Z_T(\psi_2)\cdot\big\||\psi_1|-|\psi_2|\big\|_{L^\infty_t L^2_x},
\end{aligned}
\]
where again we used the Strichartz estimates (2.29), (2.25) and (3.4) in the last inequality. Combining the above estimates, we obtain that there exists $T_1 = T_1(M) > 0$ sufficiently small so that
\[
d_E(\psi_1(t),\psi_2(t)) \leq C(1+M)\,d_E(\psi_0^1,\psi_0^2)
\]
for all $t \in [0,T_1]$. As $T_1$ only depends on $M$, one may hence iterate the procedure $N := \lceil T/T_1 \rceil$ times to cover the time interval $[0,T]$.
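Writing out the iteration in the last step makes the resulting Lipschitz constant explicit; the following bound is our paraphrase of the argument, not a display from the original.

```latex
d_E\big(\psi_1(t),\psi_2(t)\big) \le \big(C(1+M)\big)^{\lceil T/T_1 \rceil}\, d_E\big(\psi_0^1,\psi_0^2\big)
\qquad \text{for all } t \in [0,T],
```

since on each subinterval $[kT_1,(k+1)T_1]$ the local estimate is applied with the data at time $kT_1$ as new initial data, and the number of subintervals depends only on $M$ and $T$; the constant is therefore uniform on bounded sets of the energy space, as claimed in Theorem 1.7.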
This completes the case $d=2$.

Case 2: $d=3$. The proof for $d=3$ follows the same lines upon modifying the space-time norms so that the pairs of exponents are Strichartz admissible for $d=3$. In particular, one relies on the endpoint Strichartz estimate (2.29) to bound $\nabla\psi_2 \in L^2([0,T];L^6(\mathbb{R}^3))$. □

If the solutions are global, i.e. $T^*(\mathcal{O}_R) = +\infty$, then Theorem 1.7 extends to the following.

Corollary 5.1. Under the assumptions of Theorem 1.7, if in addition $f$ is such that (1.1) is globally well-posed, then for any $R > 0$, $T > 0$, there exists $C > 0$ such that for all $\psi_0^i \in E(\mathbb{R}^d)$, $i=1,2$, with $E(\psi_0^i) \leq R$, the respective unique solutions $\psi_i \in C(\mathbb{R};E(\mathbb{R}^d))$ satisfy (1.14).

Acknowledgements

L.E.H. acknowledges funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – SFB 1283/2 2021 – 317210226.

References

[1] P. Antonelli, L. E. Hientzsch, and P. Marcati, On the Cauchy problem for the QHD system with infinite mass and energy: applications to quantum vortex dynamics, in preparation.
[2] P. Antonelli, L. E. Hientzsch, and P. Marcati, On the low Mach number limit for quantum Navier-Stokes equations, SIAM J. Math. Anal., 52 (2020), pp. 6105–6139.
[3] P. Antonelli, L. E. Hientzsch, P. Marcati, and H. Zheng, On some results for quantum hydrodynamical models, in Mathematical Analysis in Fluid and Gas Dynamics, T. Kobayashi, ed., vol. 2070, RIMS Kôkyûroku, 2018, pp. 107–129.
[4] V. Banica and E. Miot, Global existence and collisions for symmetric configurations of nearly parallel vortex filaments, Ann. Inst. Henri Poincaré, Anal. Non Linéaire, 29 (2012), pp. 813–832.
[5] I. V. Barashenkov, A. D. Gocheva, V. G. Makhankov, and I. V. Puzynin, Stability of the soliton-like "bubbles", Physica D Nonlinear Phenomena, 34 (1989), pp. 240–254.
[6] N. G.
Berloff, Quantised vortices, travelling coherent structures and superfluid turbulence, in Stationary and time dependent Gross-Pitaevskii equations, vol. 473 of Contemp. Math., Amer. Math. Soc., Providence, RI, 2008, pp. 27–54.
[7] F. Béthuel, P. Gravejat, and J.-C. Saut, Travelling waves for the Gross-Pitaevskii equation. II, Comm. Math. Phys., 285 (2009), pp. 567–651.
[8] F. Bethuel, G. Orlandi, and D. Smets, Vortex rings for the Gross-Pitaevskii equation, J. Eur. Math. Soc. (JEMS), 6 (2004), pp. 17–94.
[9] F. Bethuel and J.-C. Saut, Travelling waves for the Gross-Pitaevskii equation. I, Ann. Inst. H. Poincaré Phys. Théor., 70 (1999), pp. 147–238.
[10] F. Bethuel and D. Smets, A remark on the Cauchy problem for the 2D Gross-Pitaevskii equation with nonzero degree at infinity, Differential Integral Equations, 20 (2007), pp. 325–338.
[11] R. Carles and G. Ferriere, Logarithmic Gross-Pitaevskii equation, arXiv preprint arXiv:2209.14621, (2022).
[12] R. Carles and C. Sparber, On an intercritical log-modified nonlinear Schrödinger equation in two spatial dimensions, to appear in Proc. Am. Math. Soc., (2022).
[13] T. Cazenave, Semilinear Schrödinger equations, vol. 10 of Courant Lecture Notes in Mathematics, New York University, Courant Institute of Mathematical Sciences, New York; American Mathematical Society, Providence, RI, 2003.
[14] D. Chiron, Travelling waves for the Gross-Pitaevskii equation in dimension larger than two, Nonlinear Anal., 58 (2004), pp. 175–204.
[15] ———, Stability and instability for subsonic traveling waves of the nonlinear Schrödinger equation in dimension one, Anal. PDE, 6 (2013), pp. 1327–1420.
[16] D. Chiron and M. Mariş, Traveling waves for nonlinear Schrödinger equations with nonzero conditions at infinity, Arch. Ration. Mech. Anal., 226 (2017), pp. 143–242.
[17] J. Colliander, M. Keel, G. Staffilani, H. Takaoka, and T.
Tao, Global well-posedness and scattering for the energy-critical nonlinear Schrödinger equation in R^3, Ann. of Math. (2), 167 (2008), pp. 767–865.
[18] A. De Bouard, Instability of stationary bubbles, SIAM J. Math. Anal., 26 (1995), pp. 566–582.
[19] A. de Laire, Non-existence for travelling waves with small energy for the Gross-Pitaevskii equation in dimension N ≥ 3, C. R. Math. Acad. Sci. Paris, 347 (2009), pp. 375–380.
[20] C. Gallo, Schrödinger group on Zhidkov spaces, Adv. Differential Equations, 9 (2004), pp. 509–538.
[21] ———, The Cauchy problem for defocusing nonlinear Schrödinger equations with non-vanishing initial data at infinity, Comm. Partial Differential Equations, 33 (2008), pp. 729–771.
[22] P. Gérard, The Cauchy problem for the Gross-Pitaevskii equation, Ann. Inst. H. Poincaré Anal. Non Linéaire, 23 (2006), pp. 765–779.
[23] P. Gérard, The Gross-Pitaevskii equation in the energy space, in Stationary and time dependent Gross-Pitaevskii equations, vol. 473 of Contemp. Math., Amer. Math. Soc., Providence, RI, 2008, pp. 129–148.
[24] N. Gialelis and I. G. Stratis, Nonvanishing at spatial extremity solutions of the defocusing nonlinear Schrödinger equation, Math. Methods Appl. Sci., 42 (2019), pp. 4939–4956.
[25] J. Ginibre and G. Velo, Scattering theory in the energy space for a class of nonlinear Schrödinger equations, J. Math. Pures Appl. (9), 64 (1985), pp. 363–401.
[26] V. L. Ginzburg and L. P. Pitaevskiĭ, On the theory of superfluidity, Soviet Physics. JETP, 34 (7) (1958), pp. 858–861 (1240–1245 Ž. Eksper. Teoret. Fiz.).
[27] J. Grant and P. H. Roberts, Motions in a Bose condensate. III. The structure and effective masses of charged and uncharged impurities, J. Phys. A: Math. Nucl. Gen., 7 (1974), pp. 260–279.
[28] P. Gravejat, A non-existence result for supersonic travelling waves in the Gross-Pitaevskii equation, Comm. Math.
Phys., 243 (2003), pp. 93–103.
[29] P. Gravejat, E. Pacherie, and D. Smets, On the stability of the Ginzburg-Landau vortex, Proc. Lond. Math. Soc. (3), 125 (2022), pp. 1015–1065.
[30] E. P. Gross, Hydrodynamics of a superfluid condensate, J. Math. Phys., 4 (1963), pp. 195–207.
[31] Z. Guo, Z. Hani, and K. Nakanishi, Scattering for the 3D Gross-Pitaevskii equation, Comm. Math. Phys., 359 (2018), pp. 265–295.
[32] S. Gustafson, K. Nakanishi, and T.-P. Tsai, Scattering for the Gross-Pitaevskii equation, Math. Res. Lett., 13 (2006), pp. 273–285.
[33] ———, Global dispersive solutions for the Gross-Pitaevskii equation in two and three dimensions, Ann. Henri Poincaré, 8 (2007), pp. 1303–1331.
[34] ———, Scattering theory for the Gross-Pitaevskii equation in three dimensions, Commun. Contemp. Math., 11 (2009), pp. 657–707.
[35] L. E. Hientzsch, Nonlinear Schrödinger equations and quantum fluids non vanishing at infinity: incompressible limit and quantum vortices, PhD thesis, Gran Sasso Science Institute, 2019.
[36] ———, On the low Mach number limit for 2D Navier-Stokes-Korteweg systems, Mathematics in Engineering, 5 (2023), pp. 1–26.
[37] L. Hörmander, The analysis of linear partial differential operators. I, Classics in Mathematics, Springer-Verlag, Berlin, 2003. Distribution theory and Fourier analysis, Reprint of the second (1990) edition [Springer, Berlin; MR1065993 (91m:35001a)].
[38] T. Kato, On nonlinear Schrödinger equations, Ann. Inst. H. Poincaré Phys. Théor., 46 (1987), pp. 113–129.
[39] ———, Nonlinear Schrödinger equations, in Schrödinger operators (Sønderborg, 1988), vol. 345 of Lecture Notes in Phys., Springer, Berlin, 1989, pp. 218–263.
[40] M. Keel and T. Tao, Endpoint Strichartz estimates, Amer. J. Math., 120 (1998), pp. 955–980.
[41] R. Killip, J. Murphy, and M. Visan, The final-state problem for the cubic-quintic NLS with nonvanishing boundary conditions, Anal. PDE, 9 (2016), pp. 1523–1574.
[42] ——, The initial-value problem for the cubic-quintic NLS with nonvanishing boundary conditions, SIAM J. Math. Anal., 50 (2018), pp. 2681–2739.
[43] R. Killip, T. Oh, O. Pocovnicu, and M. Vişan, Global well-posedness of the Gross-Pitaevskii and cubic-quintic nonlinear Schrödinger equations with non-vanishing boundary conditions, Math. Res. Lett., 19 (2012), pp. 969–986.
[44] Y. S. Kivshar, D. Anderson, and M. Lisak, Modulational instabilities and dark solitons in a generalized nonlinear Schrödinger equation, Physica Scripta, 47 (1993), p. 679.
[45] Y. S. Kivshar and B. Luther-Davies, Dark optical solitons: physics and applications, Phys. Rep., 298 (1998), pp. 81–97.
[46] R. Klein, A. J. Majda, and K. Damodaran, Simplified equations for the interaction of nearly parallel vortex filaments, J. Fluid Mech., 288 (1995), pp. 201–248.
[47] H. Koch and X. Liao, Conserved energies for the one dimensional Gross-Pitaevskii equation, Adv. Math., 377 (2021), Paper No. 107467, 83 pp.
[48] E. A. Kuznetsov and J. J. Rasmussen, Instability of two-dimensional solitons and vortices in defocusing media, Phys. Rev. E, 51 (1995), pp. 4479–4484.
[49] Z. Lin, Z. Wang, and C. Zeng, Stability of traveling waves of nonlinear Schrödinger equation with nonzero condition at infinity, Arch. Ration. Mech. Anal., 222 (2016), pp. 143–212.
[50] M. Mariş, Nonexistence of supersonic traveling waves for nonlinear Schrödinger equations with nonzero conditions at infinity, SIAM J. Math. Anal., 40 (2008), pp. 1076–1103.
48 P. ANTONELLI, L.E. HIENTZSCH, AND P. MARCATI
[51] ——, Traveling waves for nonlinear Schrödinger equations with nonzero conditions at infinity, Ann. of Math. (2), 178 (2013), pp. 107–182.
[52] H. Miyazaki, The derivation of the conservation law for defocusing nonlinear Schrödinger equations with non-vanishing initial data at infinity, J. Math. Anal. Appl., 417 (2014), pp. 580–600.
[53] H. Pecher, Unconditional global well-posedness for the 3D Gross-Pitaevskii equation for data without finite energy, NoDEA Nonlinear Differential Equations Appl., 20 (2013), pp. 1851–1877.
[54] D. E. Pelinovsky, Y. A. Stepanyants, and Y. S. Kivshar, Self-focusing of plane dark solitons in nonlinear defocusing media, Phys. Rev. E (3), 51 (1995), pp. 5016–5026.
[55] L. Pitaevskii, Vortex lines in an imperfect Bose gas, Sov. Phys. JETP, 13 (1961), pp. 451–454.
[56] L. Pitaevskii and S. Stringari, Bose-Einstein condensation and superfluidity, vol. 164 of Int. Ser. Monogr. Phys., Oxford: Oxford University Press, 2016.
[57] C. Sulem and P.-L. Sulem, The nonlinear Schrödinger equation, vol. 139 of Applied Mathematical Sciences, Springer-Verlag, New York, 1999. Self-focusing and wave collapse.
[58] T. Tao and M. Visan, Stability of energy-critical nonlinear Schrödinger equations in high dimensions, Electron. J. Differential Equations, (2005), No. 118, 28 pp.
[59] T. Tao, M. Visan, and X. Zhang, The nonlinear Schrödinger equation with combined power-type nonlinearities, Comm. Partial Differential Equations, 32 (2007), pp. 1281–1343.
[60] M. I. Weinstein and J. Xin, Dynamic stability of vortex solutions of Ginzburg-Landau and nonlinear Schrödinger equations, Comm. Math. Phys., 180 (1996), pp. 389–428.
[61] P. E. Zhidkov, The Cauchy Problem for the nonlinear Schrödinger equation, Communications of the Joint Institute for Nuclear Research, Dubna, R5-87-373, Joint Inst. Nuclear Res., Dubna, 1987. With an English summary.
[62] P. E. Zhidkov, On the solvability of Cauchy problem and stability of some solutions to the nonlinear Schrödinger equation, Mat. Model., 1 (1989), pp. 155–160.
[63] P. E. Zhidkov, Korteweg-de Vries and nonlinear Schrödinger equations: qualitative theory, vol. 1756 of Lecture Notes in Mathematics, Springer-Verlag, Berlin, 2001.
Gran Sasso Science Institute, viale Francesco Crispi, 7, 67100 L'Aquila, Italy
Email address: paolo.antonelli@gssi.it

Universität Bielefeld, Fakultät für Mathematik, Postfach 10 01 31, 33501 Bielefeld, Germany
Email address: lhientzsch@math.uni-bielefeld.de

Gran Sasso Science Institute, viale Francesco Crispi, 7, 67100 L'Aquila, Italy
Email address: pierangelo.marcati@gssi.it

diff --git a/5tAyT4oBgHgl3EQf2fk_/content/tmp_files/load_file.txt b/5tAyT4oBgHgl3EQf2fk_/content/tmp_files/load_file.txt
new file mode 100644
index 0000000000000000000000000000000000000000..217926f29750807cae19cdf0ae2dd3a3d54b062c
--- /dev/null
+++ b/5tAyT4oBgHgl3EQf2fk_/content/tmp_files/load_file.txt
@@ -0,0 +1,2359 @@

arXiv:2301.00751v1 [math.AP] 2 Jan 2023

FINITE ENERGY WELL-POSEDNESS FOR NONLINEAR SCHRÖDINGER EQUATIONS WITH NON-VANISHING CONDITIONS AT INFINITY

PAOLO ANTONELLI, LARS ERIC HIENTZSCH, AND PIERANGELO MARCATI

Abstract. The Cauchy problem for 2D and 3D nonlinear Schrödinger equations with non-vanishing conditions at infinity is investigated. Local well-posedness in the energy space for energy-subcritical nonlinearities merely satisfying Kato-type assumptions is proven.
Our result provides the analogue of the well-established local H^1-theory for solutions vanishing at infinity; no further regularity assumptions on the nonlinearity are given. Global well-posedness is shown for defocusing nonlinearities provided that the nonlinear potential is non-negative. In addition, we also introduce global well-posedness in 3D for a class of nonlinearities for which the Hamiltonian energy fails to be sign-definite, such as, e.g., competing focusing-defocusing nonlinearities.

1. Introduction

This paper is devoted to the study of the Cauchy theory for a class of nonlinear Schrödinger equations posed on R^d with d = 2, 3, namely

(1.1) i∂tψ = −(1/2)∆ψ + f(|ψ|^2)ψ,

equipped with non-trivial boundary conditions at infinity, i.e.

(1.2) |ψ(x)|^2 → ρ0 as |x| → ∞,

and where the nonlinearity satisfies f(ρ0) = 0 together with Assumption 1.1 of Kato type stated below. The Hamiltonian (coinciding with the total energy in many relevant physical contexts) associated to (1.1) reads

(1.3) H(ψ) = ∫_{R^d} (1/2)|∇ψ|^2 + F(|ψ|^2) dx, with F(ρ) = ∫_{ρ0}^{ρ} f(r) dr.

The finite energy assumption encodes far-field behavior, the study of which is motivated by a variety of physical applications. The aim of this paper is to provide a well-posedness theory for (1.1) with energy-subcritical nonlinearities f, under Kato-type [38] regularity assumptions, in a suitable energy space incorporating the far-field condition (1.2). Regarding the 3D energy-critical problem, we show that global well-posedness is easily achieved relying on the existing literature [43, 17, 59] combined with our analysis. Without loss of generality, we assume (1.2) to hold for ρ0 = 1, as the general case reduces to the former by a suitable scaling of ψ.

Date: January 3, 2023.
2020 Mathematics Subject Classification. Primary: 35Q55; Secondary: 35B30, 37L50.
Key words and phrases. nonlinear Schrödinger equation, Gross-Pitaevskii, well-posedness, non-vanishing conditions at infinity.
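The reduction to ρ0 = 1 invoked above can be made explicit; the following sketch is ours (the text does not spell it out) and fixes one admissible choice of scaling.

```latex
% Sketch (ours): a scaling reducing the far-field value rho_0 to 1.
% Given a solution psi of (1.1) with |psi|^2 -> rho_0 at infinity, set
\psi(x,t) = \sqrt{\rho_0}\,\varphi\!\left(\sqrt{\rho_0}\,x,\; \rho_0 t\right).
% Substituting into (1.1) and dividing by rho_0^{3/2} yields
i\partial_\tau \varphi = -\tfrac{1}{2}\Delta\varphi
  + \tilde{f}\big(|\varphi|^2\big)\varphi,
\qquad \tilde{f}(\sigma) := \rho_0^{-1} f(\rho_0\sigma),
% so that |varphi|^2 -> 1 at infinity, tilde f(1) = rho_0^{-1} f(rho_0) = 0,
% and tilde f inherits the Kato-type bounds (K1)-(K2) from f with the
% same exponent alpha.
```

In particular, all assumptions stated below are invariant under this rescaling, so no generality is lost.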
The most prominent example for (1.1), with far-field (1.2), is the Gross-Pitaevskii (GP) equation, for which f(ρ) = ρ − 1. With this choice for f, the system (1.1) arises in the description of Bose-Einstein condensates (BEC) [30, 55, 27, 56], as a model for superfluidity in Helium II close to the λ-point [26, 55] and for quantum vortices [6, 55]. Beyond that, system (1.1) with non-trivial far-field and general nonlinearities f is investigated in the theory of BEC, superconductivity and nonlinear optics. For instance, competing (focusing-defocusing) nonlinearities, see e.g. (1.16), as well as saturating or exponential nonlinearities emerge as models in nonlinear optics [5, 45, 48, 54]. Further physically relevant models are listed in Example 1.8 below.

The mathematical analysis of (1.1), with far-field behavior (1.2), differs significantly from the usual H^1-theory for NLS equations with trivial far-field, due to the non-integrability of the finite energy wave-functions, which may exhibit non-trivial oscillations at spatial infinity, in particular for d = 2.

Opposite to the defocusing nonlinear Schrödinger equation with vanishing conditions at infinity, for which scattering is known [25], system (1.1) with defocusing nonlinearity and equipped with (1.2) admits a large variety of special solutions. Concerning GP, the existence of sub-sonic traveling waves is known for d = 2 [9, 7] and d = 3 [9, 8, 14]. Non-existence in the super-sonic regime is proven in [28]. While traveling waves exist for arbitrarily small energy for d = 2, non-existence of traveling waves with small energy for d = 3 is due to [7], see also [19] for d ≥ 3. For general defocusing nonlinearities, including the nonlinearities considered in Assumption 1.2 below, the existence of sub-sonic traveling waves is introduced in [51, 16]. Non-existence in the super-sonic regime is shown in [50]. For d = 2, traveling waves exist for any, and in particular arbitrarily small, energy, ruling out scattering, while for d = 3 there is an energy threshold below which no traveling waves exist. The stability of multi-dimensional traveling waves is addressed in [15, 49], stationary bubbles and their stability in [18]. The GP equation admits vortex solutions of infinite energy, see [55, 10], and [60, 29] as well as references therein for stability properties. Regarding large time behavior, the existence of global dispersive solutions and small data scattering for the 3D and 4D GP equation has been investigated in a series of papers [32, 33, 34, 31]. In [41, 42], the authors consider the final state problem for the 3D defocusing cubic-quintic equation, which is energy critical. For general nonlinearities f, the respective problems remain open.

To give a short overview of previous well-posedness results, we mention that local existence of solutions to the GP equation in Zhidkov spaces has been introduced in [61] for d = 1, see also [63], and [20] for the multi-dimensional case. While the energy space for GP for d = 1 coincides with the set of functions in the Zhidkov space such that |ψ|^2 − 1 ∈ L^2(R), this identification does not hold true in the multi-dimensional case, see [22] and Section 2 below. In [9], the authors show that the GP equation is well-posed in 1 + H^1(R^d) for d = 2, 3. Global well-posedness in 1 + H^s(R^3) with s ∈ (5/6, 1) is proven in [53]. However, the space 1 + H^1(R^d) is strictly smaller than E(R^d). There exist traveling waves for the GP eq. in the energy space that do not belong to 1 + L^2(R^d), see [28].
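That finite-energy wave functions need not be L^2-perturbations of a constant can be seen on a simple phase example; the following illustration is ours, not taken from the references above.

```latex
% A hypothetical illustration (ours) of a finite-energy function in d = 3
% that is not a 1 + L^2 perturbation of the constant state. Set
\psi(x) = e^{i\varphi(x)}, \qquad \varphi(x) = (1+|x|^2)^{-1/2}.
% Then |psi| = 1, so ||psi| - 1| = 0, while
|\nabla\psi| = |\nabla\varphi| \le (1+|x|^2)^{-1} \in L^2(\mathbb{R}^3),
% hence the energy E(psi) is finite. On the other hand,
|\psi(x)-1| = 2\left|\sin\tfrac{\varphi(x)}{2}\right|
  \gtrsim (1+|x|^2)^{-1/2} \notin L^2(\mathbb{R}^3),
% so psi - 1 is not in L^2(R^3); in particular psi does not lie in 1 + H^1(R^3).
```

The slowly decaying phase mimics the oscillations at spatial infinity discussed above.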
Global well-posedness in the energy space for the multi-dimensional GP eq. has been introduced in the seminal paper [22]. One of the major novelties of [22] consists in the precise characterization of the energy space as a complete metric space and the action of the free propagator on the energy space. A more general class of defocusing and energy-subcritical C^3-nonlinearities has been considered in [21], with subsequent improvement to C^2-nonlinearities [52]. In [21, 52], the authors crucially rely on a smooth decomposition of wave-functions in the energy space. The authors show global well-posedness in affine spaces determined by this decomposition, which requires the aforementioned regularity assumptions and precise growth conditions for f. The result in the affine spaces then implies well-posedness in the energy space.

Our purpose is to prove local well-posedness assuming merely Kato-type regularity assumptions [38], under which local well-posedness is also known for (1.1) with vanishing conditions at infinity, see e.g. the monograph [13, Chapter 4]. We complement the local analysis by global results under suitable additional assumptions on the nonlinearity. In the paper [11], the authors prove global existence of unique mild solutions to (1.1) with a logarithmic nonlinearity. Let us point out that our well-posedness result will also be useful in the study of a class of quantum hydrodynamic (QHD) systems with non-trivial far-field [1], see also [3, 35] for some previous results in this direction.
The analysis of the Cauchy problem for QHD systems with non-zero conditions at infinity is pivotal to initiate a rigorous study of some relevant physical phenomena described by quantum fluid models, see for instance [6, 27].

1.1. Assumptions and Main results. Our main assumptions on the nonlinearity f are the following.

Assumption 1.1. Let f be a real-valued function satisfying the following Kato-type assumptions, namely
(K1) f ∈ C([0, ∞)) ∩ C^1((0, ∞)) such that f(1) = 0,
(K2) the nonlinearity is energy-subcritical, namely there exists α > 0, with α < ∞ for d = 2 and α < 2 for d = 3, such that |f(ρ)|, |ρf'(ρ)| ≤ C(1 + ρ^α) for all ρ ≥ 0.

The assumptions (K1), (K2) are commonly referred to as Kato-type assumptions, see [38, 39] and also [13, Chapter 4]. For trivial far-field behavior, namely integrable wave-functions ψ, these assumptions correspond to the state of the art for the H^1-well-posedness for energy-subcritical nonlinearities f, see [13] and references therein for a detailed overview of the theory.

In order to infer global results, we also require the nonlinearity to be defocusing in the following sense.

Assumption 1.2. Let f be as in Assumption 1.1. Moreover, assume f'(1) > 0.

Assuming the nonlinearity f to be defocusing yields that F achieves a local minimum for the constant solution |ψ|^2 = 1. In nonlinear optics, this assumption is made in the physical literature in order to ensure modulational stability of the constant equilibrium solution, i.e. the continuous wave background [44, 54]. Due to the non-trivial far-field behavior, inferring global results turns out to be more intricate than in the respective integrable case, which leads to additional assumptions, see Theorem 1.5 and Theorem 1.6 below. The energy-subcritical power-type nonlinearities constitute an example of nonlinearities that satisfy Assumption 1.1 but are in general not covered by [21, 22, 52].
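As a sanity check on Assumptions 1.1 and 1.2, one can verify them directly for the Gross-Pitaevskii nonlinearity; the computation below is ours.

```latex
% Verification (ours) that the GP nonlinearity f(rho) = rho - 1 satisfies
% Assumptions 1.1 and 1.2 with alpha = 1.
% (K1): f is continuous on [0,\infty), C^1 on (0,\infty), and
f(1) = 1 - 1 = 0.
% (K2): with C = 1 and alpha = 1 (admissible for d = 2, 3 since 1 < 2),
|f(\rho)| = |\rho - 1| \le 1 + \rho, \qquad
|\rho f'(\rho)| = \rho \le 1 + \rho .
% Assumption 1.2 (defocusing):
f'(1) = 1 > 0 .
```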
Example 1.3. The energy-subcritical power-type nonlinearities read
$$ (1.4)\quad f(|\psi|^2) = \lambda\left(|\psi|^{2\alpha} - 1\right), \qquad \lambda = \pm 1, \quad \begin{cases} \alpha > 0, & d = 2, \\ 0 < \alpha < 2, & d = 3. \end{cases} $$
These nonlinearities, while included in Assumption 1.1, merely satisfy $f \in C^{0,\alpha}([0,\infty))$. Previous results require $\lambda = +1$ and $\alpha = 1$ [22], $f \in C^3([0,\infty))$ [21], or $f \in C^2([0,\infty))$ [52]. The corresponding nonlinear potential reads
$$ F(|\psi|^2) = \int_1^{|\psi|^2} f(r)\,dr = \frac{\lambda}{\alpha+1}\left(|\psi|^{2(\alpha+1)} - 1 - (\alpha+1)\left(|\psi|^2 - 1\right)\right). $$
For $\lambda = 1$, we note that $F : [0,\infty) \to \mathbb{R}$ is non-negative, convex, and attains its global minimum at $|\psi|^2 = 1$. For $\lambda = \alpha = 1$, system (1.1) with nonlinearity (1.4) corresponds to the GP-equation
$$ (1.5)\quad i\partial_t \psi = -\tfrac{1}{2}\Delta\psi + \left(|\psi|^2 - 1\right)\psi, $$
for which the associated Hamiltonian energy $H(\psi)$ becomes the well-known Ginzburg-Landau energy functional
$$ (1.6)\quad E_{GL}(\psi) := H(\psi) = \int_{\mathbb{R}^d} \frac{1}{2}|\nabla\psi|^2 + \frac{1}{2}\left(|\psi|^2 - 1\right)^2 dx. $$
Global well-posedness of (1.5) in the energy space has been established in [22]. More precisely, equation (1.5) is studied in [22] in the space of states where the associated Hamiltonian is finite, namely
$$ (1.7)\quad E_{GL} = \{\psi \in L^1_{loc}(\mathbb{R}^d) : H(\psi) < +\infty\} = \{\psi \in L^1_{loc}(\mathbb{R}^d) : \nabla\psi \in L^2(\mathbb{R}^d),\ |\psi|^2 - 1 \in L^2(\mathbb{R}^d)\}. $$
In the present paper, we define the energy space in the spirit of [62, 63, 16] as
$$ (1.8)\quad E(\mathbb{R}^d) = \{\psi \in L^1_{loc}(\mathbb{R}^d) : E(\psi) < \infty\} \quad \text{with} \quad (1.9)\quad E(\psi) = \int_{\mathbb{R}^d} |\nabla\psi|^2 + \left||\psi| - 1\right|^2 dx. $$
It is straightforward to see that $E \subset E_{GL}$. However, as will become clear later (see Lemmata 2.6 and 2.8), the two spaces $E$ and $E_{GL}$ turn out to be equivalent. Working in $E$ rather than $E_{GL}$ is more convenient in several respects when dealing with a general class of nonlinearities $f$ satisfying Assumption 1.1. Wave functions in $E(\mathbb{R}^d)$ may exhibit oscillations at spatial infinity due to the non-vanishing far-field behavior, especially for $d = 2$. Since $\psi \notin L^p(\mathbb{R}^d)$ for any $p \geq 1$, the mass is infinite.
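As a consistency check on Example 1.3 (a direct computation from the displayed formulas, not part of the original text), the potential $F$ reduces to the Ginzburg-Landau density in the GP case:

```latex
% Specializing F from Example 1.3 to \lambda = \alpha = 1, with \rho = |\psi|^2:
F(\rho) = \frac{1}{2}\left(\rho^{2} - 1 - 2(\rho - 1)\right)
        = \frac{1}{2}\left(\rho - 1\right)^{2},
% so the potential term contributes \frac{1}{2}(|\psi|^2 - 1)^2 to the
% Hamiltonian, recovering the Ginzburg-Landau functional E_{GL} in (1.6).
```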
As its properties are central to the well-posedness theory, a detailed analysis of $E(\mathbb{R}^d)$ is provided in Section 2. At this stage, we only mention that $E(\mathbb{R}^d) \subset \{H(\psi) < +\infty\}$ and that $E(\mathbb{R}^d) \subset X^1(\mathbb{R}^d) + H^1(\mathbb{R}^d)$, where $X^1$ denotes the Zhidkov space [61, 63] defined by
$$ (1.10)\quad X^1(\mathbb{R}^d) = \{\psi \in L^\infty(\mathbb{R}^d) : \nabla\psi \in L^2(\mathbb{R}^d)\}. $$

WELL-POSEDNESS FOR NLS WITH NON-VANISHING CONDITIONS AT INFINITY 5

While $E$ is not a vector space, we notice that
$$ (1.11)\quad d_E(\psi_1, \psi_2) = \|\psi_1 - \psi_2\|_{X^1+H^1} + \||\psi_1| - |\psi_2|\|_{L^2} $$
defines a metric on $E$, and $(E, d_E)$ is a complete metric space. We recall that for a sum of Banach spaces, the norm is defined by
$$ \|\psi\|_{X^1+H^1} = \inf\left\{\|\psi_1\|_{X^1} + \|\psi_2\|_{H^1} : \psi = \psi_1 + \psi_2\right\}. $$
Our first main result establishes local well-posedness for (1.1) in the energy space $E$. It suffices to consider positive existence times.
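To make the metric concrete, here is a minimal illustration of our own (using the standard Zhidkov norm $\|\psi\|_{X^1} = \|\psi\|_{L^\infty} + \|\nabla\psi\|_{L^2}$, which the excerpt does not spell out): the $d_E$-distance between two constant states of modulus one.

```latex
% Take \psi_1 \equiv 1 and \psi_2 \equiv e^{i\theta}, \theta \in (0, 2\pi).
% Both lie in E(\mathbb{R}^d), since E(\psi_j) = 0. Their difference is the
% constant 1 - e^{i\theta} \in X^1; choosing the decomposition
% (1 - e^{i\theta}) + 0 in X^1 + H^1 and noting |\psi_1| = |\psi_2| = 1 gives
d_E(\psi_1, \psi_2) \leq \left\|1 - e^{i\theta}\right\|_{L^\infty}
                      + \left\||\psi_1| - |\psi_2|\right\|_{L^2}
                      = \left|1 - e^{i\theta}\right|
                      = 2\left|\sin(\theta/2)\right|.
% Constant phases at infinity are thus at finite d_E-distance, even though
% neither \psi_1 nor \psi_2 belongs to any L^p(\mathbb{R}^d), p \geq 1.
```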
Local existence for negative times follows from the time-reversal symmetry of (1.1).

Theorem 1.4. Let $d = 2, 3$ and let $f$ be such that Assumption 1.1 is satisfied. Then (1.1) is locally well-posed in the energy space $E(\mathbb{R}^d)$. More precisely,
(1) for any $\psi_0 \in E(\mathbb{R}^d)$, there exists a maximal time of existence $T^* > 0$ and a unique solution $\psi \in C([0, T^*); E(\mathbb{R}^d))$ with initial data $\psi(0) = \psi_0$. The blow-up alternative holds.
Namely, either $T^* = \infty$ or
$$ (1.12)\quad \lim_{t \nearrow T^*} E(\psi)(t) = +\infty; $$
(2) $\psi - \psi_0 \in C([0, T^*); H^1(\mathbb{R}^d))$;
(3) the solution depends continuously on the initial data with respect to the topology induced by the metric $d_E$;
(4) it holds $H(\psi)(t) = H(\psi_0)$ for all $t \in [0, T^*)$;
(5) if in addition $\Delta\psi_0 \in L^2(\mathbb{R}^d)$, then $\Delta\psi \in C([0, T^*); L^2(\mathbb{R}^d))$.

Note that (2) of Theorem 1.4 states that $\psi$ and $\psi_0$ share the same far-field behavior, i.e., they belong to the same connected component of $E(\mathbb{R}^d)$ for all $t \in [0, T^*)$; see Remarks 2.3 and 2.4. Moreover, it can be shown that the nonlinear flow $\psi - e^{\frac{i}{2}t\Delta}\psi_0$ belongs to the full range of Strichartz spaces; see Propositions 3.2 and 4.1 for $d = 2, 3$ respectively. The precise notion of continuous dependence on the initial data is given in Propositions 3.2 and 4.1. The topological structure of the metric space $(E(\mathbb{R}^d), d_E)$ differs for $d = 2$ and $d = 3$, see [22, 23].
For $d = 3$, the energy space $E(\mathbb{R}^3)$ has an affine structure: if $\psi \in E(\mathbb{R}^3)$, then $\psi = c + v$ for some $c \in \mathbb{S}^1$, $v \in \dot{H}^1(\mathbb{R}^3)$. For $d = 2$, unbounded phase oscillations may occur at spatial infinity that rule out characterizing the connected components by a constant $c \in \mathbb{S}^1$. The space $(E(\mathbb{R}^2), d_E)$ is not separable. Given its relevance for the well-posedness theory, this question is addressed in detail in Section 2. In particular, one may introduce a weaker topology that restores separability and connectedness. Note that this affine structure of the energy space is also available in higher dimensions $d \geq 4$, to which our approach adapts. As $E(\mathbb{R}) \subset X^1(\mathbb{R})$, the local well-posedness theory simplifies for $d = 1$. Previous results [20, 21, 24] do not cover the full generality of Assumption 1.1. We expect our approach to extend to $d = 1$. Assumption 1.1 is not sufficient to prove that the solution map is Lipschitz continuous. This is analogous to the case of NLS equations (1.1) with vanishing far-field behavior. Indeed, for instance, for power-law type nonlinearities (1.4), Lipschitz continuity of the solution map can only be expected if $\alpha \geq \frac{1}{2}$, for both vanishing and non-vanishing far-field; see [13, Remark 4.4.5] and Section 5, respectively.
Note that while in the former case continuity is understood with respect to the $H^1$-topology, the latter is stated with respect to the topology on $E$ induced by the metric $d_E$. We identify suitable additional assumptions that allow us to prove Lipschitz continuity of the solution map; see Theorem 1.7. The conservation of the Hamiltonian energy $H$ turns out to be insufficient to show global well-posedness. Two main difficulties occur.

6 P. ANTONELLI, L.E. HIENTZSCH, AND P. MARCATI

First, we may not rely on the conservation of mass, which is infinite.
No suitable notion of a "renormalized" mass being conserved seems to be available. Second, the Hamiltonian $H$ is not sign-definite. In the case of trivial far-field, one relies on the conservation of mass and of the Hamiltonian energy, provided it has a sign, to infer global existence. For sign-indefinite Hamiltonian energies, the respective $H^1$-theory for (1.1) also fails in general to provide global existence results without further assumptions. Blow-up occurs, for instance, for certain focusing nonlinearities, see e.g. [13]. In the framework of non-trivial far-field, without further assumptions on $f$ we lack both conservation of mass and a sign-definite Hamiltonian energy.
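A concrete instance of this sign-indefiniteness (our observation, specializing Example 1.3, and assuming the Hamiltonian density is kinetic energy plus $F$, consistent with (1.6)):

```latex
% From Example 1.3, the bracket \rho^{\alpha+1} - 1 - (\alpha+1)(\rho - 1)
% is non-negative by convexity of \rho \mapsto \rho^{\alpha+1}, hence
\lambda = -1 \ \Longrightarrow\ F(\rho) \leq 0 \quad \text{for all } \rho \geq 0,
% and in
H(\psi) = \int_{\mathbb{R}^d} \frac{1}{2}\,|\nabla \psi|^2 + F\!\left(|\psi|^2\right) dx
% the two terms compete, so H carries no definite sign in the focusing case.
```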
A sufficient condition allowing for a control of $E(\psi)$ in terms of $H(\psi)$ consists in assuming the nonlinear potential $F$ to be non-negative.

Theorem 1.5. Let $d = 2, 3$. Let $f$ be such that Assumption 1.2 is satisfied and the nonlinear potential $F$ defined in (1.3) is non-negative, i.e. $F \geq 0$. Then (1.1) is globally well-posed in the energy space $E$.

While identifying the optimal assumptions on $f$ allowing for a global result goes beyond the scope of this work, we refer the reader to Section 2.2 for a discussion of possible generalizations. Note that the pure power-type nonlinearities (1.4) satisfy $F \geq 0$. Furthermore, we provide a global well-posedness result for $d = 3$ and a class of competing (focusing-defocusing) nonlinearities $f$ for which the nonlinear potential fails to be non-negative. Such models are of physical relevance, for instance in nonlinear optics, when self-focusing phenomena in a defocusing background are considered [5, 54].

Theorem 1.6. Let $d = 3$ and let $f$ be such that Assumptions 1.2 are satisfied and further be of the form $f(r) = a(r^{\alpha_1} - 1) + g(r)$ with $a > 0$, $0 < \alpha_1 < 2$, and where $g \in C^0([0,\infty)) \cap C^1(0,\infty)$ is such that $|g(\rho)|, |\rho g'(\rho)| \leq C(1 + \rho^{\alpha_2})$ for all $\rho \geq 0$ with $0 \leq \alpha_2 < \alpha_1$.
In addition, let $F$ be such that $F(\rho) > 0$ for all $\rho > 1$. Then (1.1) is globally well-posed in the energy space $E(\mathbb{R}^3)$.

The assumption on the roots of $F$ allows for physically relevant nonlinearities to be studied. It appears from the physics literature [5, 45, 54] that in relevant applications the largest root of $F$ corresponds to the far-field behavior $\rho_0 = 1$ and constitutes a local minimum of $F$, which is linked to the modulational stability of the continuous background wave [44, 54]. To obtain global existence, we rely on the aforementioned affine structure of the energy space $E(\mathbb{R}^3)$ and consider the quantity
$$ M(\psi) = H(\psi) + C_0 \left\||\psi|^2 - 1\right\|_{L^2}^2, $$
which we show to satisfy an exponential bound in time. As for $d = 2$ rapid and unbounded phase oscillations at spatial infinity may occur, the problem of global existence for (1.1) with defocusing nonlinearities of low regularity and non-sign-definite Hamiltonian energies appears to be much more intricate. To the best of our knowledge, the problem of global existence for (1.1) with (1.2) and focusing nonlinearities remains open. We complement our analysis by identifying a sufficient condition on $f$ in order to prove Lipschitz continuity of the solution map.

Theorem 1.7. Let $d = 2, 3$ and let $f$ satisfy Assumption 1.1. If in addition
$$ (1.13)\quad f \in C^1([0,\infty)) \cap C^2((0,\infty)), \qquad \left|\sqrt{\rho}\, f'(\rho)\right|,\ \left|\rho^{\frac{3}{2}} f''(\rho)\right| \leq C\left(1 + \rho^{\max\{0,\, \alpha - \frac{1}{2}\}}\right), $$
then the solution map is Lipschitz continuous on bounded sets of $E(\mathbb{R}^d)$. Namely, for any $r, R > 0$ and $\psi^*_0 \in E(\mathbb{R}^d)$ such that $E(\psi^*_0) \leq R$, let $\mathcal{O}_r := \{\psi_0 \in E(\mathbb{R}^d) : d_E(\psi_0, \psi^*_0) \leq r\}$. Then there exists $T^*(\mathcal{O}_r) > 0$ such that $\psi \in C([0, T^*); E(\mathbb{R}^d))$ for all initial data $\psi(0) = \psi_0 \in \mathcal{O}_r$. Moreover, for any $0 < T < T^*(\mathcal{O}_r)$ there exists $C > 0$ such that for any $\psi_1, \psi_2 \in C([0, T]; E(\mathbb{R}^d))$ with initial data $\psi_0^1, \psi_0^2 \in \mathcal{O}_r$ it holds
$$ (1.14)\quad \sup_{t \in [0,T]} d_E\left(\psi_1(t), \psi_2(t)\right) \leq C\, d_E\left(\psi_0^1, \psi_0^2\right). $$
Provided that the solutions are global, the Lipschitz continuity holds for arbitrary times; see Corollary 5.1.
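As a sanity check (our computation, not part of the original text), condition (1.13) applied to the power-type nonlinearity (1.4) recovers the threshold $\alpha \geq \frac{1}{2}$ mentioned above for Lipschitz continuity:

```latex
% For f(\rho) = \lambda(\rho^{\alpha} - 1) from (1.4):
f'(\rho)  = \lambda \alpha \rho^{\alpha - 1},
\qquad
f''(\rho) = \lambda \alpha (\alpha - 1) \rho^{\alpha - 2},
% hence
\left|\sqrt{\rho}\, f'(\rho)\right| = \alpha\, \rho^{\alpha - \frac{1}{2}},
\qquad
\left|\rho^{\frac{3}{2}} f''(\rho)\right| = \alpha\,|\alpha - 1|\, \rho^{\alpha - \frac{1}{2}}.
% Both quantities are O(\rho^{\alpha - 1/2}) as \rho \to \infty and remain
% bounded as \rho \to 0 precisely when \alpha \geq 1/2, matching the bound
% C(1 + \rho^{\max\{0, \alpha - 1/2\}}) in (1.13).
```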
We briefly sketch the main steps of our approach. First, we identify the suitable mathematical setting for our analysis, namely the energy space $E$; see (1.8). We crucially rely on the fact that $(E, d_E)$ is a complete metric space, as well as on the properties of the free propagator introduced in [22, 23]. The Hamiltonian $H$ is well-defined for functions in $E$. While wave functions in $d = 3$ can be decomposed as $\psi = c + v$ with $|c| = 1$, $c \in \mathbb{C}$ and $v \in \dot{H}^1(\mathbb{R}^3)$, for $d = 2$ the wave functions may exhibit unbounded oscillations of the phase at spatial infinity. This motivates treating the well-posedness problem separately for $d = 2, 3$. In both cases, we show local existence of a solution in the affine space $\psi = \psi_0 + H^1(\mathbb{R}^d)$ by a perturbative Kato-type argument; see [38] and also [13, Chapter 4].
Subsequently, uniqueness in $C([0, T]; E(\mathbb{R}^d))$ is proven. The fixed-point argument only provides continuous dependence with respect to perturbations in the space $\psi_0 + H^1(\mathbb{R}^d)$. The proof of continuous dependence on the initial data with respect to the topology induced by the metric $d_E$ requires additional estimates and differs in a substantial way from the $H^1$ well-posedness theory for NLS equations with vanishing conditions at infinity. This is due to the non-integrability of the wave functions and the intricate topological structure of the energy space, linked to the far-field behavior, including oscillations of the phase, and to the low regularity of the nonlinearity. Global well-posedness is shown relying on the conservation of the Hamiltonian $H$. While our method for the 3D theory exploits the particular structure of the energy space, the approach used for $d = 2$ can easily be adapted to sub-cubic nonlinearities for $d = 3$.
However, for super-cubic nonlinearities we exploit the affine structure of $E(\mathbb{R}^3)$. It is then no longer sufficient to work in $L^2$-based spaces as done for $d = 2$; instead, we need the gradient of the solution to belong to the full range of Strichartz spaces. Our approach enables us to weaken the regularity assumptions compared to previous papers. In [21, 52] the authors rely on a decomposition of the initial data as $\psi = \phi + H^1$ with $\phi \in C^\infty_b$ and develop a well-posedness theory in the affine space $\phi + H^1$. This approach requires additional regularity assumptions on $f$ that are not needed in ours. As will become clear from the proofs, our method adapts to prove well-posedness for energy-subcritical nonlinearities for $d \geq 4$. For the energy-critical quintic equation, one may proceed as described in Section 1.2.
P. ANTONELLI, L.E. HIENTZSCH, AND P. MARCATI

We conclude this section by providing further examples of physical relevance that enter the class of nonlinearities characterised by Assumption 1.1.

Example 1.8. Beyond the mentioned power-type nonlinearities, the following are examples of physically relevant nonlinearities and far-field (1.2):
(1) competing nonlinearities f(ρ) = aρ^{σ_1} − bρ^{σ_2} + c with a, b, c > 0 and σ_1 ≥ σ_2 ≥ 0, which arise in the description of self-focusing phenomena in defocusing media [48, 45, 54], see also [57, 63],
(2) saturated nonlinearities f(ρ) = ρ/(1 + γρ) − 1/(1 + γ) with γ > 0, see for instance [57, Chapter 9.3] and references therein,
(3) exponential nonlinearities f(ρ) = e^{−γ} − e^{−γρ} with γ > 0 [57, Chapter 9.3],
(4) transiting nonlinearities of the form f(ρ) = 2ρ(1 + α tanh(γ(ρ^2 − 1))) occurring in nonlinear optics [54, Section VI],
(5) logarithmic nonlinearities of type f(ρ) = ρ log(ρ), which arise in the context of dilute quantum gases, see [12] and references therein,
(6) the nonlinearity f(ρ) = ρ^{−1}(ρ − 1), which arises in the study of 1D-NLS type equations as a model for nearly parallel vortex filaments, see [46] and [4, Eq. (1.5)].

The cubic-quintic equation (1.16) falls within (1) of the aforementioned list and is also recovered in the small-amplitude approximation of (2) and (3) of the above examples [57, Chapter 9.3].
1.2. The energy-critical equation. We briefly discuss the Cauchy problem for the energy-critical equation for d = 3, namely the quintic equation (1.15)

i∂_tψ = −(1/2)∆ψ + (|ψ|^4 − 1)ψ.

The well-posedness of (1.15) is not addressed by Theorem 1.4. Local well-posedness for small data is introduced in [21, Theorem 1.3]. Furthermore, note that the cubic-quintic equation (1.16)

i∂_tψ = −(1/2)∆ψ + (α_5|ψ|^4 − α_3|ψ|^2 + α_1)ψ,

with α_1, α_3, α_5 > 0, α_3^2 − 4α_1α_5 > 0 and far-field (1.2), is known to be globally well-posed in the respective energy space due to [43]. The cubic-quintic nonlinearity considered satisfies Assumption 1.2 and is such that F(1) = 0 and F(ρ) > 0 for all ρ > 1. The authors rely on the affine structure of the respective energy space for d = 3, the perturbative approach introduced in [58, 59], and the well-posedness of the energy-critical nonlinear Schrödinger equation with trivial far-field [17]. This approach can be adapted to show global well-posedness of (1.15). More precisely, it is straightforward to update the perturbative argument, see [43, Eq. (1.14) and (1.15)], to the respective problem for (1.15), see also (4.3).

1.3. Outline of the paper. The remaining part of the paper is structured as follows. Section 2 provides preliminary results on the energy space E, its structure, and the action of the Schrödinger group on E. Useful estimates for the nonlinearity are collected. Section 3 establishes first local and then global well-posedness for d = 2. More precisely, Theorem 1.4 and Theorem 1.5 are proven for d = 2. In Section 4, we provide the respective proofs for d = 3. Further, Theorem 1.6 is proven. Finally, Section 5 is devoted to the proof of Theorem 1.7 and Corollary 5.1.

WELL-POSEDNESS FOR NLS WITH NON-VANISHING CONDITIONS AT INFINITY

1.4. Notations. We fix some notations. We denote by L^d the d-dimensional Lebesgue measure.
The usual Lebesgue spaces are denoted by L^p(Ω) for Ω ⊂ R^d and Lebesgue exponent p ∈ [1, ∞]. Sobolev spaces are denoted by H^s(R^d) with norm ∥f∥_{H^s(R^d)} = ∥⟨ξ⟩^s f̂∥_{L^2}, where f̂ denotes the Fourier transform. For k ∈ Z and r ∈ [1, ∞], we denote by W^{k,r} the Sobolev space with norm ∥f∥_{W^{k,r}} = Σ_{|α|≤k} ∥D^α f∥_{L^r(R^d)}. Mixed space-time Lebesgue or Sobolev spaces are indicated by L^p(I; W^{k,r}(R^d)). To shorten notations, we write L^p_t W^{k,r}_x when there is no ambiguity. Further, C(I; H^s(R^d)) and C(I; E(R^d)) denote the spaces of continuous H^s- and E-valued functions, respectively. Finally, C > 0 denotes any absolute constant.

2. The energy space and the linear propagator

In the present paper, we define the energy space E as in (1.8), see also [16, Section 2]. For the GP equation (1.5), being the prototype for (1.1) with non-vanishing far-field, the energy space considered in [22, 23] consists of the set of wave-functions of finite Ginzburg–Landau energy E_GL(ψ); the definition (1.8) is more convenient when dealing with general nonlinearities f. In general, E ⊂ {H(ψ) < +∞}, while the converse inclusion only holds under further assumptions on f. The energy space (E, d_E), endowed with the metric (1.11), can be shown to be a complete metric space and can be thought of as the analogue of H^1 for NLS equations with trivial far-field.
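For orientation, the Ginzburg–Landau energy mentioned above is, in its standard normalisation (the precise constants used in [22, 23] are not reproduced in this excerpt):

```latex
% Standard Ginzburg–Landau energy associated with the far-field |\psi| \to 1:
E_{GL}(\psi) = \frac{1}{2}\int_{\mathbb{R}^d} |\nabla\psi|^2 \,dx
             + \frac{1}{4}\int_{\mathbb{R}^d} \big(|\psi|^2 - 1\big)^2 \,dx .
```

Finiteness of E_GL forces |ψ| → 1 in an averaged sense at spatial infinity, which is the non-vanishing far-field condition (1.2).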
However, E is not a vector space, and wave functions ψ ∈ E(R^d) may exhibit oscillations at spatial infinity, in particular in low dimensions. A suitable characterisation of the energy space and of the action of the Schrödinger semigroup on E is essential for the subsequent well-posedness theory. Although many of the facts proven here can be found in the literature [22, 23, 16], we provide a self-contained characterisation of the energy space E.

We start by proving that any ψ ∈ E(R^d) can be decomposed as the sum of an X^1-function and an H^1-function, where the Zhidkov space X^1(R^d) is defined in (1.10). Following [22, Lemma 1], let χ ∈ C^∞_c(C, R) be a smooth cut-off function such that

(2.1) χ(z) = 1 for |z| ≤ 2, χ(z) ≤ 1 for z ∈ C, supp(χ) ⊂ B_3(0).

In particular, given a wave-function ψ : R^d → C we introduce

(2.2) ψ_∞ := χ(ψ)ψ, ψ_q := (1 − χ(ψ))ψ,

for which we have the following bounds.

Lemma 2.1. The energy space (E(R^d), d_E) with d_E defined by (1.11) is a complete metric space and is embedded in X^1(R^d) + H^1(R^d). In particular, for any ψ ∈ E one has

∥ψ_∞∥_{X^1(R^d)} ≤ C(1 + √E(ψ)), ∥ψ_q∥_{H^1(R^d)} ≤ C√E(ψ).

Moreover, the energy space is stable under H^1 perturbations, in the sense that E(R^d) + H^1(R^d) ⊂ E(R^d) with

(2.3) E(ψ + u) ≤ 2E(ψ) + 2∥u∥²_{H^1(R^d)}.

For d = 1, one has E(R) ⊂ X^1(R) due to Sobolev embedding.

Proof. Given the decomposition (2.2), we show that ψ_∞ ∈ X^1(R^d). As ψ_∞ ∈ L^∞(R^d), it suffices to check that

∥∇ψ_∞∥_{L^2(R^d)} = ∥χ(ψ)∇ψ + ψχ′(ψ)∇ψ∥_{L^2(R^d)} ≤ C∥∇ψ∥_{L^2(R^d)}.

The bound ψ_q ∈ L^2(R^d) follows from the pointwise inequality |ψ_q| ≤ C||ψ_q| − 1|, valid on the support of 1 − χ(ψ), and ∥∇ψ_q∥_{L^2(R^d)} ≤ C∥∇ψ∥_{L^2(R^d)}. To prove (2.3), it suffices to observe that if ψ ∈ E(R^d) and u ∈ H^1(R^d), then

∥∇(ψ + u)∥²_{L^2(R^d)} ≤ 2∥∇ψ∥²_{L^2(R^d)} + 2∥∇u∥²_{L^2(R^d)},
∥|ψ + u| − 1∥²_{L^2(R^d)} ≤ 2∥|ψ| − 1∥²_{L^2(R^d)} + 2∥u∥²_{L^2(R^d)},

by means of Minkowski's inequality. It remains to prove that (E, d_E) is a complete metric space.
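For completeness, the elementary estimate behind the two displayed bounds can be written out; it combines Minkowski's inequality (the triangle inequality in L^2) with the pointwise bound (a + b)² ≤ 2a² + 2b²:

```latex
% Minkowski's inequality followed by (a+b)^2 \le 2a^2 + 2b^2:
\|\nabla(\psi+u)\|_{L^2}^2
  \le \big(\|\nabla\psi\|_{L^2} + \|\nabla u\|_{L^2}\big)^2
  \le 2\|\nabla\psi\|_{L^2}^2 + 2\|\nabla u\|_{L^2}^2 .
% The second bound is analogous, starting from the pointwise inequality
% \big||\psi+u|-1\big| \le \big||\psi|-1\big| + |u| .
```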
One readily verifies that d_E defines a distance function on E(R^d). To check that (E, d_E) is complete, let {ψ_n}_n ⊂ E be a Cauchy sequence w.r.t. d_E. Then there exists ψ ∈ X^1 + H^1 such that ψ_n → ψ strongly in X^1 + H^1. By lower semi-continuity of norms and (1.9), it follows that ψ ∈ E. □

2.1. The structure of the energy space depending on the dimension. The structure of the energy space E(R^d) is sensitive to the dimension d. To illustrate this, we recall the following fact. Let φ ∈ D′(R^d); if ∇φ ∈ L^p(R^d) for some p < d, then there exists c ∈ C such that φ − c ∈ L^{p*}(R^d), where p* = dp/(d − p), see for instance [37, Theorem 4.5.9]. Hence, if ψ ∈ E(R^3), then ψ admits a decomposition ψ = c + v, where c ∈ C with |c| = 1 and v ∈ Ḣ^1(R^3), where

(2.4) Ḣ^1(R^3) = {v ∈ L^6(R^3) : ∇v ∈ L^2(R^3)}

denotes the completion of C^∞_0(R^3) with respect to the L^2 norm of the gradient. This observation allows for an equivalent definition of E(R^3). As in [22, Section 4], we introduce

(2.5) F_c = {v ∈ Ḣ^1(R^3) : |v|² + 2 Re(c^{−1}v) ∈ L^2(R^3)}.
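A short sketch of why the far-field constant c in the decomposition above has modulus one, spelled out for the reader's convenience: since ∥|ψ| − 1∥_{L^2} < ∞ and v ∈ L^6(R^3), Chebyshev-type bounds give, for every t > 0,

```latex
% Both superlevel sets have finite Lebesgue measure:
\mathcal{L}^3\big(\{\,||\psi|-1| \ge t\,\}\big) \le t^{-2}\,\big\||\psi|-1\big\|_{L^2}^2 < \infty,
\qquad
\mathcal{L}^3\big(\{\,|v| \ge t\,\}\big) \le t^{-6}\,\|v\|_{L^6}^6 < \infty .
% If ||c|-1| = 2t > 0, the pointwise bound ||c|-1| \le ||\psi(x)|-1| + |v(x)|
% would place every x \in \mathbb{R}^3 in one of these two sets, contradicting
% their finite total measure; hence |c| = 1.
```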
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' One readily checks that ˜δ(u, v) = ∥∇u − ∇v∥L2(R3) + ∥|u|2 + 2 Re(c−1u) − 2 Re(c−1v) − |v|2∥L2(R3) defines a distance function on Fc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' One has the following characterisation given by [22, Proposition 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='1].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Proposition 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='2 ([22]).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' For d = 3, the energy space E(R3) can be identified with the set of functions (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='6) E(R3) = {ψ = c + v, c ∈ C, |c| = 1, v ∈ Fc} .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Moreover the metric function dE is equivalent to (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='7) δ(c + v, ˜c + ˜v) = |c − ˜c| + ∥∇v − ∇˜v∥L2(R3) + ��|v|2 + 2 Re(c−1v) − |˜v|2 − 2 Re(˜c−1˜v) �� L2(R3) .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' In [22], the Proposition is stated for (EGL, dEGL).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' We prove below, see Lemma 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='6, that the two metric spaces can be identified and the equivalence of the metrics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' WELL-POSEDNESS FOR NLS WITH NON-VANISHING CONDITIONS AT INFINITY 11 Remark 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' We observe that the connected components of E(R3) are given by c + Fc(R3) for c ∈ C with |c| = 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' The energy space E(R3) is an affine space and the far-field behavior is determined by c corresponding to a phase shift.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' The affine structure of the energy space allows for an alternative approach to solve the Cauchy Problem for d = 3, as observed in [22, Remark 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='5] for (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='5) and exploited in [43] for cubic-quintic nonlinearities and far-field behavior (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='2).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Remark 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' The 2D energy space E(R2) lacks an affine structure due to non- trivial oscillations at spatial infinity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Indeed, unbounded phase oscillations at spatial infinity may occur, e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' ψ(x) = ei(2+log |x|)β with β < 1 2 is such that ψ ∈ E(R2), see [22, Remark 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='2].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Moreover, the metric space (E(R2), dE) is not separable.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' We refer to Remark 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='7 for a detailed discussion and a weakened topology for which E(R2) is connected and separable.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' 2.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' The Hamiltonian for wave-functions in the energy space.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' We observe that if ψ ∈ E(Rd), then it follows from the Chebychev inequality that (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='8) Ld({||ψ| − 1| > δ} ≤ 1 δ2 ∥|ψ| − 1∥2 L2(Rd), where Ld denotes the d-dimensional Lebesgue measure.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Consequently, if η ∈ C∞ c ([0, ∞)) with supp(η) ⊂ [ 1 2, 3 2] such that (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='9) 1[ 3 4 , 5 4 ](r) ≤ η(r) ≤ 1[ 1 2 , 3 2 ](r), then for all ψ ∈ E(Rd) the support of (1 − η(|ψ|)) is of finite Lebesgue measure (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='10) Ld(supp(1 − η(|ψ|))) ≤ 1 4E(ψ).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' The following inequality turns out to be handy for applications in the sequel.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' For any q ∈ [1, ∞) there exists Cq > 0 such that for all φ ∈ L1 loc(R2) with L2(supp(φ)) < +∞ and ∇φ ∈ L2(R2) it holds (2.' 
The following inequality turns out to be handy for applications in the sequel. For any $q \in [1,\infty)$ there exists $C_q > 0$ such that for all $\varphi \in L^1_{\mathrm{loc}}(\mathbb{R}^2)$ with $\mathcal{L}^2(\mathrm{supp}(\varphi)) < +\infty$ and $\nabla\varphi \in L^2(\mathbb{R}^2)$ it holds
$$\|\varphi\|_{L^q(\mathbb{R}^2)} \le C_q\,\|\nabla\varphi\|_{L^2(\mathbb{R}^2)}\,\big(\mathcal{L}^2(\mathrm{supp}(\varphi))\big)^{\frac{1}{q}}, \tag{2.11}$$
see for instance [16, Proof of Lemma 2.1]. For $\psi \in E(\mathbb{R}^2)$, applying (2.11) to $\varphi = \psi(1-\eta(|\psi|))$ yields $\psi(1-\eta(|\psi|)) \in L^q(\mathbb{R}^2)$ for any $q \in [1,\infty)$. Indeed, it suffices to check that
$$\nabla\big(\psi(1-\eta(|\psi|))\big) = (1-\eta(|\psi|))\nabla\psi - \eta'(|\psi|)\,\psi\,\nabla|\psi| \in L^2(\mathbb{R}^2),$$
since $(1-\eta(|\psi|)) \in L^\infty(\mathbb{R}^2)$, $\psi\,\eta'(|\psi|) \in L^\infty(\mathbb{R}^2)$, as well as $|\nabla|\psi|| \le |\nabla\psi|$ a.e. on $\mathbb{R}^2$. Under Assumption 1.1, the functional $H(\psi)$, introduced in (1.9), is bounded for all $\psi \in E(\mathbb{R}^d)$.
12 P. ANTONELLI, L.E. HIENTZSCH, AND P. MARCATI

Lemma 2.5. For $d = 2, 3$ and $f$ satisfying Assumption 1.1 one has
$$E(\mathbb{R}^d) \subset \{\psi : |H(\psi)| < +\infty\}.$$

Proof. In view of (K1) of Assumption 1.1, it suffices to use a Taylor expansion of $F$ in a small neighborhood $O$ of $1$ to show that there exist $C, C' > 0$ such that $F(|\psi|^2) \le C'(|\psi|^2-1)^2 \le C(|\psi|-1)^2$ for all $x \in \mathbb{R}^d$ such that $|\psi|^2 \in O$. Let $\delta > 0$ be such that $B(1,\delta) \subset O$ and $\eta_\delta(r) := \eta(\frac{r}{\delta})$ with $\eta$ as in (2.9) and $\psi \in E(\mathbb{R}^d)$; then
$$\int_{\mathbb{R}^d} F(|\psi|^2)\,dx = \int_{\mathbb{R}^d} F(|\psi|^2)\,\eta_\delta(|\psi|)\,dx + \int_{\mathbb{R}^d} F(|\psi|^2)\,(1-\eta_\delta(|\psi|))\,dx$$
$$\le C \int_{\mathbb{R}^d} \big||\psi|-1\big|^2\,dx + C \int_{\mathbb{R}^d} \big(1+|\psi|^{2\alpha}\big)\,\big||\psi|^2-1\big|\,(1-\eta_\delta(|\psi|))\,dx,$$
where we used (K2) of Assumption 1.1 in the last inequality. To control the second term, we consider separately the cases $d = 2, 3$. For $d = 3$, Proposition 2.2 yields that there exist $c \in \mathbb{C}$ with $|c| = 1$ and $v \in F_c(\mathbb{R}^3)$ such that $\psi = c + v$ and
$$\int_{\mathbb{R}^3} \big(1+|\psi|^{2\alpha}\big)\,\big||\psi|^2-1\big|\,(1-\eta_\delta(|\psi|))\,dx \le C \int_{\mathbb{R}^3} (1-\eta_\delta(|\psi|))\,\chi(\psi)\,dx + \int_{\mathbb{R}^3} |c+v|^{2(\alpha+1)}\,(1-\chi(\psi))\,dx$$
$$\le C E(\psi) + \|v\|_{L^6}^{2(1+\alpha)}\,E(\psi)^{\frac{2-\alpha}{3}} \le C\big(E(\psi) + E(\psi)^{\frac{5+2\alpha}{3}}\big),$$
where we used (2.10) in the second-to-last inequality and that $0 < \alpha < 2$ for $d = 3$. For $d = 2$, one has that
$$\int_{\mathbb{R}^2} \big(1+|\psi|^{2\alpha}\big)\,\big||\psi|^2-1\big|\,(1-\eta_\delta(|\psi|))\,dx \le C \int_{\mathbb{R}^2} (1-\eta_\delta(|\psi|))\,\chi(\psi)\,dx + \int_{\mathbb{R}^2} \big(1+|\psi|^{2(\alpha+1)}\big)\,(1-\chi(\psi))\,dx.$$
The first integral is bounded by $C E(\psi)$, and for the second it follows from (2.11) that
$$\|\psi(1-\chi(\psi))\|_{L^{2(\alpha+1)}(\mathbb{R}^2)}^{2(\alpha+1)} \le E(\psi)^{1+\alpha}\,\mathcal{L}^2\big(\mathrm{supp}(\psi(1-\chi(\psi)))\big) \le E(\psi)^{2+\alpha}.$$
This allows one to bound
$$\int_{\mathbb{R}^2} \big(1+|\psi|^{2\alpha}\big)\,\big||\psi|^2-1\big|\,(1-\eta_\delta(\psi))\,dx \le E(\psi) + E(\psi)^{2+\alpha}. \qquad \square$$

Next, we identify suitable conditions on $f$ under which the converse inclusion, namely $\{\psi : |H(\psi)| < +\infty\} \subset E(\mathbb{R}^d)$, holds true. First, we treat the particular case of the Gross-Pitaevskii equation (1.5), for which $H(\psi) = E_{GL}(\psi)$ and thus $E_{GL}(\mathbb{R}^d) = \{H(\psi) < +\infty\}$, see (1.6) and (1.7) respectively. It has been shown in [22], see also [23], that $(E_{GL}, d_{E_{GL}})$ with
$$d_{E_{GL}}(\psi_1, \psi_2) = \|\psi_1 - \psi_2\|_{X^1+H^1} + \big\||\psi_1|^2 - |\psi_2|^2\big\|_{L^2} \tag{2.12}$$
is a complete metric space.
It is pointed out in [16, p. 13] without proof that $E = E_{GL}$ with equivalence of the respective metrics. We provide a proof for the sake of completeness.

Lemma 2.6. Let $d \ge 1$; then $E(\mathbb{R}^d) = E_{GL}(\mathbb{R}^d)$. Moreover, for $d = 2, 3$ and any $R > 0$, there exists $C = C(R) > 0$ such that for any $\psi_1, \psi_2$ with $E(\psi_i) \le R$ for $i = 1, 2$ it holds
$$\frac{1}{C}\,d_{E_{GL}}(\psi_1, \psi_2) \le d_E(\psi_1, \psi_2) \le C\,d_{E_{GL}}(\psi_1, \psi_2). \tag{2.13}$$
Moreover, there exists $C > 0$ such that for $\psi_1, \psi_2 \in E(\mathbb{R}^d)$ and $u, v \in H^1(\mathbb{R}^d)$ it holds
$$d_E(\psi_1 + u, \psi_2 + v) \le C\big(1 + \sqrt{E(\psi_1)} + \sqrt{E(\psi_2)} + \|u\|_{H^1} + \|v\|_{H^1}\big)\big(d_E(\psi_1, \psi_2) + \|u - v\|_{H^1}\big). \tag{2.14}$$

WELL-POSEDNESS FOR NLS WITH NON-VANISHING CONDITIONS AT INFINITY 13
Remark 2.7. Lemma 2.6 allows one to infer the topological properties of $(E, d_E)$ from the results for $(E_{GL}(\mathbb{R}^d), d_{E_{GL}})$ in [22, 23]. For instance, the functional $E$ measures the distance to the circle of constants $S^1 = \{\psi \in E : E(\psi) = 0\}$ for $d = 3$ but not for $d = 2$. Indeed, it follows from Lemma 2.6 and [22, Proposition 4.3] that there exists $A > 0$ such that for every $\psi \in E(\mathbb{R}^3)$,
$$\frac{1}{A}\,d_E(\psi, S^1)^2 \le E_{GL}(\psi) \le C\,d_E(\psi, S^1)^2.$$
If $d = 2$, there exists a sequence $\{\psi_n\}$ in $E(\mathbb{R}^2)$ such that $E(\psi_n) \to 0$ but $d_E(\psi_n, S^1) \ge c_0 > 0$. Note that the complete metric space $(E_{GL}(\mathbb{R}^2), d_{E_{GL}})$ lacks an affine structure and fails to be separable.
In [23] a detailed characterisation of $E_{GL}(\mathbb{R}^d)$, including a manifold structure for $E_{GL}(\mathbb{R}^d)$, is provided. The connected components are characterised by [23, Theorem 1.8] and [23, Proposition 1.10]. A (strictly) weaker topology [23, p. 140], induced by the metric
$$d'_E(\psi_1, \psi_2) := \|\psi_1 - \psi_2\|_{L^2(B(1,0))} + \|\nabla\psi_1 - \nabla\psi_2\|_{L^2(\mathbb{R}^2)} + \big\||\psi_1|^2 - |\psi_2|^2\big\|_{L^2(\mathbb{R}^2)},$$
is introduced. It follows that $(E, d'_E)$ is connected. Relying on the decomposition of elements of $E$ provided by [23, Theorem 1.8], one can show that $(E, d'_E)$ is separable. If one only requires continuity of the solution map with respect to this weakened topology, the proof of Proposition 3.2 can be simplified.
This metric has been widely used in the study of the stability of special solutions for $d = 1$. We refer to [47], where the authors introduce new energy spaces for (1.5) and $d = 1$ in order to tackle global well-posedness in the energy space at $H^s$-regularity.

Proof. We start by showing that there exists $C > 0$ such that
$$\big\||\psi_1| - |\psi_2|\big\|_{L^2(\mathbb{R}^d)} \le C\big(\big\||\psi_1|^2 - |\psi_2|^2\big\|_{L^2(\mathbb{R}^d)} + \|\nabla\psi_1 - \nabla\psi_2\|_{L^2(\mathbb{R}^d)}\big).$$
Indeed, let $\chi_6(z) = \chi(6z)$ with $\chi$ defined in (2.1); then
$$\big\||\psi_1| - |\psi_2|\big\|_{L^2(\mathbb{R}^d)} \le \big\||\psi_1|\chi_6(\psi_1) - |\psi_2|\chi_6(\psi_2)\big\|_{L^2(\mathbb{R}^d)} + \big\||\psi_1|(1-\chi_6(\psi_1)) - |\psi_2|(1-\chi_6(\psi_2))\big\|_{L^2(\mathbb{R}^d)}.$$
The second contribution can be bounded by
$$\big\||\psi_1|(1-\chi_6(\psi_1)) - |\psi_2|(1-\chi_6(\psi_2))\big\|_{L^2(\mathbb{R}^d)} \le C\,\big\||\psi_1|^2 - |\psi_2|^2\big\|_{L^2(\mathbb{R}^d)}.$$
Next, we notice that for $i = 1, 2$ the support of $\chi_6(\psi_i)$ is of finite measure, as $\psi_i \in E(\mathbb{R}^d)$, see (2.8). For $d = 2$, by invoking (2.11) applied to $\varphi = |\psi_1|\chi_6(|\psi_1|) - |\psi_2|\chi_6(|\psi_2|)$, we conclude that
$$\big\||\psi_1|\chi_6(|\psi_1|) - |\psi_2|\chi_6(|\psi_2|)\big\|_{L^2(\mathbb{R}^2)} \le C\big(\sqrt{E(\psi_1)} + \sqrt{E(\psi_2)}\big)\big(\|\psi_1 - \psi_2\|_{X^1+H^1(\mathbb{R}^2)} + \big\||\psi_1|^2 - |\psi_2|^2\big\|_{L^2(\mathbb{R}^2)}\big).$$
For $d = 3$, one proceeds similarly, exploiting the decomposition $\psi_i = c_i + v_i$, $v_i \in F_c(\mathbb{R}^3)$, and Proposition 2.2. It holds
$$\big\||\psi_1|\chi_6(|\psi_1|) - |\psi_2|\chi_6(|\psi_2|)\big\|_{L^2(\mathbb{R}^3)} \le C\big(1 + \sqrt{E(\psi_1)} + \sqrt{E(\psi_2)}\big)\big(|c_1 - c_2| + \|\nabla v_1 - \nabla v_2\|_{L^2(\mathbb{R}^3)}\big) \le C(R)\,d_{E_{GL}}(\psi_1, \psi_2).$$
Next, we show that there exists $C = C(R) > 0$ such that
$$\big\||\psi_1|^2 - |\psi_2|^2\big\|_{L^2(\mathbb{R}^d)} \le C_1\big(\big\||\psi_1| - |\psi_2|\big\|_{L^2(\mathbb{R}^d)} + \|\psi_1 - \psi_2\|_{X^1+H^1(\mathbb{R}^d)}\big).$$
It suffices to notice that
$$\big\||\psi_1|^2\chi(\psi_1) - |\psi_2|^2\chi(\psi_2)\big\|_{L^2(\mathbb{R}^d)} \le C_1\,\big\||\psi_1| - |\psi_2|\big\|_{L^2(\mathbb{R}^d)},$$
while
$$\big\||\psi_1|^2(1-\chi(\psi_1)) - |\psi_2|^2(1-\chi(\psi_2))\big\|_{L^2(\mathbb{R}^d)} \le C\big(1 + \sqrt{E(\psi_1)} + \sqrt{E(\psi_2)} + \|\psi_{1,q}\|_{L^4(\mathbb{R}^d)} + \|\psi_{2,q}\|_{L^4(\mathbb{R}^d)}\big)\,\|\psi_{1,q} - \psi_{2,q}\|_{L^4(\mathbb{R}^d)}$$
$$\le 2C\big(1 + \sqrt{E(\psi_1)} + \sqrt{E(\psi_2)}\big)\,\|\psi_{1,q} - \psi_{2,q}\|_{L^4(\mathbb{R}^d)}.$$
In the second-to-last inequality, we used that
$$|\psi|^4\,(1-\chi(\psi)) \le C\,|\psi_q|^4, \tag{2.15}$$
with $\psi_q$ defined in (2.2), which is only valid provided $(1-\chi(\psi)) > \theta$ for some small $\theta > 0$. However, this is harmless, as
$$\mathcal{L}^2\big(\{x \in \mathrm{supp}(1-\chi(\psi)) : 0 < 1-\chi(\psi) \le \theta\}\big) \le \sqrt{E(\psi)}$$
and $|\psi| \le 3$ on the respective set. The error can be controlled at the expense of a factor $\sqrt{E(\psi)}$ in the estimate.
One has that
$$\|\psi_{1,q} - \psi_{2,q}\|_{L^4(\mathbb{R}^d)} \le C\big(\sqrt{E(\psi_1)} + \sqrt{E(\psi_2)}\big)\,\|\psi_1 - \psi_2\|_{X^1+H^1(\mathbb{R}^d)}$$
by means of (2.11) for $d = 2$ and the decomposition provided by Proposition 2.2 for $d = 3$. Finally,
$$\big\||\psi_1|^2 - |\psi_2|^2\big\|_{L^2(\mathbb{R}^d)} \le C\big(1 + \sqrt{E(\psi_1)} + \sqrt{E(\psi_2)}\big)\big(\big\||\psi_1| - |\psi_2|\big\|_{L^2(\mathbb{R}^d)} + \|\psi_1 - \psi_2\|_{X^1+H^1(\mathbb{R}^d)}\big).$$
It remains to show (2.14). The respective property is known for $d_{E_{GL}}$, see [22, Lemma 2], and hence follows from the equivalence of metrics. However, we provide a proof to track constants explicitly. Note that
$$\big\||\psi_1+u| - |\psi_2+v|\big\|_{L^2} \le \big\||\psi_1+u|\chi_6(\psi_1+u) - |\psi_2+v|\chi_6(\psi_2+v)\big\|_{L^2} + \big\||\psi_1+u|^2 - |\psi_2+v|^2\big\|_{L^2},$$
by arguing as in the first part of the proof.
By invoking (2.11), one has
$$\big\||\psi_1+u|\chi_6(\psi_1+u) - |\psi_2+v|\chi_6(\psi_2+v)\big\|_{L^2} \le C\big(\sqrt{E(\psi_1)} + \sqrt{E(\psi_2)} + \|u\|_{H^1} + \|v\|_{H^1}\big)\big(\|\psi_1 - \psi_2\|_{X^1+H^1(\mathbb{R}^d)} + \|u - v\|_{H^1}\big).$$
For the second term, one has
$$\big\||\psi_1+u|^2 - |\psi_2+v|^2\big\|_{L^2} \le \big\||\psi_1|^2 - |\psi_2|^2\big\|_{L^2} + \big\||u|^2 - |v|^2\big\|_{L^2} + \big\|2\,\mathrm{Re}(\overline{\psi_1}\,u) - 2\,\mathrm{Re}(\overline{\psi_2}\,v)\big\|_{L^2}$$
$$\le \big\||\psi_1|^2 - |\psi_2|^2\big\|_{L^2} + \big(\|u\|_{H^1} + \|v\|_{H^1}\big)\,\|u - v\|_{H^1} + 2\big\|\mathrm{Re}\big(\overline{(\psi_{1,\infty} + \psi_{1,q})}\,(u - v)\big)\big\|_{L^2} + 2\big\|\mathrm{Re}\big(\overline{(\psi_{1,q} - \psi_{2,q} + \psi_{1,\infty} - \psi_{2,\infty})}\,v\big)\big\|_{L^2}$$
$$\le \big\||\psi_1|^2 - |\psi_2|^2\big\|_{L^2} + \big(\|u\|_{H^1} + \|v\|_{H^1} + 1 + E(\psi_1)\big)\,\|u - v\|_{H^1} + \|v\|_{H^1}\,d_E(\psi_1, \psi_2)$$
$$\le C\big(1 + \sqrt{E(\psi_1)} + \sqrt{E(\psi_2)} + \|u\|_{H^1} + \|v\|_{H^1}\big)\big(d_E(\psi_1, \psi_2) + \|u - v\|_{H^1}\big). \qquad \square$$

Next, we provide a sufficient condition on $f$ under which the space of functions with finite Hamiltonian energy is included in $E$. To that end, we require Assumption 1.2 to be satisfied. From $F(1) = F'(1) = f(1) = 0$ and a Taylor expansion it follows that
$$F(r) \simeq \tfrac{1}{2} f'(1)(r-1)^2 \tag{2.16}$$
in a small neighborhood of $1$.
Hence, there exists $\delta > 0$ such that for all $r \in (1-\delta, 1+\delta)$ there exist $C_1, C_2 > 0$ such that
$$\frac{1}{C_2}(|\psi|-1)^2 \le \frac{1}{C_1}(|\psi|^2-1)^2 \le F(|\psi|^2) \le C_1(|\psi|^2-1)^2 \le C_2(|\psi|-1)^2 \tag{2.17}$$
provided that $||\psi|^2 - 1| < \delta$. The nonlinear potential $F$ is locally convex in a neighborhood of $1$. It was shown in [16, Lemma 4.8] that requiring in addition that the nonlinear potential be non-negative, namely $F \ge 0$, and hence the Hamiltonian energy sign-definite, implies that $E = \{H(\psi) < \infty\}$. Note that the condition $F \ge 0$ is for instance satisfied for the pure power-type nonlinearities in (1.4).
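The chain of inequalities in (2.17) combines (2.16) with an elementary factorization; spelled out (and assuming, as the lower bound in (2.17) requires, that $f'(1) > 0$):

```latex
% Factorize the quartic term:
\[
(|\psi|^2-1)^2 = (|\psi|-1)^2 (|\psi|+1)^2,
\qquad \tfrac94 \le (|\psi|+1)^2 \le \tfrac{25}{4}
\ \text{ whenever } \ ||\psi|-1| \le \tfrac12 ,
\]
% so (|\psi|-1)^2 and (|\psi|^2-1)^2 are comparable near |\psi| = 1, while
% (2.16) makes F(|\psi|^2) comparable to (|\psi|^2-1)^2 there. Chaining the
% two equivalences yields the two-sided bound (2.17).
```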
Lemma 2.8. Let $d = 2, 3$ and let Assumption 1.2 be satisfied. If in addition $F \ge 0$, then $E = \{H(\psi) < \infty\}$. In particular, there exists an increasing function $g : (0,\infty) \to [0,\infty)$ with $\lim_{r \to 0} g(r) = 0$ such that
$$E(\psi) \le g(H(\psi)). \tag{2.18}$$
By exploiting Lemma 2.6 and the conservation of the Hamiltonian along solutions to (1.1), it is then possible to extend the local solutions globally in time. Notice that in the framework of NLS equations with a trivial far-field, the blow-up alternative is given in terms of the $H^1$-norm, whereas here it involves $E(\psi)$.
In the classical, integrable case, it is possible to infer the analogue of (2.18) under less restrictive assumptions on $F$; for instance, it is possible to consider mass-subcritical focusing nonlinearities. In this case, indeed, the analogue of (2.18) is derived by exploiting Gagliardo-Nirenberg inequalities. However, the lack of a suitable control of the mass in our case prevents us from considering more general nonlinearities.

Proof. We sketch the proof; see [16] for full details. First, we borrow from [16, Equation (1.18)] the following equivalent definition of $E_{GL}(\mathbb{R}^d) = E(\mathbb{R}^d)$. Let $\phi \in C^\infty(\mathbb{R})$ be such that $\phi(r) = r$ for $r \in [0,2]$, $0 \le \phi' \le 1$ on $\mathbb{R}$, and $\phi(r) = 3$ for $r \ge 4$.
We define the modified Ginzburg-Landau energy
$$E_{mGL}(\psi) = \int_{\mathbb{R}^d} |\nabla\psi|^2 + \frac{1}{2}\big(\phi(|\psi|)^2 - 1\big)^2\,dx.$$
The functional $E_{GL}$ is well-approximated by $E_{mGL}$. Indeed, it is shown in [16, Section 2] that
$$E_{GL}(\mathbb{R}^d) = \{\psi \in L^1_{\mathrm{loc}}(\mathbb{R}^d) : \nabla\psi \in L^2(\mathbb{R}^d),\ \phi(|\psi|)^2 - 1 \in L^2(\mathbb{R}^d)\}.$$
Since $|\phi(|\psi|)^2 - 1| \le 4\,||\psi| - 1|$, one has $\phi(|\psi|)^2 - 1 \in L^2(\mathbb{R}^d)$ if $\psi \in E(\mathbb{R}^d)$. For the converse, see [16, Lemma 2.1]. We sketch the main idea. On the set where $|\psi(x)| \le 2$, one has $\phi(|\psi|)^2 = |\psi|^2$ and hence the desired bound follows. Further, $\mathcal{L}^d(\{x : ||\psi(x)| - 1| > \frac{3}{2}\}) < +\infty$ from the Chebychev inequality (2.8) if $\phi(|\psi|)^2 - 1 \in L^2(\mathbb{R}^d)$. By means of (2.11) for $d = 2$ and Sobolev embedding for $d = 3$, one concludes. Finally, there exist $C > 0$ and an increasing function $m : \mathbb{R}_+ \to \mathbb{R}_+$ with $\lim_{r \to 0} m(r) = 0$ such that
$$\frac{1}{4} E_{mGL}(\psi) \le E(\psi) \le C\,m(E_{mGL}(\psi)),$$
see [16, Corollary 4.3]. Second, we note that it suffices to establish inequality (2.18) for $E$ replaced by $E_{mGL}$.
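As an aside, the pointwise bound $|\phi(|\psi|)^2 - 1| \le 4\,||\psi|-1|$ invoked above is elementary, using only the stated properties of $\phi$:

```latex
% \phi(1) = 1 (since \phi(r) = r on [0,2]), 0 \le \phi' \le 1, 0 \le \phi \le 3:
\[
|\phi(|\psi|)^2 - 1|
  = |\phi(|\psi|) - \phi(1)| \, |\phi(|\psi|) + 1|
  \le \|\phi'\|_{L^\infty} \, ||\psi| - 1| \cdot 4
  \le 4\, ||\psi| - 1| .
\]
```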
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='17), it suffices to consider the region where {x : ||ψ| − 1| ≥ δ}.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' If inf F > 0 on {x : ||ψ| − 1| ≥ δ}, then it is clear that � {||ψ|−1|≥δ} � ϕ(|ψ|)2 − 1 �2 dx ≤ C � {||ψ|−1|≥δ} F(|ψ|2)dx.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' It follows that E(ψ) can be controlled in terms of H(ψ).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' More in general, provided that F ≥ 0, it follows from [16, Lemma 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='8] that for all ψ with |H(ψ)| < ∞ there exist C1 = C1(H(ψ)) > 0 and C2 = C2(H(ψ)) > 0 such that C1 (H(ψ)) ≤ EmGL(ψ) ≤ C2 (H(ψ)) .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' The statement of Lemma 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='8 follows.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' □ Remark 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' System (1.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='1) is closely related to the QHD system with non-trivial far-field.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' In a reminiscent analysis, the regularity and integrability properties of its unknowns (ρ, J) corresponding to the mass density ρ = |ψ|2 and momentum density J = Im(ψ∇ψ) are then captured in terms of Orlicz spaces, see [3] and [35, Chapter 2] as well as [2, 36] for the respective uniform bounds for solutions to the quantum Navier-Stokes equations, a viscous regularization of the QHD system.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Smooth approximation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Elements of the energy space can be approxi- mated by smooth functions via convolution with a smooth mollifier.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Lemma 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Let ψ ∈ E(Rd), then there exists {ψn}n∈N ⊂ C∞(Rd)∩E(Rd) such that dE(ψ, ψn) → 0, as n → 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Moreover, for any ψ ∈ E(Rd), there exists ϕ ∈ C∞ b (Rd) ∩ E(Rd) such that ∇ϕ ∈ H∞(Rd) and (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='19) ψ − ϕ ∈ H1(Rd).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' The first statement is proven in [22, Lemma 6] by considering the convolution with a standard mollification kernel and the second statement follows from [21, Proposition 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' In [22, 21], the statements are given for (EGL, dEGL) being equiv- alent to (E, dE) by virtue of Lemma 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Action of the linear propagator on the energy space.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' The action of the linear Schr¨odinger group on the space Xk(Rd) + Hk(Rd) is well-defined, see [22, Lemma 3] and also [23].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' While the results in [22, 23] are stated for (EGL, dEGL), we state them (E, dE) which by Lemma 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='6 is equivalent.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Lemma 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='11 ([22]).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Let d be a positive integer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' For every k, for every t ∈ R, the operator e i 2 t∆ maps Xk(Rd) + Hk(Rd) into itself and it satisfies (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='20) ∥e i 2 t∆f∥Xk+Hk ≤ C (1 + t) 1 2 ∥f∥Xk+Hk, and (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='21) ∥e i 2 t∆f − f∥L2 ≤ C|t| 1 2 ∥∇f∥L2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' WELL-POSEDNESS FOR NLS WITH NON-VANISHING CONDITIONS AT INFINITY 17 Moreover, if f ∈ Xk(Rd) + Hk(Rd), the map t ∈ R �→ e i 2 t∆f ∈ Xk(Rd) + Hk(Rd) is continuous.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' For d = 1, we notice that Xk(R) + Hk(R) ⊂ Xk(R) for any k positive integer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' The action of e i 2 t∆ on X1(R) has been studied in [61, 63], see also [20] for the action of the linear propagator on Zhidkov spaces Xk(Rd) with d > 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' The action of the linear Schr¨odinger group on the space E(Rd) is described by [22, Proposition 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='3].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Proposition 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='12 ([22]).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Let d = 2, 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' For every t ∈ R, the linear propagator e i 2 t∆ maps E(Rd) to itself and for every ψ ∈ E(Rd) the map t ∈ R �→ e i 2 t∆ψ0 ∈ E(Rd) is continuous.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Moreover, given R > 0, T > 0 there exists C > 0 such that for every ψ1 0, ψ2 0 ∈ E(Rd) with E(ψ1 0) ≤ R, E(ψ2 0) ≤ R one has (2.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='22) sup |t|≤T dE(e i 2 t∆ψ1 0, e i 2 t∆ψ2 0) ≤ CdE(ψ1 0, ψ2 0).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Further, given R > 0, there exists T (R) > 0 such that, for every ψ0 ∈ E(Rd) with E(ψ0) ≤ R, we have (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='23) sup |t|≤T (R) E(e i 2 t∆ψ0) ≤ 2R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Corollary 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='13.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Let d = 2, 3 and ψ0 ∈ E(Rd), then (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='24) lim t→0 e i 2 t∆ψ0 − ψ0 t = − i 2∆ψ0 in H−1(Rd).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' In particular, e i 2 t∆ψ0 ∈ C(R;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' E(Rd)) ∩ C1(R, H−1(Rd)).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Proof.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Note that e i 2 t∆ψ0 − ψ0 ∈ L2(Rd) for any finite time t ∈ R by virtue of (2.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='21).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' For any φ ∈ H1(Rd), it follows from Plancherel’s identity and the dominated convergence theorem that lim t→0 � Rd e i 2 t∆ψ0 − ψ0 t φ(x)dx = lim t→0 � Rd e i 2 t|ξ|2 ˆψ0 − ˆ ψ0 t ˆφ(ξ)dξ = lim t→0 � Rd i 2|ξ|2 �� 1 0 eits|ξ|2� ˆ ψ0(ξ)ˆφ(ξ)dξ = � Rd(− i 2∆ψ0(x))φ(x)dx.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' The identity (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='24) follows.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' □ 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Strichartz estimates.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' We say that a pair (q, r) is (Schr¨odinger) admissible if q, r ≥ 2 such that 2 q + d r = d 2, (q, r, d) ̸= (2, ∞, 2), and we recall the well-known Strichartz estimates, see [40] and references therein.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Lemma 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='14.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Let d = 2, 3 and (q, r) be an admissible pair.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Then the linear propagator satisfies, ∥e i 2 t∆u∥Lq([0,T ];' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='Lr(Rd)) ≤ C∥u∥L2(Rd), and for any (q1, r1) admissible pair one has (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='25) ���� � t 0 e i 2 (t−s)∆f(s)ds ���� Lq([0,T ];' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='Lr(Rd) ≤ C∥f∥Lq′ 1([0,T ];' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='Lr′ 1(Rd)).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' 18 P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' ANTONELLI, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' HIENTZSCH, AND P.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' MARCATI Given a time interval I = [0, T ], it is convenient to introduce the Strichartz space S0(I × Rd) characterised by the norm ∥u∥S0 := sup (q,r)admissible ∥u∥Lq(I;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='Lr(Rd)).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' We notice that since (q, r) = (∞, 2) is admissible one has (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='26) ∥u∥C(I;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='L2(Rd)) ≲ ∥u∥S0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Moreover, we introduce the dual space N 0 = (S0(I × Rd))∗ satisfying the estimate (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='27) ∥f∥N 0 ≲ ∥f∥Lq′ 1(I;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='Lr′ 1(Rd)), for any admissibile pair (q1, r1).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Further, in order to discuss the well-posedness theory for (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='1) in the energy space, we also work with the function space S1(I×Rd) and N 1(I × Rd) defined by the norms (2.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='28) ∥u∥S1 = ∥u∥S0 + ∥∇u∥S0, ∥G∥N 1 = ∥G∥N 0 + ∥∇G∥N 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' While ψ ̸∈ S0 for any solution to (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='1) to l in any Strichartz space S0, it will turn out that the nonlinear flow belongs to S1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Remark 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='15.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Let T > 0 and ψ0 ∈ E(Rd), then Lemma 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='14 states that for any admissible pair (q, r) it holds (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='29) ���e i 2 t∆∇ψ0 ��� Lq([0,T ];' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='Lr(Rd)) ≤ ∥∇ψ0∥L2(Rd).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' In virtue of Lemma 2.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='11, one has e i 2 t∆ψ0 − ψ0 ∈ C([0, T ];' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' H1(Rd)) and ∇e i 2 t∆ψ0 ∈ C([0, T ];' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' L2(Rd)) ∩ S0([0, T ] × Rd)).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' The nonlinearity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' We collect some properties of the nonlinearity N(ψ) = f(|ψ|2)ψ, with f satisfying Assumption 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='1, that will be used in the sequel.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' By applying smooth cut-off functions, we separate the behavior close and away from |ψ| = 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Let η ∈ C∞ c (R+) be given by (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='9), we define (2.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='30) N1(ψ) := N(ψ)η(|ψ|), N2(ψ) := N(ψ)(1 − η(|ψ|)).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' By means of the cut-off χ defined in (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='1), we further split N2 as (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='31) N2,∞ = N2(ψ)χ(2ψ), N2,q(ψ) = N2(ψ)(1 − χ(2ψ)) and notice that (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='32) |N1(ψ)| ≤ C ||ψ| − 1| , |N2,∞(ψ)| ≤ C(1 − η(|ψ|), |N2,q(ψ)| ≤ C|ψ|2α+1(1 − χ(ψ)).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' In the case of vanishing boundary conditions and infinity, the strategy developed in [38], see also [13, Chapter 4], relies on similar pointwise bounds on N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' However, here we need to consider additional cut-off functions η isolating the behavior close to 1 in view of the far-field and the related support properties.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Note that (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='8) yields that the measure of supp(N2(ψ)) is bounded by E(ψ).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' The quantity ∇N can be rigorously defined by means of Nemicki operators, see [38, Appendix A] and also [39, 13].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' It reads (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='33) ∇N(ψ) = � f(|ψ|2) + f ′(|ψ|2)|ψ|2� ∇ψ + f ′(|ψ|2)ψ2∇ψ, so that we have (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='34) |∇N(ψ)| ≲ (|f(ρ) + ρf ′(ρ)| + |ρf ′(ρ)|) |∇ψ|.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' WELL-POSEDNESS FOR NLS WITH NON-VANISHING CONDITIONS AT INFINITY 19 Inequalities (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='32) and (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='34) will allow us to infer bounds on the nonlinearity in the Strichartz space N 1 defined in (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='28).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Moreover, (K2) of Assumption 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='1 implies that the nonlinearity N(ψ) is locally Lipschitz.' 
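The identity (2.33) is the standard chain rule for ψ ↦ f(|ψ|²)ψ. As a sketch of the computation (restoring the complex conjugate on the last gradient, which the plain-text extraction appears to have dropped):

```latex
% Chain rule for N(\psi) = f(|\psi|^2)\psi, with \rho = |\psi|^2.
% Since |\psi|^2 = \psi\bar\psi, one has \nabla(|\psi|^2) = \bar\psi\,\nabla\psi + \psi\,\nabla\bar\psi.
\begin{aligned}
\nabla N(\psi)
&= f(|\psi|^2)\,\nabla\psi + f'(|\psi|^2)\,\nabla(|\psi|^2)\,\psi \\
&= f(|\psi|^2)\,\nabla\psi + f'(|\psi|^2)\bigl(\bar\psi\,\nabla\psi + \psi\,\nabla\bar\psi\bigr)\psi \\
&= \bigl(f(|\psi|^2) + f'(|\psi|^2)|\psi|^2\bigr)\nabla\psi + f'(|\psi|^2)\,\psi^2\,\nabla\bar\psi,
\end{aligned}
```

which is (2.33). Taking absolute values and using |ψ²∇ψ̄| = ρ|∇ψ| then yields the pointwise bound (2.34).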
More precisely,

(2.35) |N(ψ₁) − N(ψ₂)| ≤ C(1 + |ψ₁|^{2α} + |ψ₂|^{2α})|ψ₁ − ψ₂|.

For general ψ₁, ψ₂ ∈ E(R^d) one has ψ₁ − ψ₂ ∉ L^p(R^d) for any p ≥ 1, unless ψ₁ and ψ₂ belong to the same connected component of E(R^d); see Remarks 2.3 and 2.4 for d = 2, 3 respectively. This motivates the following estimates:

(2.36)
|N₁(ψ₁) − N₁(ψ₂)| ≤ C(|ψ₁| ||ψ₁| − |ψ₂|| + ||ψ₂| − 1| η(|ψ₂|) |ψ₁ − ψ₂|),
|N₂,∞(ψ₁) − N₂,∞(ψ₂)| ≤ C|ψ₁ − ψ₂|,
|N₂,q(ψ₁) − N₂,q(ψ₂)| ≤ C(|ψ₁|^{2α} + |ψ₂|^{2α})|ψ₁ − ψ₂|.

Inequalities (2.36) will then lead to respective bounds in the Strichartz space N⁰. Similarly, we introduce the following estimates for ∇N(ψ).
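The local Lipschitz bound (2.35) can be read off from the fundamental theorem of calculus along the segment joining ψ₂ to ψ₁, assuming (as the growth bounds in (2.39) suggest) that the differential of N satisfies |DN(ψ)| ≤ C(1 + |ψ|^{2α}):

```latex
% Sketch of (2.35): interpolate between \psi_1 and \psi_2 and use the growth of DN.
N(\psi_1) - N(\psi_2) = \int_0^1 \frac{d}{ds}\, N(\psi_s)\,ds,
\qquad \psi_s := \psi_2 + s(\psi_1 - \psi_2),
```

so that, using |DN(ψ_s)| ≤ C(1 + |ψ_s|^{2α}) and |ψ_s| ≤ |ψ₁| + |ψ₂|, one obtains |N(ψ₁) − N(ψ₂)| ≤ C(1 + |ψ₁|^{2α} + |ψ₂|^{2α})|ψ₁ − ψ₂|, up to a harmless change of the constant C.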
One has

∇N(ψ) = DN(ψ) · (∇ψ, ∇ψ̄)ᵀ = (G₁(ψ), G₂(ψ)) (∇ψ, ∇ψ̄)ᵀ,

where

(2.37) G₁(ψ) = f(|ψ|²) + f′(|ψ|²)|ψ|²,  G₂(ψ) = f′(|ψ|²)ψ².

We define G_{i,∞}(ψ) := G_i(ψ)χ(ψ), G_{i,q}(ψ) := G_i(ψ)(1 − χ(ψ)), for i = 1, 2. For the sake of a shorter notation we introduce

(2.38) G_∞ := G_{1,∞} + G_{2,∞},  G_q := G_{1,q} + G_{2,q}.

In particular, we observe that Assumption 1.1 yields that

(2.39) |G_∞(ψ)| ≤ C,  |G_q(ψ)| ≤ C(1 + |ψ_q|^{2α})(1 − χ(ψ)).

3. 2D well-posedness

Local well-posedness for energy sub-critical nonlinearities is proven by a perturbative method in the spirit of Kato [38], adapted to the non-trivial far-field behavior. Subsequently, we prove global well-posedness in Section 3.2.

3.1. Local well-posedness. First, we provide the necessary a priori bounds on the nonlinearity N(ψ) in the Strichartz norms for ψ ∈ E(R²), which will follow from (2.32) and (2.34). We notice that (q₁, r₁) = (2(α+1)/α, 2(α+1)) is Strichartz admissible and one has

(3.1) (q₁′, r₁′) = (2(α+1)/(α+2), 2(α+1)/(2α+1)).

We recall that the space N⁰ is defined in (2.27).
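For d = 2, the admissibility of the pair (q₁, r₁) and the dual exponents in (3.1) can be checked directly from the condition 2/q + d/r = d/2:

```latex
% Admissibility of (q_1, r_1) = (2(\alpha+1)/\alpha,\; 2(\alpha+1)) in d = 2:
\frac{2}{q_1} + \frac{2}{r_1}
= \frac{\alpha}{\alpha+1} + \frac{1}{\alpha+1} = 1 = \frac{d}{2}.
% H\"older-dual exponents, 1/q_1' = 1 - 1/q_1 and 1/r_1' = 1 - 1/r_1:
q_1' = \frac{2(\alpha+1)}{\alpha+2}, \qquad r_1' = \frac{2(\alpha+1)}{2\alpha+1},
```

which recovers (3.1).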
It suffices to consider positive times of existence, as the analogous statements for negative times follow from the time reversal symmetry of (1.1). For $\psi \in L^\infty([0,T]; E(\mathbb{R}^d))$ we denote
\[
(3.2)\qquad Z_T := \|\nabla\psi\|_{L^\infty([0,T];L^2(\mathbb{R}^2))} + \big\||\psi| - 1\big\|_{L^\infty([0,T];L^2(\mathbb{R}^2))}
\]
and note that $Z_T(\psi) \le 2\sup_{t\in[0,T]} \sqrt{E(\psi)(t)}$. The quantity $Z_T(\psi)$ can be thought of as an analogue of the $L^\infty_t H^1_x$-norm for nonlinear Schrödinger equations with vanishing conditions at infinity.

P. ANTONELLI, L.E. HIENTZSCH, AND P. MARCATI

Lemma 3.1. Let the nonlinearity $f$ be such that Assumption 1.1 is satisfied, $T > 0$, the pair $(q_1', r_1')$ as in (3.1), and $\psi \in L^\infty([0,T]; E(\mathbb{R}^2))$. Then the following hold:
\[
(3.3)\qquad \|N(\psi)\|_{L^1([0,T];L^2(\mathbb{R}^2))} \le CT\left( Z_T(\psi) + Z_T(\psi)^{1+2\alpha} \right),
\]
and
\[
(3.4)\qquad \|\nabla N(\psi)\|_{N^0([0,T]\times\mathbb{R}^2)} \le C\left( T + T^{\frac{1}{q_1'}} Z_T(\psi)^{2\alpha} \right) \|\nabla\psi\|_{L^\infty([0,T];L^2(\mathbb{R}^2))}.
\]
Furthermore, given $\psi \in L^\infty([0,T]; E(\mathbb{R}^2))$ and $u, v \in L^\infty([0,T]; H^1(\mathbb{R}^2))$, one has
\[
(3.5)\qquad \|N(\psi+u) - N(\psi+v)\|_{N^0([0,T]\times\mathbb{R}^2)} \le C\left( T + T^{\frac{1}{q_1'}}\left( Z_T(\psi+u)^{2\alpha} + Z_T(\psi+v)^{2\alpha} \right) \right) \|u - v\|_{L^\infty([0,T];L^2(\mathbb{R}^2))}.
\]

Proof. Let $\psi \in E(\mathbb{R}^2)$. To infer (3.3), we observe that (2.32) implies
\[
\|N_1(\psi)\|_{L^1_t L^2_x} \le CT \big\||\psi| - 1\big\|_{L^\infty_t L^2_x} \le CT\, Z_T(\psi).
\]
To obtain the bound on $N_2(\psi)$, we note that the Chebyshev inequality (2.8) yields that $\operatorname{supp}(1 - \eta(\psi))$ is of finite Lebesgue measure for all $\psi \in E(\mathbb{R}^2)$. It then follows from Lemma 2.1 and (2.32) that
\[
\|N_{2,\infty}(\psi)\|_{L^1_t L^2_x} \le CT\, \mathcal{L}^2\big(\operatorname{supp}(1 - \eta(|\psi|))\big)^{\frac{1}{2}} \le CT\, Z_T(\psi).
\]
By exploiting that $\operatorname{supp}(1 - \eta(\psi)) \subset \operatorname{supp}(1 - \chi(\psi))$ for $\psi \in E(\mathbb{R}^2)$ and by (2.8), we bound the third contribution as
\[
\|N_{2,q}(\psi)\|_{L^1_t L^2_x} \le C\big\||\psi|^{2\alpha}|\psi|(1 - \chi(\psi))\big\|_{L^1_t L^2_x} \le CT\, Z_T(\psi) + CT\, \|\psi_q\|^{1+2\alpha}_{L^\infty_t L^{2(1+2\alpha)}_x} \le CT\left( Z_T(\psi) + Z_T(\psi)^{1+2\alpha} \right),
\]
where $\psi_q$ is defined in (2.2), with $\chi$ given in (2.1). In the second-to-last inequality, we used that
\[
(3.6)\qquad |\psi|^{2\alpha+1}(1 - \chi(\psi)) \le C\left( \mathbf{1}_{\{0 < 1-\chi(\psi) \le 1/4\}} + |\psi_q|^{2\alpha+1} \right),
\]
and $\mathcal{L}^2\left(\{x \in \operatorname{supp}(1-\chi(\psi)) : 0 < 1-\chi(\psi) \le 1/4\}\right) \le Z_T(\psi)^2$.
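The measure bound at the end of the previous display is a Chebyshev-type step; a short sketch, under the assumption (suggested by the role of the cut-off in (2.1) and (2.8)) that $0 < 1-\chi(\psi) \le 1/4$ forces $\big||\psi|-1\big| \ge c$ for some fixed $c > 0$ depending only on the cut-off:

```latex
% On the set where 0 < 1 - \chi(\psi) \le 1/4, the cut-off keeps |\psi| a fixed
% distance c > 0 away from 1, so Chebyshev's inequality gives (up to a constant
% depending only on c, absorbed in the notation)
\mathcal{L}^2\big(\{x : 0 < 1-\chi(\psi) \le 1/4\}\big)
  \le \mathcal{L}^2\big(\{x : \big||\psi(x)|-1\big| \ge c\}\big)
  \le c^{-2}\,\big\||\psi|-1\big\|_{L^2(\mathbb{R}^2)}^2
  \le C\, Z_T(\psi)^2 .
```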
To control $\nabla N(\psi)$, we observe that by using (2.34) and decomposing $\psi = \psi_\infty + \psi_q$, see (2.2), it follows that
\[
\|\nabla N(\psi)\|_{L^1_t L^2_x + L^{q_1'}_t L^{r_1'}_x} \le CT\|\nabla\psi\|_{L^\infty_t L^2_x} + \big\||\psi_q|^{2\alpha}\nabla\psi\big\|_{L^{q_1'}_t L^{r_1'}_x} \le C\left( T + T^{\frac{1}{q_1'}} Z_T(\psi)^{2\alpha} \right)\|\nabla\psi\|_{L^\infty_t L^2_x}.
\]
It remains to show (3.5). Let $\psi \in L^\infty([0,T]; E(\mathbb{R}^2))$ and $u, v \in L^\infty([0,T]; H^1(\mathbb{R}^2))$. Then, (2.35) implies the pointwise bound
\[
|N(\psi+u) - N(\psi+v)| \le C\left( 1 + |\psi+u|^{2\alpha} + |\psi+v|^{2\alpha} \right)|u - v|.
\]
Exploiting that $E(\mathbb{R}^2) + H^1(\mathbb{R}^2) \subset E(\mathbb{R}^2)$ from Lemma 2.1, we proceed as before to infer that for a.e. $t \in [0,T]$ it holds that
\[
\big\||\psi+u|^{2\alpha}\big\|_{L^\infty_x + L^{q_1}_x} + \big\||\psi+v|^{2\alpha}\big\|_{L^\infty_x + L^{q_1}_x} \le C\left( 1 + Z_T(\psi+u)^{2\alpha} + Z_T(\psi+v)^{2\alpha} \right).
\]

WELL-POSEDNESS FOR NLS WITH NON-VANISHING CONDITIONS AT INFINITY

It follows that
\[
\|N(\psi+u) - N(\psi+v)\|_{L^1_t L^2_x + L^{q_1'}_t L^{r_1'}_x} \le C\left( T + T^{\frac{1}{q_1'}}\left( Z_T(\psi+u)^{2\alpha} + Z_T(\psi+v)^{2\alpha} \right) \right)\|u - v\|_{L^\infty_t L^2_x},
\]
yielding (3.5). □

With the bounds of Lemma 3.1 and the Strichartz estimates of Lemma 2.14 at hand, we are able to prove existence and uniqueness of solutions to (1.1). To that end, we consider the equivalent Duhamel formula
\[
(3.7)\qquad \psi(t) = e^{\frac{i}{2}t\Delta}\psi_0 - i\int_0^t e^{\frac{i}{2}(t-s)\Delta} N(\psi)(s)\,ds,
\]
which is justified as an identity in $E(\mathbb{R}^2)$ by virtue of the properties of the free solutions from Proposition 2.12 and the fact that the non-homogeneous term is bounded in $L^\infty_t H^1_x$ by means of the Strichartz estimates (2.25) and Lemma 3.1. We anticipate that the continuous dependence on the initial data differs significantly from the classical approach as a consequence of the low regularity of the nonlinearity $N$ combined with the lack of integrability of $\psi$. The constructed solutions are such that $\psi(t) - \psi_0 \in H^1(\mathbb{R}^2)$ for all $t$, and hence (3.5) suffices to show local existence. Note that in order to show the continuous dependence on the initial data, (3.5) is not sufficient, as in general different initial data possess different far-field behavior, namely belong to different connected components of $E$; see also Remark 2.4. Lemma 3.3 upgrades (3.5) to the respective inequality for general initial data. The following proposition is stated for positive existence times; the analogous statement for negative times follows by exploiting the time reversal symmetry of (1.1).

Proposition 3.2. Let $d = 2$ and $f$ be such that Assumption 1.1 is satisfied.
Then,
(1) for any $\psi_0 \in E(\mathbb{R}^2)$, there exists $T = T(E(\psi_0)) > 0$ and a unique strong solution $\psi \in C([0,T]; E(\mathbb{R}^2))$ to (1.1) with $\psi(0) = \psi_0$. In particular, $\psi - \psi_0 \in C([0,T]; H^1(\mathbb{R}^2))$;
(2) there exists a maximal existence time $T^* = T^*(\psi_0) > 0$ such that $\psi \in C([0,T^*); E(\mathbb{R}^2))$ and the blow-up alternative holds, namely if $T^* < \infty$ then $\lim_{t \nearrow T^*} E(\psi)(t) = +\infty$;
(3) for any $\psi_0^* \in E(\mathbb{R}^2)$ there exists an open neighborhood $O \subset E(\mathbb{R}^2)$ of $\psi_0^*$ such that $T^*(O) = \inf_{\psi_0 \in O} T^*(\psi_0) > 0$, and the map $\psi_0 \in O \mapsto \psi \in C([0,T]; E(\mathbb{R}^2))$ is continuous for all $0 < T < T^*(O)$. Moreover, let $O_r = \{\psi_0 \in E(\mathbb{R}^2) : d_E(\psi_0^*, \psi_0) < r\}$; then $\liminf_{r \to 0} T^*(O_r) \ge T^*(\psi_0^*)$.

Point (1) of Proposition 3.2 is included in (2). Nevertheless, it is stated separately, as it proves useful for the proof of the continuous dependence property in (3).

Proof. Local existence. We note that $\psi \in C([0,T]; E(\mathbb{R}^2))$ is a strong solution to (1.1) with initial data $\psi_0 \in E(\mathbb{R}^2)$ iff
\[
\psi(t) = e^{\frac{i}{2}t\Delta}\psi_0 - i\int_0^t e^{\frac{i}{2}(t-s)\Delta} N(\psi)(s)\,ds
\]
for all $t \in [0,T]$. To show existence of a solution $\psi$, it suffices to implement a fixed-point argument for the solution map
\[
(3.8)\qquad \Phi(u)(t) = -i\int_0^t e^{\frac{i}{2}(t-s)\Delta} N\big(e^{\frac{i}{2}s\Delta}\psi_0 + u(s)\big)\,ds.
\]
Indeed, $\psi(t) = e^{\frac{i}{2}t\Delta}\psi_0 + u(t)$ satisfies $\psi \in C([0,T]; E(\mathbb{R}^2))$ if $u \in X_T$ and $\psi_0 \in E(\mathbb{R}^2)$. It follows from Proposition 2.12 that $e^{\frac{i}{2}t\Delta}\psi_0 \in C([0,T]; E(\mathbb{R}^2))$, and Lemma 2.1 yields that $e^{\frac{i}{2}t\Delta}\psi_0 + u \in C([0,T]; E(\mathbb{R}^2))$. If $u$ is a fixed point of (3.8), then $\psi = e^{\frac{i}{2}t\Delta}\psi_0 + u$ is a local strong solution of (1.1). Let $\psi_0 \in E$ and $R > 0$ be such that $E(\psi_0) \le R$; given $M > 0$ and $T > 0$, we consider the solution map (3.8) defined on the function space
\[
X_T = \left\{ u \in C([0,T]; H^1(\mathbb{R}^2)) : u(0) = 0,\ \|u\|_{X_T} \le M \right\}.
\]
For $u, v \in X_T$, we introduce the distance function $d_X$ as
\[
d_X(u,v) = \|u - v\|_{L^\infty([0,T];L^2(\mathbb{R}^2))}.
\]
It is straightforward to verify that the space $(X_T, d_X)$ is a complete metric space. If $E(\psi_0) \le R$ and $u \in X_T$, then thanks to the Minkowski inequality and (2.23) we obtain
\[
(3.9)\qquad Z_T\big(e^{\frac{i}{2}t\Delta}\psi_0 + u\big) \le Z_T\big(e^{\frac{i}{2}t\Delta}\psi_0\big) + \|u\|_{L^\infty([0,T];H^1(\mathbb{R}^2))} \le 2\sqrt{2R} + M,
\]
provided that $T > 0$ is sufficiently small. Next, we show that $\Phi$ defined in (3.8) maps $X_T$ into itself. Let $u \in X_T$ and denote $\psi = e^{\frac{i}{2}t\Delta}\psi_0 + u$; then by virtue of the Strichartz estimate (2.25), (3.3) and (3.9) we obtain
\[
(3.10)\qquad \|\Phi(u)\|_{L^\infty([0,T];L^2(\mathbb{R}^2))} \le \|N(\psi)\|_{L^1([0,T];L^2(\mathbb{R}^2))} \le CT\left( Z_T(\psi) + Z_T(\psi)^{1+2\alpha} \right) \le CT\left( 1 + \big(2\sqrt{2R} + M\big)^{2\alpha} \right)\big(2\sqrt{2R} + M\big).
\]
To bound $\nabla\Phi(u)$, we apply the Strichartz estimates (2.25) concatenated with (3.4) to obtain
\[
(3.11)\qquad \|\nabla\Phi(u)\|_{L^\infty([0,T];L^2(\mathbb{R}^2))} \le C\|\nabla N(\psi)\|_{N^0([0,T]\times\mathbb{R}^2)} \le C\left( T + T^{\frac{1}{q_1'}} Z_T(\psi)^{2\alpha} \right)\|\nabla\psi\|_{L^\infty_t L^2_x} \le C\left( T + T^{\frac{1}{q_1'}}\big(2\sqrt{2R} + M\big)^{2\alpha} \right)\big(2\sqrt{2R} + M\big).
\]
We conclude that $\Phi(u) \in C([0,T]; H^1(\mathbb{R}^2))$, and summing up (3.10) and (3.11), we obtain that
\[
\|\Phi(u)\|_{X_T} \le C\left( T + T^{\frac{1}{q_1'}}\big(2\sqrt{2R} + M\big)^{2\alpha} \right)\big(2\sqrt{2R} + M\big).
\]
Next, we check that the map $\Phi$ defines a contraction on $(X_T, d_X)$. Let $u_1, u_2 \in X_T$ and denote $\psi_1 = e^{\frac{i}{2}t\Delta}\psi_0 + u_1$, $\psi_2 = e^{\frac{i}{2}t\Delta}\psi_0 + u_2$. Upon applying (2.25) followed by (3.5), one has
\[
d_X(\Phi(u_1), \Phi(u_2)) = \left\| -i\int_0^t e^{\frac{i}{2}(t-s)\Delta}\left( N(\psi_1) - N(\psi_2) \right)(s)\,ds \right\|_{L^\infty([0,T];L^2(\mathbb{R}^2))} \le C\,\|N(\psi_1) - N(\psi_2)\|_{N^0([0,T]\times\mathbb{R}^2)} \le C\left( T + T^{\frac{1}{q_1'}}\big(2\sqrt{2R} + M\big)^{2\alpha} \right) d_X(u_1, u_2).
\]
We fix $M = \sqrt{2R}$ and notice that there exists $0 < T \le 1$ sufficiently small such that
\[
C\left( T + T^{\frac{1}{q_1'}}\big(3\sqrt{2R}\big)^{2\alpha} \right) \le \frac{1}{3}.
\]
Hence, $\Phi$ maps $X_T$ into itself and defines a contraction on $X_T$. The Banach fixed-point theorem yields a unique $u \in X_T$ such that $e^{\frac{i}{2}t\Delta}\psi_0 + u$ is a solution to (3.7). It follows from Lemma 2.1 and (2.23) that $e^{\frac{i}{2}t\Delta}\psi_0 + u \in C([0,T]; E(\mathbb{R}^2))$. In particular, $\psi - \psi_0 \in C([0,T]; H^1(\mathbb{R}^2))$ from (2.21) and $u \in X_T$.

Uniqueness. Let $\psi_1, \psi_2 \in C([0,T]; E(\mathbb{R}^2))$ be two solutions to (1.1) with initial data $\psi_1(0) = \psi_2(0) = \psi_0 \in E(\mathbb{R}^2)$. One has that
\[
(3.12)\qquad \psi_1(t) - \psi_2(t) = -i\int_0^t e^{\frac{i}{2}(t-s)\Delta}\left( N(\psi_1) - N(\psi_2) \right)(s)\,ds.
\]
In particular, as the nonlinear terms are bounded in $L^\infty_t H^1_x(\mathbb{R}^2)$, one has $\psi_1 - \psi_2 \in L^\infty([0,T]; H^1(\mathbb{R}^2))$. For $(q_1', r_1')$ given by (3.1), the Strichartz estimate (2.25) together with (3.5) then yields
\[
\|\psi_1 - \psi_2\|_{L^\infty_t L^2_x} \le C\,\|N(\psi_1) - N(\psi_2)\|_{N^0([0,T]\times\mathbb{R}^2)} \le C\left( T + T^{\frac{1}{q_1'}}\left( Z_T(\psi_1)^{2\alpha} + Z_T(\psi_2)^{2\alpha} \right) \right)\|\psi_1 - \psi_2\|_{L^\infty_t L^2_x}.
\]
Hence, we deduce that there exists $T_1 > 0$ such that $\psi_1 = \psi_2$ a.e. on $[0,T_1] \times \mathbb{R}^2$. As $T_1$ only depends on $Z_T(\psi_1)$ and $Z_T(\psi_2)$, one may iterate the argument to obtain uniqueness of the solution on the interval $[0,T]$.
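The absorption step left implicit in the uniqueness argument can be spelled out: once the prefactor is made at most $1/2$ by shrinking the time interval, the right-hand side is absorbed into the left.

```latex
% Choose 0 < T_1 \le T so small that
C\Big( T_1 + T_1^{\frac{1}{q_1'}}\big( Z_T(\psi_1)^{2\alpha} + Z_T(\psi_2)^{2\alpha} \big) \Big)
  \le \tfrac{1}{2};
% then the previous estimate, restricted to [0, T_1], reads
\|\psi_1 - \psi_2\|_{L^\infty([0,T_1];L^2_x)}
  \le \tfrac{1}{2}\, \|\psi_1 - \psi_2\|_{L^\infty([0,T_1];L^2_x)},
% which forces \|\psi_1 - \psi_2\|_{L^\infty([0,T_1];L^2_x)} = 0,
% i.e. \psi_1 = \psi_2 a.e. on [0, T_1] \times \mathbb{R}^2.
```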
Blow-up alternative. Let $\psi_0 \in E(\mathbb{R}^2)$ and define
\[
T^*(\psi_0) = \sup\left\{ T > 0 : \text{there exists a solution to (1.1) on } [0,T] \right\}.
\]
Let $T^*(\psi_0) < +\infty$ and assume that there exist $R > 0$ and a sequence $\{t_n\}_{n\in\mathbb{N}}$ such that $t_n \to T^*(\psi_0)$ and $E(\psi(t_n)) \le R$ for all $n \in \mathbb{N}$. Then there exists $n$ sufficiently large such that the local existence statement allows us to uniquely extend the solution to $[0, t_n + T(R)]$ with $t_n + T(R) > T^*(\psi_0)$. This violates the maximality assumption, and we conclude that $E(\psi(t_n)) \to \infty$ as $t_n \to T^*(\psi_0)$, if $T^*(\psi_0) < +\infty$.

The proof of the continuous dependence of the solution on the initial data requires some auxiliary statements and is postponed until after Lemma 3.4. □

We introduce estimates on the nonlinear flow in Strichartz norms that are required for the proof of the continuous dependence on the initial data. The estimates used for the local existence and uniqueness in the proof of Proposition 3.2 are not sufficient, since they only allow one to control the difference of solutions $\psi_1, \psi_2$ provided that $\psi_1 - \psi_2 \in L^\infty([0,T]; L^2(\mathbb{R}^2))$. In addition, as the regularity properties of $N$ do not suffice to control $\|\nabla\Phi(\psi_1) - \nabla\Phi(\psi_2)\|_{L^\infty_t L^2_x}$ for $\psi_1, \psi_2 \in C([0,T]; E(\mathbb{R}^2))$, we need to rely on an auxiliary metric.
Lemma 3.3. Let $f$ satisfy Assumption 1.1, $T > 0$, $(q_1', r_1')$ as defined in (3.1), and $\psi_1, \psi_2 \in C([0,T]; E(\mathbb{R}^2))$. Then there exists $\theta \in (0,1]$ such that
\[
\|N(\psi_1) - N(\psi_2)\|_{N^0([0,T]\times\mathbb{R}^2)} \le CT^\theta\left( 1 + Z_T(\psi_1) + Z_T(\psi_2) + Z_T(\psi_1)^{2\alpha} + Z_T(\psi_2)^{2\alpha} \right) \times \left( \big\||\psi_1| - |\psi_2|\big\|_{L^2([0,T];L^2(\mathbb{R}^2))} + \|\psi_1 - \psi_2\|_{L^2([0,T];L^\infty + L^2(\mathbb{R}^2))} \right).
\]

Proof. First, we notice that it follows from the first inequality of (2.36) and the decomposition provided by Lemma 2.1 that
\[
\|N_1(\psi_1) - N_1(\psi_2)\|_{L^1_t L^2_x + L^{4/3}_t L^{4/3}_x} \le C\left( T^{\frac{1}{2}} + T^{\frac{1}{4}} Z_T(\psi_1) \right)\big\||\psi_1| - |\psi_2|\big\|_{L^2_t L^2_x} + CT^{\frac{1}{2}}\left( 1 + Z_T(\psi_2) \right)\|\psi_1 - \psi_2\|_{L^\infty_t(L^\infty_x + L^2_x)},
\]
where we used that $\big||\psi_2| - 1\big|\,\eta(|\psi_2|) \in L^\infty([0,T]; L^\infty(\mathbb{R}^2) \cap L^2(\mathbb{R}^2))$. Indeed, let $\Omega \subset \mathbb{R}^2$ be of finite Lebesgue measure and $f \in L^\infty(\Omega) + L^p(\Omega)$; then
\[
\|f\|_{L^p(\Omega)} \le C\left( 1 + \mathcal{L}^2(\Omega)^{\frac{1}{p}} \right)\|f\|_{L^p(\Omega) + L^\infty(\Omega)}.
\]
Second, we observe that $\mathcal{L}^2(\operatorname{supp}(N_2(\psi_i))) \le E(\psi_i)$ for $i = 1, 2$ from (2.8). From (2.36), we conclude
\[
\|N_{2,\infty}(\psi_1) - N_{2,\infty}(\psi_2)\|_{L^1_t L^2_x} \le CT\left( 1 + Z_T(\psi_1) + Z_T(\psi_2) \right)\|\psi_1 - \psi_2\|_{L^\infty_t(L^\infty_x + L^2_x)}.
\]
Third, arguing as in the proof of Lemma 3.1 and exploiting that $\mathcal{L}^2(\operatorname{supp}(N_2(\psi_i))) \le E(\psi_i)$, we obtain
\[
\|N_{2,q}(\psi_1) - N_{2,q}(\psi_2)\|_{L^1_t L^2_x + L^{q_1'}_t L^{r_1'}_x} \le \left\| \mathbf{1}_{\operatorname{supp}(1-\chi(\psi_1)) \cup \operatorname{supp}(1-\chi(\psi_2))} |\psi_1 - \psi_2| \right\|_{L^1_t L^2_x} + \left\| \left( |\psi_{1,q}|^{2\alpha} + |\psi_{2,q}|^{2\alpha} \right)|\psi_1 - \psi_2| \right\|_{L^{q_1'}_t L^{r_1'}_x} \le C\big(T + T^{\frac{1}{q_1'}}\big)\left( Z_T(\psi_1) + Z_T(\psi_2) + Z_T(\psi_1)^{2\alpha} + Z_T(\psi_2)^{2\alpha} \right)\|\psi_1 - \psi_2\|_{L^\infty_t(L^\infty_x + L^2_x)}. \qquad \square
\]
Concatenating the Strichartz estimates (2.25) and Lemma 3.3 gives the following.

Lemma 3.4. Given $\psi_1, \psi_2 \in C([0,T]; E(\mathbb{R}^2))$ such that $Z_T(\psi_i) \le M$ for $i = 1, 2$, there exist $C = C(M) > 0$ and $\theta \in (0,1]$ such that
\[
(3.13)\qquad \|\Phi(\psi_1) - \Phi(\psi_2)\|_{S^0([0,T]\times\mathbb{R}^2)} \le C_M T^\theta\left( \|\psi_1 - \psi_2\|_{L^\infty_t(L^\infty_x + L^2_x)} + \big\||\psi_1| - |\psi_2|\big\|_{L^2_t L^2_x} \right).
\]
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' We are now in position to complete the proof of Proposition 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Note that the metric space (E, dE) is not separable, see also Remark 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' In particular, it is not sufficient to show sequential continuity of the solution map.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Proof of Proposition 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='2 continued.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' We prove continuous dependence on the initial data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Given ψ∗ 0 ∈ E(R2), let R := E(ψ∗ 0) and r ∈ (0, √ R].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Denote (3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='14) Or := {ψ0 ∈ E(R2) : dE(ψ∗ 0, ψ0) < r}.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' WELL-POSEDNESS FOR NLS WITH NON-VANISHING CONDITIONS AT INFINITY 25 If follows that E(ψ0) ≤ 4E(ψ∗ 0) for all ψ0 ∈ Or.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' The first statement of Proposition 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='2 then yields that there exists T = T (4E(ψ∗ 0)) > 0 such that for all ψ0 ∈ Or there exists a unique strong solution ψ ∈ C([0, T ];' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' E(R2)).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' In particular, for ψ0 ∈ Or the maximal time satisfies T ∗(ψ0) ≥ T (4E(ψ∗ 0)) > 0 by virtue of the blow-up alternative.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Hence, T ∗(Or) = inf ψ0∈Or T ∗(ψ0) ≥ T (4E(ψ∗ 0)) > 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Given δ > 0 to be chosen later, let Oδ be defined as in (3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='14).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Let us remark (again) that, for any ψ0 ∈ Oδ, we have E(ψ0) ≤ 2(R + δ2).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' In particular, T ∗(ψ0) ≥ T ∗(Oδ) > 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Let ψ1 0, ψ2 0 ∈ Oδ and denote by ψ1, ψ2 the respective solutions defined at least up to time T ∗(Oδ).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' For any 0 < T < T ∗(Oδ) there exists M = M(T ) > 0 such that ZT (ψ1) + ZT (ψ2) ≤ M, by virtue of the blow-up alternative.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' From (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='22), we have that there exists C = C(R, δ, T ) > 0 such that (3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='15) sup t∈[0,T ] dE(e i 2 t∆ψ1 0, e i 2 t∆ψ2 0) ≤ CdE(ψ1 0, ψ2 0) ≤ 2Cδ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' To prove continuous dependence of the solution, we proceed in the following four steps that are necessary in order to compensate for the lack of Lipschitz regularity of ∇N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' (1) There exist C > 0 and 0 < T1 < T ∗(Oδ), only depending on M such that (3.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='16) ∥ψ1 − ψ2∥L∞([0,T1];' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='L∞+L2(R2)) + ∥|ψ1| − |ψ2|∥L2([0,T1];' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='L2(R2)) ≤ CdE(ψ1 0, ψ2 0).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' (2) Provided (3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='16) holds, for all ε > 0 there exist T2 = T2(M) > 0 and δ > 0 such that dE(ψ1 0, ψ2 0) < δ implies (3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='17) ∥∇ψ1 − ∇ψ2∥L∞([0,T2];' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='L2(R2)) < ε.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' (3) Provided (3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='16) and (3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='17) hold, for all ε > 0 there exists δ > 0 such that dE(ψ1 0, ψ2 0) < δ implies (3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='18) sup t∈[0,T2] dE(ψ1(t), ψ2(t)) < ε.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' (4) The estimate (3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='18) implies that for all 0 < T < T ∗(Oδ) and ε > 0, there exists δ > 0 such that dE(ψ1 0, ψ2 0) < δ yields (3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='19) sup t∈[0,T ] dE(ψ1(t), ψ2(t)) < ε, Step 1 We show (3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='16).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Let us consider the first term on the left hand side of (3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='16), by using (3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='15) and from Lemma 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='4, we know there exists θ > 0 such that (3.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='20) ∥ψ1 − ψ2∥L∞([0,T ];' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='L∞+L2(R2)) ≤ ∥e i 2 t∆ψ1 0 − e i 2 t∆ψ2 0∥L∞([0,T ];' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='L∞+L2(R2)) + ∥Φ(ψ1) − Φ(ψ2)∥L∞([0,T ],L2(R2)) ≤ CdE(ψ1 0, ψ2 0) + CMT θ � ∥ψ1 − ψ2∥L∞([0,T ];' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='(L∞+L2(R2)) + ∥|ψ1| − |ψ2|∥L2([0,T ];' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='L2(R2) � .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' 26 P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' ANTONELLI, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' HIENTZSCH, AND P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' MARCATI Given χ defined in (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='1), we define χ6(z) := χ(6z).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Arguing as in the proof of Lemma 2.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='6 we notice that (3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='21) ∥|ψ1| − |ψ2|∥L2([0,T ];' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='L2(R2)) ≤ ��|ψ1|2 − |ψ2|2�� L2([0,T ];' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='L2(R2)) + ∥ψ1χ6(ψ1) − ψ2χ6(ψ2)∥L2([0,T ];' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='L2(R2)) To deal with the first contribution on the right-hand side, we notice that ��|ψ1|2 − |ψ2|2�� ≤ ���|e i 2 t∆ψ1 0|2 − |e i 2 t∆ψ2 0|2��� + ���2 Re � e− i 2 t∆ψ2 0 (Φ(ψ2) − Φ(ψ1)) ���� + ���2 Re � e− i 2 t∆(ψ2 0 − ψ1 0)Φ(ψ1) ���� + (|Φ(ψ1)| + |Φ(ψ2)|) |Φ(ψ1) − Φ(ψ2)| .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' We control these four terms separately.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' First, from (3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='15), one has that ���|e i 2 t∆ψ1 0|2 − |e i 2 t∆ψ2 0|2��� L2 tL2x ≤ CT 1 2 dE(ψ1 0, ψ2 0).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Second, upon splitting e i 2 t∆ψi 0 ∈ E(R2) as in (2.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='2) we have ���2 Re � e− i 2 t∆ψ2 0 (Φ(ψ2) − Φ(ψ1)) ���� L2 tL2x ≤ T 1 2 ∥Φ(ψ2) − Φ(ψ1)∥L∞ t L2x + T 1 4 ZT (e i 2 t∆ψ2 0)∥Φ(ψ2) − Φ(ψ1)∥L4 tL4x ≤ CM � T 1 2 ∥Φ(ψ2) − Φ(ψ1)∥L∞ t L2x + T 1 4 ∥Φ(ψ1) − Φ(ψ2)∥L4 tL4 x � .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Third, proceeding similarly and exploiting (3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='15) we have ���2 Re � e− i 2 t∆(ψ2 0 − ψ1 0)Φ(ψ1) ���� L2 tL2x ≤ C � T 1 2 ∥Φ(ψ1)∥L∞ t L2x + T 1 4 ∥Φ(ψ2)∥L4 tL4 x � dE(e i 2 t∆ψ1 0, e i 2 t∆ψ2 0) ≤ C � T 1 2 ∥Φ(ψ1)∥L∞ t L2x + T 1 4 ∥Φ(ψ1)∥L4 tL4x � dE(ψ1 0, ψ2 0) ≤ C(T 1 2 + T 1 4 )(M + M 1+2α)dE(ψ1 0, ψ2 0), where we used that Φ(ψ1) ∈ L∞([0, T ];' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' L2(R2)) ∩ L4([0, T ];' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' L4(R2)) from (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='25).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Fourth, one has ∥ (|Φ(ψ1)| + |Φ(ψ2)|) |Φ(ψ2) − Φ(ψ1)| ∥L2 tL2x ≤ � ∥Φ(ψ1)∥L4 tL4x + ∥Φ(ψ2)∥L4 tL4x � ∥Φ(ψ1) − Φ(ψ2)∥L4 tL4x ≤ CT � M + M 1+2α� ∥Φ(ψ1) − Φ(ψ2)∥L4 tL4x, where we used (3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='3) in the last inequality.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Combining the previous inequalities, we infer that there exists θ1 > 0 (3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='22) ��|ψ1|2 − |ψ2|2�� L2 tL2x ≤ CT θ1 � 1 + M + M 1+2α� × � dE(ψ1 0, ψ2 0) + ∥Φ(ψ1) − Φ(ψ2)∥L∞ t L2x + ∥Φ(ψ1) − Φ(ψ2)∥L4 tL4 x � .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' The second contribution in (3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='21) is bounded as follows (3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='23) ∥ψ1χ6(ψ1) − ψ2χ6(ψ2)∥L2 tL2x ≤ CT 1 2 (1 + M) dE(e i 2 t∆ψ1 0, e i 2 t∆ψ2 0) + CT 1 2 ∥Φ(ψ1) − Φ(ψ2)∥L∞ t L2 x ≤ CT 1 2 (1 + M) � dE(ψ1 0, ψ2 0) + ∥Φ(ψ1) − Φ(ψ2)∥L∞ t L2x � , WELL-POSEDNESS FOR NLS WITH NON-VANISHING CONDITIONS AT INFINITY 27 where we exploited that for ψ ∈ E(R2) the measure of the support of χ6(ψ) is bounded by E(ψ), see (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='8).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' It follows from (3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='21), (3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='22) and (3.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='23) that there exists θ2 > 0 such that (3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='24) ∥|ψ1| − |ψ2|∥L2([0,T ];' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='L2(R2)) ≤ CT θ2 � 1 + M + M 1+2α� × � dE(ψ1 0, ψ2 0) + ∥Φ(ψ1) − Φ(ψ2)∥L∞L2 + ∥Φ(ψ1) − Φ(ψ2)∥L4 tL4 x � .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Summing up (3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='20) and (3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='24) and applying (3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='13) yields that there exists θ > 0 such that ∥ψ1 − ψ2∥L∞ t (L∞ x +L2 x) + ∥|ψ1| − |ψ2|∥L2 tL2 x ≤ CMT θ × � dE(ψ1 0, ψ2 0) + CMT θ � ∥ψ1 − ψ2∥L∞ t (L∞ x +L2x) + ∥|ψ1| − |ψ2|∥L2 tL2x �� .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' For T1 > 0 sufficiently small, only depending on M, inequality (3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='16) follows.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Step 2.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Note that ∇ψ1 − ∇ψ2 = e i 2 t∆ � ∇ψ1 0 − ∇ψ2 0 � − i � t 0 e i 2 (t−s)∆ (∇N(ψ1) − ∇N(ψ2)) (s)ds.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' We estimate the difference of the free solutions by (3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='25) ���e i 2 t∆ � ∇ψ1 0 − ∇ψ2 0 ���� L∞([0,T ],L2(R2)) ≤ dE(ψ1 0, ψ2 0), exploiting that e i 2 t∆ is an isometry on L2(R2).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' We recall from (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='33) that ∇N(ψ) = � f(|ψ|2) + f ′(|ψ|2)|ψ|2� ∇ψ + f ′(|ψ|2)ψ2∇ψ, which can be bounded by means of (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='34) as |∇N(ψ)| ≤ C(1 + |ψ|2α)|∇ψ| ≤ C(1 + |ψq|2α)|∇ψ|.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' We apply estimate (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='25) to the non-homogeneous term, where (q1, r1)) = ( 2(α+1) α , 2(α+ 1)), see also (3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='1).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' We decompose ∇N(ψ1) − ∇N(ψ2) by means of the functions G∞, Gq defined in (2.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='38) leading to (3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='26) ����i � t 0 e i 2 (t−s)∆ (∇N(ψ1) − ∇N(ψ2)) (s)ds ���� L∞([0,T ];' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='L2(R2)) ≤ ∥(G∞ + Gq)(ψ2) |∇ψ1 − ∇ψ2|∥N 0 + ∥((G∞ + Gq)(ψ1) − (G∞ + Gq)(ψ2)) |∇ψ1|∥N 0([0,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='T ]×R2)) ≤ ∥∇ψ1 − ∇ψ2∥L1 tL2x + ∥|ψ2,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='q|2α |∇ψ1 − ∇ψ2| ∥ L q′ 1 t L r′ 1 x + ∥(G∞(ψ1) − G∞(ψ2)) |∇ψ1|∥L1 tL2 x + ∥(Gq(ψ1) − Gq(ψ2)) |∇ψ1|∥ L q′ 1 t L r′ 1 x ≤ C � T + T 1 q′ 1 ZT (ψ1)2α) � ∥∇ψ1 − ∇ψ2∥L∞ t L2x + ∥(G∞(ψ1) − G∞(ψ2)) |∇ψ1|∥L1 tL2 x + ∥(Gq(ψ1) − Gq(ψ2)) |∇ψ1|∥ L q′ 1 t L r′ 1 x Thus,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' for T2 = T2(M) > 0 sufficiently small so that C � T2 + T 1 q′ 2 ZT (ψ2)2α � ≤ C � T2 + T 1 q′ 2 M 2α � ≤ 1 2,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' we conclude by combining (3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='25) and (3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='26) that 28 P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' ANTONELLI, L.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' HIENTZSCH, AND P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' MARCATI ∥∇ψ1 − ∇ψ2∥L∞([0,T2],L2(R2)) ≤ dE(ψ1 0, ψ2 0) + ∥(G∞(ψ1) − G∞(ψ2)) |∇ψ1|∥L1 tL2x + ∥(Gq(ψ1) − Gq(ψ2)) |∇ψ1|∥ L q′ 1 t L r′ 1 x .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' In order to conclude Step 2, we need to show that the second line above can be made arbitrarily small by choosing a sufficiently small δ > 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' We proceed by contradiction, assuming that there exist ε > 0, a sequence {δn}n∈N and {ψn 0 }n∈N ⊂ E(R2) such that dE(ψ1 0, ψn 0 ) < δn → 0 and for all n sufficiently large, (3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='27) ∥(G∞(ψ1) − G∞(ψn)) |∇ψ1|∥L1 tL2x+∥(Gq(ψ1) − Gq(ψn)) |∇ψ1|∥ L q′ 1 t L r′ 1 x ≥ ε, where ψn ∈ C([0, T ];' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' E(R2)) denotes the unique maximal solution with ψn(0) = ψn 0 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Inequality (3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='16) implies that, up to extracting a subsequence, not relabeled, ψn converges to ψ1 a.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' on [0, T1] × R2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' If 0 < T1 < T2, then set T2 := T1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' By virtue of Assumption 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='1 on f, it follows that G∞, Gq are continuous and thus |(G∞(ψ1) − G∞(ψn))| |∇ψ1| → 0 a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' in [0, T2] × R2, |Gq(ψ1) − Gq(ψn)| |∇ψ1| → 0 a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' in [0, T2] × R2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Since in addition one has ∥Gq(ψn)∥L∞ t Lq1 x (R2) ≤ C ��(1 + |ψq,n|2α)(1 − χ(ψn) �� L∞ t Lq1 x (R2) ≤ C � ZT (ψn) + ZT (ψn)2α� ≤ C � M + M 2α� for all n ∈ N, we obtain from (3.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='16) that there exists φ ∈ L∞([0, T ];' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Lr1(R2)) such that |ψq,n| ≤ φ a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' on [0, T2) × R2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Therefore, |(G∞(ψ1) − G∞(ψn))| |∇ψ1| ≤ C|∇ψ1| ∈ L1([0, T );' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' L2(R2)), |(Gq(ψ1) − Gq(ψn))| |∇ψ1| ≤ C � |ψ1|2α + |φ|2α� |∇ψ1| ∈ Lq′ 1([0, T );' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Lr′ 1(R2)), so that the dominated convergence Theorem then implies that (3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='27) is violated.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' The inequality (3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='17) follows for the time interval [0, T2] where we stress that T2 > 0 only depends on M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Step 3.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Given that (3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='16) and (3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='17) are satisfied, it suffices to prove that, for any ε > 0, there exists δ > 0 such that dE(ψ1 0, ψ2 0) < δ implies ∥|ψ1| − |ψ2|∥L∞([0,T2];' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='L2(R2)) < ε.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Note that (3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='16) only yields ∥|ψ1| − |ψ2|∥L2([0,T2];' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='L2(R2)) < Cδ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' We recall that ψi(t) = e i 2 t∆ψi 0 + Φ(ψi), where e i 2 t∆ψi 0 ∈ C([0, T ];' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' E(R2)) and Φ(ψi) ∈ C([0, T ];' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' H1(R2)) for i = 1, 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' More precisely, ZT (e i 2 t∆ψi 0) ≤ 2 √ 2 � E(ψ0 i ).' 
It follows from (2.14) that
∥|ψ^1| − |ψ^2|∥_{L^∞_t L^2_x} ≤ C(1 + √(E(ψ^1_0)) + √(E(ψ^2_0)) + ∥Φ(ψ^1)∥_{L^∞_t H^1_x} + ∥Φ(ψ^2)∥_{L^∞_t H^1_x}) × (d_E(e^{(i/2)t∆}ψ^1_0, e^{(i/2)t∆}ψ^2_0) + ∥Φ(ψ^1) − Φ(ψ^2)∥_{L^∞_t H^1_x})
≤ C(1 + 2√R + δ + 2M + 2M^{1+2α}) (d_E(ψ^1_0, ψ^2_0) + ∥Φ(ψ^1) − Φ(ψ^2)∥_{L^∞_t H^1_x}),
where we used (2.22) in the last inequality. We are left to show that for all ε > 0 there exists δ > 0 such that d_E(ψ^∗_0, ψ_0) < δ yields ∥Φ(ψ^1) − Φ(ψ^2)∥_{L^∞_t H^1_x} < ε. The statement follows by combining (3.13) and (3.16) and observing that
∥∇Φ(ψ^1) − ∇Φ(ψ^2)∥_{L^∞_t L^2_x} ≤ ∥∇ψ^1 − ∇ψ^2∥_{L^∞_t L^2_x} + sup_{t∈[0,T_2]} d_E(e^{(i/2)t∆}ψ^1_0, e^{(i/2)t∆}ψ^2_0),
followed by (3.17) and (3.15).
This completes Step 3.
Step 4: Note that Step 3 yields continuous dependence on the initial data w.r.t. the topology of E induced by the metric d_E on a time interval [0, T_2], where T_2 only depends on M. One may hence cover [0, T ] by the union of intervals [t_k, t_{k+1}] with t_k = kT_2 for k ∈ {0, ..., N − 1}, with N = ⌈T/T_2⌉ finite. For all ε > 0, there exists δ_N > 0 such that d_E(ψ^1(t_{N−1}), ψ^2(t_{N−1})) < δ_N yields sup_{t∈[t_{N−1},T]} d_E(ψ^1(t), ψ^2(t)) < ε. Next, there exists δ_{N−1} > 0 such that d_E(ψ^1(t_{N−2}), ψ^2(t_{N−2})) < δ_{N−1} yields sup_{t∈[t_{N−2},t_{N−1}]} d_E(ψ^1(t), ψ^2(t)) < δ_N.
One may then iterate the scheme finitely many times in order to recover δ = δ_1 > 0 such that d_E(ψ^1_0, ψ^2_0) < δ implies sup_{t∈[0,T]} d_E(ψ^1(t), ψ^2(t)) < ε. It remains to show that for O_r = {ψ_0 ∈ E(R^2) : d_E(ψ^∗_0, ψ_0) < r} it holds lim inf_{r→0} T^∗(O_r) ≥ T^∗(ψ^∗_0). This property is an immediate consequence of Step 4. The proof of Proposition 3.2 is complete. □
We proceed to show a persistence of regularity property for (1.1) under the general Assumption 1.1. Subsequently, we prove the conservation of the Hamiltonian energy H.
Lemma 3.5. Let f be as in Assumption 1.1 and ψ_0 ∈ E(R^2) such that ∆ψ_0 ∈ L^2(R^2). Then, the unique maximal solution ψ ∈ C([0, T^∗); E(R^2)) to (1.1) satisfies ∆ψ ∈ C([0, T^∗); L^2(R^2)), ∂_tψ ∈ C([0, T^∗); L^2(R^2)). Furthermore, the Hamiltonian is conserved, namely H(ψ(t)) = H(ψ_0) for all t ∈ [0, T^∗).
Proof. Let ψ_0 ∈ E(R^2) such that ∆ψ_0 ∈ L^2(R^2). Proposition 3.2 provides a T^∗ > 0 such that there exists a unique maximal strong solution ψ ∈ C([0, T^∗); E(R^2)) to (1.1) with initial data ψ(0) = ψ_0. The blow-up alternative yields that for any T ∈ [0, T^∗) there exists M > 0 such that Z_T ≤ M, defined in (3.2). First, we show that there exists T_1 ∈ (0, T ] only depending on Z_T(ψ) such that ∂_tψ ∈ C([0, T_1]; L^2(R^2)). Exploiting that ψ ∈ C([0, T ]; E(R^2)) we obtain i∂_tψ(0) = −(1/2)∆ψ_0 + N(ψ_0). We claim that ∂_tψ(0) ∈ L^2(R^2).
We note that ∆ψ_0 ∈ L^2(R^2) by assumption yields ψ_0 ∈ X^2 + H^2(R^2) ⊂ X^2(R^2) ⊂ L^∞(R^2). It follows from (3.3) that ∥N(ψ_0)∥_{L^2(R^2)} ≤ C(√(E(ψ_0)) + E(ψ_0)^{1/2+α}). By differentiating the Duhamel formula (3.7) in time and applying Corollary 2.13 one has
∂_tψ(t) = e^{(i/2)t∆} ((i/2)∆ψ(0) − iN(ψ)(0)) − i ∫_0^t e^{(i/2)s∆} ∂_t N(ψ)(t − s) ds
        = e^{(i/2)t∆} (∂_tψ(0)) + ∫_0^t e^{(i/2)(t−s)∆} (G_1(ψ)∂_tψ + G_2(ψ)∂_tψ)(s) ds,
where G_1, G_2 are as defined in (2.37).
Hence,
∥∂_tψ∥_{L^∞([0,T]; L^2(R^2))} ≤ ∥∂_tψ(0)∥_{L^2(R^2)} + ∥G_1(ψ)∂_tψ + G_2(ψ)∂_tψ∥_{N^0([0,T]×R^2)}.
Upon exploiting the estimates (2.39) on G_1, G_2 and following the lines of the proof of Lemma 3.1, we conclude that
∥G_1(ψ)∂_tψ + G_2(ψ)∂_tψ∥_{N^0([0,T]×R^2)} ≤ C∥G_∞(ψ)|∂_tψ|∥_{L^1_t L^2_x} + ∥G_q(ψ)|∂_tψ|∥_{N^0} ≤ C∥∂_tψ∥_{L^1_t L^2_x} + ∥(1 + |ψ|^{2α})|∂_tψ|∥_{N^0} ≤ CT∥∂_tψ∥_{L^∞_t L^2_x} + T^{1/q′_1} Z_T(ψ)^{2α} ∥∂_tψ∥_{L^∞_t L^2_x}.
Thus, there exists 0 < T_1 < T only depending on Z_T(ψ) such that
(T_1 + T_1^{1/q′_1}) (1 + Z_T(ψ)^{2α}) < 1/2,
and ∥∂_tψ∥_{L^∞([0,T_1]; L^2(R^2))} ≤ 2∥∂_tψ(0)∥_{L^2(R^2)}.
Second, we deduce a space-time bound for ∆ψ.
More precisely,
∥∆ψ∥_{L^∞([0,T_1]; L^2(R^2))} ≤ ∥∂_tψ∥_{L^∞([0,T_1]; L^2(R^2))} + ∥N(ψ)∥_{L^∞([0,T_1]; L^2(R^2))} ≤ ∥∂_tψ∥_{L^∞([0,T_1]; L^2(R^2))} + (T_1 + T_1^{1/q′_1}) (Z_T(ψ) + Z_T(ψ)^{2α}),
by virtue of (3.3). As ∂_tψ ∈ C([0, T_1]; L^2(R^2)) it then follows ∆ψ ∈ C([0, T_1]; L^2(R^2)).
Third, we show that H(ψ(t)) = H(ψ_0) for all t ∈ [0, T_1]. To that end, we compute the L^2-scalar product of (1.1) with ∂_tψ and take the real part to infer
0 = Re⟨i∂_tψ, ∂_tψ⟩ = Re⟨−(1/2)∆ψ + N(ψ), ∂_tψ⟩,
for any t ∈ [0, T_1]. We notice that all terms are well-defined and conclude that for all t ∈ [0, T_1] the Hamiltonian energy is conserved, namely
0 = (d/dt) ∫_{R^d} (1/2)|∇ψ|^2 + F(|ψ|^2) dx.
As T_1 > 0 only depends on Z_T(ψ), the procedure above may be implemented starting from any t_0 ∈ [0, T − T_1], covering the time interval [0, T ] by finitely many sub-intervals. It follows that H(ψ) is constant in time on each of them. Since ψ ∈ C([0, T ]; E(R^2)), by continuity one concludes that H(ψ)(t) = H(ψ_0) for all t ∈ [0, T ]. □
The results of this section then yield the proof of Theorem 1.4 for d = 2.
Proof of Theorem 1.4 in 2D. For d = 2, the first three statements follow from Proposition 3.2, while the fourth and fifth are provided by Lemma 3.5. □
3.2. Global well-posedness. Assuming the nonlinear potential in (1.3) to be non-negative, we show that the Cauchy problem associated to (1.1) is globally well-posed in the space E(R^2), which completes the proof of Theorem 1.5 for d = 2.
First, we show that the regular solutions provided by Lemma 3.5 are global.
Corollary 3.6. Under the same assumptions of Lemma 3.5, let in addition the nonlinear potential F, defined in (1.3), be non-negative, namely F ≥ 0. Then, the solution constructed in Lemma 3.5 is global, i.e. T^∗ = +∞.
Proof.
Let ψ ∈ C([0, T^∗); E(R^2)) denote the unique maximal solution to (1.1) with initial data ψ(0) = ψ_0 ∈ E(R^2). Since H(ψ)(t) = H(ψ_0) for all t ∈ [0, T^∗), it follows from Lemma 2.8 that there exists an increasing function g : (0, ∞) → (0, ∞) with lim_{r→0} g(r) = 0 such that
(3.28) E(ψ)(t) ≤ g(H(ψ)(t)) = g(H(ψ)(0)) = g(H(ψ_0)) < +∞
for all t ∈ [0, T^∗). The blow-up alternative then yields that T^∗ = +∞. In addition, ψ enjoys the bounds ∂_tψ ∈ C([0, T ]; L^2(R^2)) and ∆ψ ∈ C([0, T ]; L^2(R^2)) for any T > 0, as well as H(ψ(t)) = H(ψ_0) for all t ∈ [0, ∞).
□
Second, we prove Theorem 1.5 for d = 2. More precisely, by exploiting continuous dependence on the initial data we show that the Hamiltonian energy is conserved for solutions in the energy space and deduce global existence.
Proof of Theorem 1.5. Note that to complete the proof of the theorem it suffices to prove that the Hamiltonian energy is conserved for all solutions ψ ∈ C([0, T^∗); E(R^2)). Global existence then follows by arguing as in the proof of Corollary 3.6.
To that end, given initial data ψ_0 ∈ E(R^2) and the unique solution ψ ∈ C([0, T^∗); E(R^2)) to (1.1) such that ψ(0) = ψ_0, we observe that thanks to Lemma 2.10 there exists {ψ^n_0} ⊂ E(R^2) ∩ C^∞(R^2) such that ∆ψ^n_0 ∈ L^2(R^2) and d_E(ψ_0, ψ^n_0) converges to 0 as n goes to infinity. Lemma 3.5 provides a sequence of unique global solutions ψ^n ∈ C(R, E(R^2)) such that H(ψ^n)(t) = H(ψ^n_0) for all n. Relying on the continuous dependence on the initial data, we conclude that for any 0 < T < T^∗ one has sup_{t∈[0,T]} d_E(ψ(t), ψ^n(t)) → 0 as n → ∞. Hence, E(ψ^n)(t) → E(ψ(t)) for all t ∈ [0, T ]. Similarly, conservation of the Hamiltonian energy H(ψ) follows from H(ψ^n)(t) → H(ψ)(t) for all t ∈ [0, T ].
In particular, Lemma 2.8 yields an increasing function g : (0, ∞) → (0, ∞) with lim_{r→0} g(r) = 0 such that
E(ψ)(t) ≤ 2E(ψ^n)(t) ≤ 2g(H(ψ^n)(t)) = 2g(H(ψ^n_0)) ≤ C,
for all t ∈ [0, T ] and n sufficiently large. By means of the blow-up alternative we conclude that the solution is global, namely ψ ∈ C(R, E(R^2)). □
4. 3D well-posedness
The approach to prove well-posedness for d = 3 differs from the one for d = 2 in two aspects. First, we need to exploit that the nonlinear flow belongs to the full range of Strichartz spaces S^1([0, T ] × R^3), defined in (2.28). In particular, exploiting also (2.29) we use that ∇ψ ∈ L^q([0, T ]; L^r(R^3)) for some r > 2.
For d = 3, it is not sufficient to work in L^2-based function spaces, at least for super-cubic nonlinearities. Second, Proposition 2.2 yields an affine structure for the energy space E(R^3). This allows for several simplifications of the well-posedness proofs compared to Proposition 3.2. In this section, let
(4.1) (q, r) = (4(α + 1)/(3α), 2(α + 1))
and note that (q, r) is Schrödinger admissible.
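As a quick sanity check (added here, not part of the original text), the pair in (4.1) satisfies the standard Schrödinger admissibility relation 2/q + d/r = d/2 in dimension d = 3:

```latex
% Admissibility of (q, r) = (4(\alpha+1)/(3\alpha),\; 2(\alpha+1)) for d = 3:
\frac{2}{q} + \frac{3}{r}
  = \frac{6\alpha}{4(\alpha+1)} + \frac{3}{2(\alpha+1)}
  = \frac{3\alpha + 3}{2(\alpha+1)}
  = \frac{3}{2} = \frac{d}{2}.
```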
We recall that the Strichartz spaces N^0 and N^1 are defined in (2.27) and (2.28) respectively, and the quantity Z_T(ψ) in (3.2).
Proposition 4.1. Let d = 3 and f be such that Assumption 1.1 is satisfied. Then,
(1) for any ψ_0 ∈ E(R^3) there exists a maximal existence time T^∗ = T^∗(ψ_0) > 0 and a unique maximal solution ψ ∈ C([0, T^∗); E(R^3)) of (1.1).
The blow-up alternative holds, namely if $T^* < \infty$ then $\lim_{t \nearrow T^*} E(\psi)(t) = +\infty$.
(2) for any $0 < T < T^*(\psi_0)$ it holds $\psi - \psi_0 \in C([0, T]; H^1(\mathbb{R}^3))$, $\nabla\psi \in S^0([0, T] \times \mathbb{R}^3)$. Moreover, the nonlinear flow satisfies
\[
\psi(t) - e^{\frac{i}{2}t\Delta}\psi_0 \in C([0, T]; H^1(\mathbb{R}^3)) \cap S^1([0, T] \times \mathbb{R}^3).
\]
(3) the solution depends continuously on the initial data, namely if $\{\psi_0^n\}_{n\in\mathbb{N}} \subset E(\mathbb{R}^3)$ is such that $d_E(\psi_0^n, \psi_0) \to 0$, then for any $0 < T < T^*(\psi_0)$ it holds that $\sup_{t\in[0,T]} d_E(\psi^n(t), \psi(t)) \to 0$, where $\psi^n$ denotes the unique local solution such that $\psi^n(0) = \psi_0^n$.

The affine structure of the energy space, see Proposition 2.2, allows one to reduce the well-posedness of the Cauchy problem for (1.1) to the well-posedness of an affine problem in $F_c(\mathbb{R}^3)$, see Lemma 4.2 and Remark 4.3 below. However, we only exploit this property for the proof of the continuous dependence on the initial data. Note that due to the affine structure it suffices to show sequential continuity.

Proof. To show existence of a local strong solution $\psi$, it suffices to implement a fixed-point argument for the map
\[
\Phi(u)(t) = i \int_0^t e^{\frac{i}{2}(t-s)\Delta} N\big(e^{\frac{i}{2}s\Delta}\psi_0 + u(s)\big)\, ds. \tag{4.2}
\]
Indeed, if $u \in C([0, T]; H^1(\mathbb{R}^3))$ is a fixed point of (4.2), then $\psi(t) = e^{\frac{i}{2}t\Delta}\psi_0 + u(t)$ is such that $\psi \in C([0, T]; E(\mathbb{R}^3))$ due to Lemma 2.1 and $\psi$ is a local strong solution of (1.1).

Local existence. Fixing $(q, r)$ as in (4.1), we implement a fixed-point argument for (4.2) in
\[
X_T = \big\{ u \in C([0, T]; H^1(\mathbb{R}^3)) \cap L^q([0, T]; W^{1,r}(\mathbb{R}^3)),\ u(0) = 0,\ \|u\|_{X_T} \le M \big\}
\]
with $\|\cdot\|_{X_T} = \|\cdot\|_{L^\infty([0,T]; H^1(\mathbb{R}^3))} + \|\cdot\|_{L^q([0,T]; W^{1,r}(\mathbb{R}^3))}$.
WELL-POSEDNESS FOR NLS WITH NON-VANISHING CONDITIONS AT INFINITY

Equipped with the distance function
\[
d_X(u, v) = \|u - v\|_{L^\infty([0,T]; L^2(\mathbb{R}^3))} + \|u - v\|_{L^q([0,T]; L^r(\mathbb{R}^3))},
\]
the space $(X_T, d_X)$ is a complete metric space. Let $\psi_0 \in E(\mathbb{R}^3)$ with $E(\psi_0) \le R$, where $M > 0$ and $0 < T \le 1$ are to be fixed later. First, we verify that $\Phi : X_T \to X_T$. To that end, we recall that for $T = T(R) > 0$ sufficiently small
\[
Z_T\big(e^{\frac{i}{2}t\Delta}\psi_0 + u\big) \le Z_T\big(e^{\frac{i}{2}t\Delta}\psi_0\big) + \|u\|_{H^1(\mathbb{R}^3)} \le 2\sqrt{2E(\psi_0)} + M \le 2\sqrt{2R} + M,
\]
where $Z_T$ is defined in (3.2) and (2.3) and (2.23) have been applied in the first and second inequality respectively. It follows from (2.25) that
\[
\|\Phi(u)(t)\|_{L^\infty_t L^2_x} + \|\Phi(u)(t)\|_{L^q_t L^r_x} \le 2\big\|N\big(e^{\frac{i}{2}t\Delta}\psi_0 + u\big)\big\|_{N^0}.
\]
Defining $N_1, N_2$ as in (2.30) and exploiting the pointwise bounds (2.32), we infer
\[
\big\|N_1\big(e^{\frac{i}{2}t\Delta}\psi_0 + u\big)\big\|_{L^1_t L^2_x} \le C T\, Z_T\big(e^{\frac{i}{2}t\Delta}\psi_0 + u\big) \le C T \big(2\sqrt{2R} + M\big).
\]
Next, using again (2.32) and the Chebyshev inequality (2.8) one has
\[
\big\|N_{2,\infty}\big(e^{\frac{i}{2}t\Delta}\psi_0 + u\big)\big\|_{L^1_t L^2_x} \le C T\, \mathcal{L}^3\Big(\mathrm{supp}\big(1 - \eta(e^{\frac{i}{2}t\Delta}\psi_0 + u)\big)\Big)^{\frac12} \le C T \big(2\sqrt{2R} + M\big)
\]
and
\begin{align*}
\big\|N_{2,q}\big(e^{\frac{i}{2}t\Delta}\psi_0 + u\big)\big\|_{L^1_t L^2_x + L^{q'}_t L^{r'}_x}
&\le \Big\|\big(1 + |e^{\frac{i}{2}t\Delta}\psi_0 + u|^{2\alpha}\big)\,\big|e^{\frac{i}{2}t\Delta}\psi_0 + u\big|\,\big(1 - \chi(e^{\frac{i}{2}t\Delta}\psi_0 + u)\big)\Big\|_{L^1_t L^2_x + L^{q'}_t L^{r'}_x} \\
&\le C T \big(2\sqrt{2R} + M\big) + \Big\|\,\big|\big(e^{\frac{i}{2}t\Delta}\psi_0 + u\big)\big(1 - \chi(e^{\frac{i}{2}t\Delta}\psi_0 + u)\big)\big|^{2\alpha+1}\Big\|_{L^{q'}_t L^{r'}_x} \\
&\le C T \big(2\sqrt{2R} + M\big) + C T^{\frac{q-q'}{qq'}} \big\|(e^{\frac{i}{2}t\Delta}\psi_0 + u)_q\big\|^{2\alpha}_{L^\infty_t L^r_x} \big\|(e^{\frac{i}{2}t\Delta}\psi_0 + u)_q\big\|_{L^q_t L^r_x} \\
&\le C \Big(T + T^{\frac{q-q'}{qq'}} \big(2\sqrt{2R} + M\big)^{2\alpha}\Big) \big(2\sqrt{2R} + M\big).
\end{align*}
Moreover, Assumption 1.1, see also (2.33), implies the bound $|\nabla N(\psi)| \le C(1 + |\psi|^{2\alpha})|\nabla\psi|$, which allows one to infer that
\[
\big\|\nabla N_1\big(e^{\frac{i}{2}t\Delta}\psi_0 + u\big) + \nabla N_{2,\infty}\big(e^{\frac{i}{2}t\Delta}\psi_0 + u\big)\big\|_{L^1_t L^2_x} \le C T \big(\|\nabla\psi_0\|_{L^\infty_t L^2_x} + \|\nabla u\|_{L^\infty_t L^2_x}\big) \le C T \big(2\sqrt{2R} + M\big).
\]
To control $\nabla N_{2,q}$, note that $e^{\frac{i}{2}t\Delta}\nabla\psi_0 \in L^q([0, T]; L^r(\mathbb{R}^3))$ for any admissible pair $(q, r)$ from Lemma 2.14 and $E(\psi_0) \le R$. Therefore,
\begin{align*}
\big\|\nabla N_{2,q}\big(e^{\frac{i}{2}t\Delta}\psi_0 + u\big)\big\|_{L^1_t L^2_x + L^{q'}_t L^{r'}_x}
&\le C T \big(\|\nabla\psi_0\|_{L^2} + \|u\|_{X_T}\big) + C \Big( \big\||(e^{\frac{i}{2}t\Delta}\psi_0 + u)_q|^{2\alpha}\, \nabla e^{\frac{i}{2}t\Delta}\psi_0\big\|_{L^{q'}_t L^{r'}_x} + \big\||(e^{\frac{i}{2}t\Delta}\psi_0 + u)_q|^{2\alpha}\, \nabla u\big\|_{L^{q'}_t L^{r'}_x} \Big) \\
&\le C T \big(2\sqrt{2R} + M\big) + C T^{\frac{q-q'}{qq'}} \big(2\sqrt{2R} + M\big)^{2\alpha} \big(\|\nabla\psi_0\|_{L^2_x} + \|\nabla u\|_{L^q_t L^r_x}\big) \\
&\le C \Big(T + T^{\frac{q-q'}{qq'}} \big(2\sqrt{2R} + M\big)^{2\alpha}\Big) \big(2\sqrt{2R} + M\big).
\end{align*}

P. ANTONELLI, L.E. HIENTZSCH, AND P. MARCATI

Finally,
\[
\|\Phi(u)\|_{X_T} \le C \Big(T + T^{\frac{q-q'}{qq'}} \big(2\sqrt{2R} + M\big)^{2\alpha}\Big) \big(2\sqrt{2R} + M\big).
\]
We proceed to show that $\Phi$ defines a contraction on $X_T$. Let $\psi_0 \in E(\mathbb{R}^3)$ such that $E(\psi_0) \le R$ and $u, v \in X_T$. Then,
\[
d_X(\Phi(u), \Phi(v)) \le \big\|N\big(e^{\frac{i}{2}t\Delta}\psi_0 + u\big) - N\big(e^{\frac{i}{2}t\Delta}\psi_0 + v\big)\big\|_{N^0}.
\]
Inequality (2.35) implies that
\[
\big\|N_1\big(e^{\frac{i}{2}t\Delta}\psi_0 + u\big) - N_1\big(e^{\frac{i}{2}t\Delta}\psi_0 + v\big)\big\|_{L^1_t L^2_x} \le C T\, d_X(u, v)
\]
and
\[
\big\|N_{2,\infty}\big(e^{\frac{i}{2}t\Delta}\psi_0 + u\big) - N_{2,\infty}\big(e^{\frac{i}{2}t\Delta}\psi_0 + v\big)\big\|_{L^1_t L^2_x} \le C T\, d_X(u, v).
\]
Again inequality (2.35) allows us to control the remaining term as follows:
\begin{align*}
\big\|N_{2,q}\big(e^{\frac{i}{2}t\Delta}\psi_0 + u\big) - N_{2,q}\big(e^{\frac{i}{2}t\Delta}\psi_0 + v\big)\big\|_{L^1_t L^2_x + L^{q'}_t L^{r'}_x}
&\le C T \|u - v\|_{L^\infty_t L^2_x} + C T^{\frac{q-q'}{qq'}} \Big( Z_T\big(e^{\frac{i}{2}t\Delta}\psi_0 + u\big)^{2\alpha} + Z_T\big(e^{\frac{i}{2}t\Delta}\psi_0 + v\big)^{2\alpha} \Big) \|u - v\|_{L^q_t L^r_x} \\
&\le C \Big(T + T^{\frac{q-q'}{qq'}} \big(2\sqrt{2R} + M\big)^{2\alpha}\Big) d_X(u, v).
\end{align*}
Finally,
\[
d_X(\Phi(u), \Phi(v)) \le C \Big(T + T^{\frac{q-q'}{qq'}} \big(2\sqrt{2R} + M\big)^{2\alpha}\Big) d_X(u, v).
\]
Therefore, it suffices to set $M = \sqrt{R}$ and to choose $T = T(M) > 0$ sufficiently small in order to conclude that $\Phi : X_T \to X_T$ and $\Phi$ defines a contraction on $X_T$. The Banach fixed-point theorem yields a unique solution $u \in X_T$ to (4.2).
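The local existence step above closes with the Banach fixed-point theorem. As a hedged illustration of the mechanism only (a toy scalar contraction, not the Duhamel map $\Phi$ itself), Picard iteration of any contraction on a complete metric space converges to its unique fixed point:

```python
import math


def picard_iterate(phi, x0, tol=1e-12, max_iter=10_000):
    """Iterate x_{n+1} = phi(x_n). For a contraction (Lipschitz constant
    k < 1) the Banach fixed-point theorem guarantees convergence to the
    unique fixed point, with geometric error decay."""
    x = x0
    for _ in range(max_iter):
        x_next = phi(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("no convergence within max_iter")


# cos maps [0, 1] into itself with |cos'| <= sin(1) < 1 there, so it is a
# contraction on [0, 1]; the fixed point is the Dottie number, about 0.739.
fp = picard_iterate(math.cos, 1.0)
print(abs(math.cos(fp) - fp) < 1e-10)  # True
```

The proof's map $\Phi$ plays the role of `phi` here, with the metric $d_X$ in place of the absolute value; the choice of $M$ and small $T$ is exactly what makes its Lipschitz constant smaller than one.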
In particular, $\psi(t) = e^{\frac{i}{2}t\Delta}\psi_0 + u(t)$ solves (1.1) with $\psi \in C([0, T]; E(\mathbb{R}^3))$.

Uniqueness. For $R > 0$ fixed, let $\psi_0 \in E(\mathbb{R}^3)$ with $E(\psi_0) \le R$ and $\psi_1, \psi_2 \in C([0, T]; E(\mathbb{R}^3))$ two solutions to (1.1) such that $\psi_1(0) = \psi_2(0) = \psi_0$. We note that $\psi_1 - \psi_2 \in S^1([0, T] \times \mathbb{R}^3)$. In particular, from the Strichartz estimate (2.25) and arguing as for the local existence we obtain that
\[
d_X(\psi_1, \psi_2) \le \|N(\psi_1) - N(\psi_2)\|_{N^0([0,T]\times\mathbb{R}^3)} \le C \Big(T + T^{\frac{q-q'}{qq'}} \big(Z_T(\psi_1)^{2\alpha} + Z_T(\psi_2)^{2\alpha}\big)\Big) d_X(\psi_1, \psi_2).
\]
Thus, there exists $T_1 > 0$ sufficiently small such that $\psi_1 = \psi_2$ a.e. on $[0, T_1] \times \mathbb{R}^3$. As $T_1$ only depends on $Z_T(\psi_i)$ with $i = 1, 2$, one may iterate the argument. This yields uniqueness in $C([0, T]; E(\mathbb{R}^3))$.

Blow-up alternative. The proof of the blow-up alternative follows verbatim the proof of the respective statement for $d = 2$, see Proposition 3.2, and is omitted.

Membership in Strichartz spaces. Statement (2) of Proposition 4.1 follows directly from the local existence argument and the properties of the free solution, see (2.21) and (2.29).

The proof of the continuous dependence on the initial data requires some preliminary properties and is postponed after Lemma 4.4. □

In view of the equivalent characterisation of the energy space $E(\mathbb{R}^3)$ provided by Proposition 2.2, the well-posedness for (1.1) can be reduced to the well-posedness of the following "affine" problem.

Lemma 4.2. Given $\psi_0 \in E(\mathbb{R}^3)$, let $\psi \in C([0, T^*); E(\mathbb{R}^3))$ be the unique maximal solution to (1.1) with initial data $\psi_0$. Then, there exist $|c| = 1$ and $v \in C([0, T^*); F_c)$ such that $\psi(t) = c + v(t)$ for all $t \in [0, T^*)$, where $v$ is a solution to
\[
i\partial_t v = -\tfrac{1}{2}\Delta v + f(|c + v|^2)(c + v), \quad v(0) = v_0. \tag{4.3}
\]

Proof. The unique maximal solution exists by virtue of Proposition 4.1; Proposition 2.2 yields the decomposition $\psi(t) = c(t) + v(t)$ for some $|c(t)| = 1$ and $v(t) \in F_c$ for all $t \in [0, T^*)$. In particular, $c(0) = c$ and $v(0) = v_0$. It suffices to show that $c(t) = c$ for all $t \in [0, T^*)$. From statement (2) of Proposition 4.1 we infer $\psi - \psi_0 \in C([0, T]; H^1(\mathbb{R}^3))$ for all $0 < T < T^*$, namely $\psi(t) = c(t) + v(t)$ and $\psi_0 = c + v_0$ share the same far-field behavior for all $t \in [0, T]$. It follows that $c(t) = c$ for all $t \in [0, T]$ with $0 < T < T^*$. □

Given initial data $\psi_0 = c + v_0$, the solution $\psi$ satisfies $\psi = e^{\frac{i}{2}t\Delta}\psi_0 + \Phi(\psi) \in \{c\} + F_c(\mathbb{R}^3) + H^1(\mathbb{R}^3)$. The connected component of $E(\mathbb{R}^3)$ the solution $\psi$ belongs to is determined by the constant $c$, see Remark 2.3. Moreover, if $\psi = c + v \in C([0, T); E(\mathbb{R}^3))$ solves (1.1), then $\tilde\psi = \bar{c}\psi = 1 + \bar{c}v$ solves (1.1) and $\tilde{v} = \bar{c}v$ solves
\[
i\partial_t \tilde{v} = -\tfrac{1}{2}\Delta \tilde{v} + f(|1 + \tilde{v}|^2)(1 + \tilde{v}), \quad \tilde{v}(0) = \bar{c}v_0. \tag{4.4}
\]
It therefore suffices to consider $c = 1$.

Remark 4.3. Note that Lemma 4.2 reduces the well-posedness of (1.1) in $E(\mathbb{R}^3)$ to solving the affine problem (4.3) in $F_c$, where the constant $c$ is determined by the choice of the initial data. In particular, the continuous dependence on the initial data can be stated equivalently in terms of the metric (2.7) with the constants $c_1, c_2$ determined by the initial data. If the nonlinearity is such that $f$ satisfies (1.13), then it is convenient to implement the well-posedness result in homogeneous spaces by exploiting Strichartz estimates on the gradient, see also [22, Remark 4.5] for (1.5) and [35, Proposition 1.1.18] for (1.1) with nonlinearity (1.4). Indeed, Assumption (1.13) ensures that $\nabla N$ is locally Lipschitz.
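This remark leans on Strichartz estimates for admissible pairs, including the concrete pair $(q, r) = (10, \frac{30}{13})$ cited from [35]. As a hedged numerical aside (the admissibility condition stated next is the standard Schrödinger one in dimension three, $\frac{2}{q} = 3(\frac{1}{2} - \frac{1}{r})$ with $q, r \ge 2$; it is our restatement, not quoted from this excerpt), the condition can be checked in exact rational arithmetic:

```python
from fractions import Fraction


def is_admissible_3d(q, r):
    """Schrodinger-Strichartz admissibility in dimension d = 3:
    2/q = 3*(1/2 - 1/r) with q, r >= 2, i.e. the scaling relation
    2/q + 3/r = 3/2. Exact Fractions avoid floating-point false negatives."""
    q, r = Fraction(q), Fraction(r)
    return q >= 2 and r >= 2 and 2 / q == 3 * (Fraction(1, 2) - 1 / r)


print(is_admissible_3d(10, Fraction(30, 13)))  # True: the pair from [35]
print(is_admissible_3d(2, 6))                  # True: the endpoint pair in 3D
print(is_admissible_3d(4, 4))                  # False: fails the scaling relation
```

Working over `Fraction` matters here: with floats, `2/10 == 3*(1/2 - 13/30)` could fail purely from rounding even though the pair is admissible.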
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' A suitable choice of the functional spaces for the local well-posedness is given by XT = C([0, T ];' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Fc(R3)) ∩ Lq([0, T ];' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' ˙W 1,r(R3)), where the Strichartz admissible pair is for instance (q, r) = (10, 30 13), see [35, Propo- sition 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='18].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' However, in the framework of Assumption 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='1, this is ruled out by the lack of regularity of the nonlinearity f.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' More precisely, for ∇N to be locally Lipschitz we require (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='13).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' We proceed to the proof of continuous dependence on the initial data for which we exploit the decomposition of ψ given by Lemma 4.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' 36 P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' ANTONELLI, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' HIENTZSCH, AND P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' MARCATI Lemma 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Let f satisfy Assumption 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='1, T > 0, (q, r) as defined in (4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='1) and ψ1, ψ2 ∈ C([0, T ];' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' E(R3)) such that ψi = ci + vi with ci ∈ C, |ci| = 1 and vi ∈ C([0, T ];' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Fc) for i = 1, 2.' 
Then, there exists $\theta \in (0,1]$ such that
\[
\|N(\psi_1) - N(\psi_2)\|_{N^0([0,T]\times\mathbb{R}^3)} \le C T^{\theta}\left(1 + Z_T(\psi_1) + Z_T(\psi_2) + Z_T(\psi_1)^{2\alpha} + Z_T(\psi_2)^{2\alpha}\right) \times \left(|c_1 - c_2| + \|v_1 - v_2\|_{L^2_t L^6_x} + \||\psi_1| - |\psi_2|\|_{L^2_t L^2_x}\right).
\]
Proof. First, we notice that for $N_1, N_2$ defined in (2.30), it follows from the first inequality of (2.36) and the decomposition $\psi_i = c_i + v_i$ provided by Lemma 4.2 that
\[
\begin{aligned}
\|N_1(\psi_1) - N_1(\psi_2)\|_{L^1_t L^2_x + L^{4/3}_t L^{3/2}_x}
&\le \left\||c_1 + v_1|\,\bigl||\psi_1| - |\psi_2|\bigr|\right\|_{L^1_t L^2_x + L^{4/3}_t L^{3/2}_x} + \left\|\bigl||\psi_2| - 1\bigr|\,|c_1 - c_2 + v_1 - v_2|\right\|_{L^1_t L^2_x + L^{4/3}_t L^{3/2}_x} \\
&\le C\left(T^{1/2} + T^{1/4} Z_T(\psi_1)\right)\||\psi_1| - |\psi_2|\|_{L^2_t L^2_x} + C T^{1/4} Z_T(\psi_2)\,|c_1 - c_2| + C T^{1/2} Z_T(\psi_2)\,\|v_1 - v_2\|_{L^2_t L^6_x}.
\end{aligned}
\]
Second, we observe that $\mathcal{L}^3(\operatorname{supp}(N_2(\psi_i))) \le Z_T(\psi_i)^2$ for $i = 1, 2$ from (2.8). From (2.36), we conclude
\[
\|N_{2,\infty}(\psi_1) - N_{2,\infty}(\psi_2)\|_{L^1_t L^2_x} \le C T\left(Z_T(\psi_1) + Z_T(\psi_2)\right)|c_1 - c_2| + C T^{1/2}\left(Z_T(\psi_1)^{2/3} + Z_T(\psi_2)^{2/3}\right)\|v_1 - v_2\|_{L^2_t L^6_x}.
\]
Third, we show the desired bound for $N_{2,q}(\psi_1) - N_{2,q}(\psi_2)$. As $|\psi_i| \ge \tfrac{3}{2}$ on $\operatorname{supp}(N_{2,q}(\psi_i))$, it follows from (2.36) that
\[
|N_{2,q}(\psi_1) - N_{2,q}(\psi_2)| \le C\left(1 + |\psi_1|^{2\alpha} + |\psi_2|^{2\alpha}\right)|\psi_1 - \psi_2| \le C\left(|\psi_1|^{\beta} + |\psi_2|^{\beta}\right)|\psi_1 - \psi_2|,
\]
with $\beta = \max\{2, 2\alpha\}$. Hence, it suffices to consider $\alpha \in [1, 2)$. We observe that
\[
|N_{2,q}(\psi_1) - N_{2,q}(\psi_2)| \le C\left(1 + |\psi_{1,q}|^{2\alpha} + |\psi_{2,q}|^{2\alpha}\right)|\psi_1 - \psi_2|,
\]
see also (3.6). Using again that $\mathcal{L}^3(\operatorname{supp}(N_2(\psi_i))) \le Z_T(\psi_i)^2$, one recovers
\[
\begin{aligned}
\|N_{2,q}(\psi_1) - N_{2,q}(\psi_2)\|_{N^0}
&\le \|\psi_1 - \psi_2\|_{L^1_t L^2_x} + \left\|\left(|\psi_{1,q}|^{2\alpha} + |\psi_{2,q}|^{2\alpha}\right)|c_1 - c_2|\right\|_{L^{4/3}_t L^{3/2}_x} + \left\|\left(|\psi_{1,q}|^{2\alpha} + |\psi_{2,q}|^{2\alpha}\right)|v_1 - v_2|\right\|_{L^{\frac{2}{3-\alpha}}_t L^{\frac{6}{2\alpha+1}}_x} \\
&\le C T\left(Z_T(\psi_1) + Z_T(\psi_2)\right)|c_1 - c_2| + C T^{1/2}\left(Z_T(\psi_1)^{2/3} + Z_T(\psi_2)^{2/3}\right)\|v_1 - v_2\|_{L^2_t L^6_x} \\
&\quad + C\left(Z_T(\psi_1)^{2\alpha} + Z_T(\psi_2)^{2\alpha}\right)\left(T^{3/4}|c_1 - c_2| + T^{\frac{2-\alpha}{2}}\|v_1 - v_2\|_{L^2_t L^6_x}\right).
\end{aligned}
\]
Combining the previous estimates, one concludes that there exists $\theta \in (0,1]$ such that
\[
\|N(\psi_1) - N(\psi_2)\|_{N^0} \le C T^{\theta}\left(1 + Z_T(\psi_1) + Z_T(\psi_2) + Z_T(\psi_1)^{2\alpha} + Z_T(\psi_2)^{2\alpha}\right) \times \left(|c_1 - c_2| + \|v_1 - v_2\|_{L^2_t L^6_x} + \||\psi_1| - |\psi_2|\|_{L^2_t L^2_x}\right).
\]
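As a rough indication of where $\theta$ comes from (a sketch, not part of the original argument): assuming without loss of generality $T \le 1$, every prefactor collected above — the powers $T$, $T^{3/4}$, $T^{1/2}$, $T^{1/4}$ and $T^{\frac{2-\alpha}{2}}$ — is dominated by the smallest exponent that occurs, so one admissible choice is
\[
\theta = \min\left\{\frac14,\ \frac{2-\alpha}{2}\right\} > 0 \qquad \text{for } \alpha \in [1,2).
\]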
□

WELL-POSEDNESS FOR NLS WITH NON-VANISHING CONDITIONS AT INFINITY

We now prove continuous dependence on the initial data. As in the proof of Proposition 3.2, we rely on an auxiliary metric to compensate for the lack of regularity of the nonlinearity $f$ and to deal with the non-integrability of the wave functions. However, by virtue of Lemma 4.2, it suffices to consider the affine problem (4.3). This decomposition enables us to implement an argument in $L^2([0,T]; L^6(\mathbb{R}^3))$. In particular, it is sufficient to prove sequential continuity.

Proof of Proposition 4.1 continued. Let $R > 0$, $\psi_0 \in E(\mathbb{R}^3)$ with $E(\psi_0) \le R$ and $\psi_0^n \in E(\mathbb{R}^3)$ such that $E(\psi_0^n) \le R$ and $d_E(\psi_0, \psi_0^n) \to 0$. In particular, there exist complex constants $|c| = 1$, $|c^n| = 1$ and $v_0, v_0^n \in F_c$ such that $\psi_0 = c + v_0$, $\psi_0^n = c^n + v_0^n$. It follows from the equivalence of metrics, see Proposition 2.2, that $\delta(c + v_0, c^n + v_0^n) \to 0$, where $\delta$ is defined in (2.7). There exists $T = T(2E(\psi_0)) > 0$ such that the unique solutions $\psi, \psi^n \in C([0,T]; E(\mathbb{R}^3))$ to (1.1) with initial data $\psi_0, \psi_0^n$ respectively satisfy $Z_T(\psi) + Z_T(\psi^n) \le M$ for sufficiently large $n$. Then, Lemma 4.2 implies that there exist $v, v^n \in C([0,T]; F_c)$ such that $\psi = c + v$, $\psi^n = c^n + v^n$. The proof follows the same lines as the proof of Proposition 3.2. We proceed in three steps, corresponding to (3.16), (3.17) and (3.18) respectively.

Step 1: We show that there exists $T_1 = T_1(M) > 0$ such that
\[
(4.5)\qquad \|v - v^n\|_{L^2([0,T_1];L^6(\mathbb{R}^3))} + \||\psi| - |\psi^n|\|_{L^2([0,T_1];L^2(\mathbb{R}^3))} \le C\,\delta(c + v_0, c^n + v_0^n).
\]
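Before estimating the two contributions in (4.5), note the elementary bound used repeatedly below, which combines Hölder's inequality in the time variable on $[0,T]$ with the Sobolev embedding $\dot H^1(\mathbb{R}^3) \hookrightarrow L^6(\mathbb{R}^3)$:
\[
\|g\|_{L^2([0,T];L^6(\mathbb{R}^3))} \le T^{1/2}\,\|g\|_{L^\infty([0,T];L^6(\mathbb{R}^3))} \le C\,T^{1/2}\,\|\nabla g\|_{L^\infty([0,T];L^2(\mathbb{R}^3))}.
\]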
For the first contribution, we observe that
\[
\begin{aligned}
\|v - v^n\|_{L^2([0,T];L^6(\mathbb{R}^3))} &= \left\|e^{\frac{i}{2}t\Delta}\psi_0 - c + \Phi(\psi) - e^{\frac{i}{2}t\Delta}\psi_0^n + c^n - \Phi(\psi^n)\right\|_{L^2_t L^6_x} \\
&\le \left\|e^{\frac{i}{2}t\Delta}(\psi_0 - \psi_0^n) - (\psi_0 - \psi_0^n)\right\|_{L^2_t L^6_x} + \|v_0 - v_0^n\|_{L^2_t L^6_x} + \|N(\psi) - N(\psi^n)\|_{N^0} \\
&\le C\left(T + T^{1/2}\right)\delta(c + v_0, c^n + v_0^n) + \|N(\psi) - N(\psi^n)\|_{N^0},
\end{aligned}
\]
where we used (2.25) in the second to last inequality and (2.21) to control the difference of the free solutions in the last inequality. More precisely,
\[
\left\|e^{\frac{i}{2}t\Delta}(\psi_0 - \psi_0^n) - (\psi_0 - \psi_0^n)\right\|_{L^2_t L^6_x} \le T^{1/2}\left\|e^{\frac{i}{2}t\Delta}(\nabla\psi_0 - \nabla\psi_0^n) - (\nabla\psi_0 - \nabla\psi_0^n)\right\|_{L^\infty_t L^2_x} \le C T\,\|\nabla\psi_0 - \nabla\psi_0^n\|_{L^2_x} \le C T\,\delta(c + v_0, c^n + v_0^n).
\]
To bound the second contribution in (4.5), we proceed as in (3.21). More precisely, we observe that (3.24) remains valid upon replacing the admissible Strichartz pair $(4,4)$ for $d = 2$ by $(\tfrac{8}{3}, 4)$ for $d = 3$. Hence, the respective version of (3.24) reads that there exists $\theta_2 \in (0,1]$ such that
\[
(4.6)\qquad \||\psi| - |\psi^n|\|_{L^2([0,T];L^2(\mathbb{R}^3))} \le C T^{\theta_2}\left(1 + M + M^{1+2\alpha}\right) \times \left(\delta(c + v_0, c^n + v_0^n) + \|\Phi(\psi) - \Phi(\psi^n)\|_{S^0}\right).
\]

P. ANTONELLI, L.E. HIENTZSCH, AND P. MARCATI

Summing up and applying the Strichartz estimate (2.25), we conclude from Lemma 4.4 that there exist $C = C(M) > 0$ and $\theta > 0$ such that
\[
\|v - v^n\|_{L^2([0,T_1];L^6(\mathbb{R}^3))} + \||\psi^n| - |\psi|\|_{L^2([0,T_1];L^2(\mathbb{R}^3))} \le C_M T^{\theta} \times \left(\delta(c + v_0, c^n + v_0^n) + C_M T^{\theta}\left(\|v - v^n\|_{L^2_t L^6_x} + \||\psi^n| - |\psi|\|_{L^2_t L^2_x}\right)\right).
\]
For $T_1 > 0$ sufficiently small, depending only on $M$, inequality (4.5) follows and Step 1 is complete.

Step 2: We show that (4.5) implies that there exists $T_2 = T_2(M) > 0$ such that
\[
(4.7)\qquad \|\nabla v - \nabla v^n\|_{L^\infty([0,T_2];L^2(\mathbb{R}^3))} + \|\nabla v - \nabla v^n\|_{L^q([0,T_2];L^r(\mathbb{R}^3))} \to 0
\]
as $n \to \infty$, where $(q, r)$ is as in (4.1). The proof follows closely the one of (3.17), to which we refer for full details. In view of the Strichartz estimates of Lemma 2.14, it follows that
\[
(4.8)\qquad \left\|\nabla e^{\frac{i}{2}t\Delta}(c + v_0) - \nabla e^{\frac{i}{2}t\Delta}(c^n + v_0^n)\right\|_{L^\infty_t L^2_x} + \left\|\nabla e^{\frac{i}{2}t\Delta}(c + v_0) - \nabla e^{\frac{i}{2}t\Delta}(c^n + v_0^n)\right\|_{L^q_t L^r_x} \le C\,\|\nabla v_0 - \nabla v_0^n\|_{L^2_x}.
\]
To control the non-homogeneous term, we recall that (2.34) yields
\[
|\nabla N(\psi)| \le C\left(1 + |\psi|^{2\alpha}\right)|\nabla\psi| \le C\left(1 + |\psi_q|^{2\alpha}\right)|\nabla\psi|.
\]
More precisely, for $G_\infty, G_q$ defined in (2.38) and upon applying (2.25), we split the non-homogeneous term in
\[
(4.9)\qquad
\begin{aligned}
\left\|i\int_0^t e^{\frac{i}{2}(t-s)\Delta}\left(\nabla N(\psi) - \nabla N(\psi^n)\right)(s)\,ds\right\|_{S^0([0,T]\times\mathbb{R}^3)}
&\le \|G_\infty(\psi)|\nabla v - \nabla v^n|\|_{L^1_t L^2_x} + \|G_q(\psi)|\nabla v - \nabla v^n|\|_{L^{q'}_t L^{r'}_x} \\
&\quad + \|(G_\infty(\psi) - G_\infty(\psi^n))|\nabla v|\|_{L^1_t L^2_x} + \|(G_q(\psi) - G_q(\psi^n))|\nabla v|\|_{L^{q'}_t L^{r'}_x} \\
&\le C T\,\|\nabla v - \nabla v^n\|_{L^\infty_t L^2_x} + C T^{\frac{q-q'}{qq'}} Z_T(\psi)^{2\alpha}\,\|\nabla v - \nabla v^n\|_{L^q_t L^r_x} \\
&\quad + \|(G_\infty(\psi) - G_\infty(\psi^n))|\nabla v|\|_{L^1_t L^2_x} + \|(G_q(\psi) - G_q(\psi^n))|\nabla v|\|_{L^{q'}_t L^{r'}_x}.
\end{aligned}
\]
Thus, for $T_2 > 0$ sufficiently small so that
\[
C\left(T_2 + T_2^{\frac{q-q'}{qq'}} Z_T(\psi)^{2\alpha}\right) \le \frac12,
\]
we conclude from (4.8) and (4.9) that
\[
\begin{aligned}
\|\nabla v - \nabla v^n\|_{L^\infty([0,T_2];L^2(\mathbb{R}^3))} + \|\nabla v - \nabla v^n\|_{L^q([0,T_2];L^r(\mathbb{R}^3))}
&\le C\,\delta(c + v_0, c^n + v_0^n) + \|(G_\infty(\psi) - G_\infty(\psi^n))|\nabla v|\|_{L^1_t L^2_x} \\
&\quad + \|(G_q(\psi) - G_q(\psi^n))|\nabla v|\|_{L^{q'}_t L^{r'}_x}.
\end{aligned}
\]
To conclude that (4.7) holds, it suffices to show that the second line of the right-hand side converges to $0$ as $n$ goes to infinity. We proceed by contradiction, assuming that there exist a subsequence, still denoted $\psi^n$, and $\varepsilon > 0$ such that for all $n$ sufficiently large,
\[
(4.10)\qquad \|(G_\infty(\psi) - G_\infty(\psi^n))|\nabla v|\|_{L^1_t L^2_x} + \|(G_q(\psi) - G_q(\psi^n))|\nabla v|\|_{L^{q'}_t L^{r'}_x} \ge \varepsilon.
\]
Inequality (4.5) implies that, up to extracting a further subsequence, still denoted $\psi^n$, $\psi^n = c^n + v^n$ converges to $\psi = c + v$ a.e. on $[0,T) \times \mathbb{R}^3$. By virtue of Assumption 1.1, $G_\infty$ and $G_q$ are continuous. Therefore,
\[
|G_\infty(\psi) - G_\infty(\psi^n)|\,|\nabla v| \to 0 \quad \text{a.e. in } [0,T) \times \mathbb{R}^3, \qquad |G_q(\psi) - G_q(\psi^n)|\,|\nabla v| \to 0 \quad \text{a.e. in } [0,T) \times \mathbb{R}^3.
\]
Further,
\[
\|G_q(\psi^n)\|_{L^\infty_t L^{\frac{2(\alpha+1)}{2\alpha}}_x(\mathbb{R}^3)} \le C\,\left\||\psi^n|^{2\alpha}(1 - \chi(\psi^n))\right\|_{L^\infty_t L^{\frac{2(\alpha+1)}{2\alpha}}_x(\mathbb{R}^3)} \le \mathcal{L}^3\left(\operatorname{supp}(1 - \chi(\psi^n))\right)^{\frac{\alpha}{\alpha+1}} + \|\psi_{q,n}\|^{2\alpha}_{L^\infty_t L^{2(\alpha+1)}_x} \le C\left(Z_T(\psi^n)^{\frac{2\alpha}{1+\alpha}} + Z_T(\psi^n)^{2\alpha}\right) \le C\left(M^{\frac{2\alpha}{\alpha+1}} + M^{2\alpha}\right),
\]
for all $n \in \mathbb{N}$, where we exploited (2.8), namely that the measure of $\operatorname{supp}(1 - \chi(\psi^n))$ is finite. We obtain that there exists $\varphi \in L^\infty([0,T]; L^{2(\alpha+1)}(\mathbb{R}^3))$ such that $|\psi_{q,n}| \le \varphi$ a.e. on $[0,T) \times \mathbb{R}^3$. Therefore, we control
\[
|G_\infty(\psi) - G_\infty(\psi^n)|\,|\nabla v| \le C|\nabla\psi| \in L^1([0,T); L^2(\mathbb{R}^3)), \qquad |G_q(\psi) - G_q(\psi^n)|\,|\nabla v| \le C\left(|\psi|^{2\alpha} + |\varphi|^{2\alpha}\right)|\nabla\psi| \in L^{q'}([0,T); L^{r'}(\mathbb{R}^3)).
\]
The dominated convergence theorem then implies that (4.10) is violated; hence (4.7) follows and Step 2 is complete.

Step 3: It remains to show that
\[
(4.11)\qquad \||\psi| - |\psi^n|\|_{L^\infty([0,T];L^2(\mathbb{R}^3))} \to 0.
\]
More precisely, we need to upgrade the convergence $\||\psi| - |\psi^n|\|_{L^2([0,T];L^2(\mathbb{R}^3))} \to 0$ so that it holds for almost all times $t \in [0,T]$. The proof follows closely the respective proof for $d = 2$, namely the proof of (3.18). We omit the details. □

Next, we show a persistence of regularity property and that the Hamiltonian energy $H$ is conserved for regular solutions. The proof is completely analogous to the one for $d = 2$, except that here we can exploit the affine structure of the energy space $E$ and that the Sobolev embeddings depend on the dimension. For the sake of clarity, we provide the proof of this lemma.

Lemma 4.5. Let $d = 3$, $f$ as in Assumption 1.1 and $\psi_0 \in E(\mathbb{R}^3)$ such that $\Delta\psi_0 \in L^2(\mathbb{R}^3)$. Then, the unique maximal solution $\psi \in C([0,T^*); E(\mathbb{R}^3))$ satisfies $\Delta\psi \in C([0,T]; L^2(\mathbb{R}^3))$ and $\partial_t\psi \in C([0,T]; L^2(\mathbb{R}^3))$ for all $T \in [0,T^*)$. Moreover, $H(\psi)(t) = H(\psi_0)$ for all $t \in [0,T^*)$.

Proof. In view of Lemma 4.2, one has $\psi(t) = c + v(t)$ for all $t \in [0,T^*)$ and it suffices to consider $v \in C([0,T^*); F_c(\mathbb{R}^3))$ solution to (4.3). The assumption $v_0 \in F_c(\mathbb{R}^3) \cap \dot H^2(\mathbb{R}^3)$ yields that $\partial_t v(0) \in L^2(\mathbb{R}^3)$.
Indeed, by continuity in time one has
\[
i\partial_t v(0) = -\frac12\Delta v(0) + N(c + v)(0).
\]
As $v(0) = v_0 \in F_c(\mathbb{R}^3) \cap \dot H^2(\mathbb{R}^3) \subset L^\infty(\mathbb{R}^3)$, it follows that $N_1(c + v_0) \in L^2(\mathbb{R}^3)$ from (2.32), and that $N_2(c + v_0) \in L^\infty(\mathbb{R}^3)$ and hence in $L^2(\mathbb{R}^3)$ by means of (2.8). By differentiating the Duhamel formula in time and applying Corollary 2.13, it follows that
\[
\partial_t v(t) = e^{\frac{i}{2}t\Delta}\left(\frac{i}{2}\Delta v(0) - iN(c + v)(0)\right) - i\int_0^t e^{\frac{i}{2}s\Delta}\,\partial_t\left(N(c + v)(t - s)\right)ds = e^{\frac{i}{2}t\Delta}\,\partial_t v(0) - i\int_0^t e^{\frac{i}{2}(t-s)\Delta}\left(G_1(c + v)\,\partial_t v + G_2(c + v)\,\overline{\partial_t v}\right)(s)\,ds.
\]
By means of the Strichartz estimates of Lemma 2.14, it follows for the admissible pair $(q,r)$ as in (4.1) and any $0 < T < T^*$ that
\[
\|\partial_t v\|_{L^\infty([0,T];L^2(\mathbb{R}^3))} + \|\partial_t v\|_{L^q([0,T];L^r(\mathbb{R}^3))} \le 2\|\partial_t v(0)\|_{L^2(\mathbb{R}^3)} + \left\|G_1(c + v)\,\partial_t v + G_2(c + v)\,\overline{\partial_t v}\right\|_{N^0([0,T]\times\mathbb{R}^3)},
\]
with $G_1, G_2$ defined in (2.37). Upon splitting $G_i$ into $G_{i,\infty}$ and $G_{i,q}$, as in (2.38), it follows that
\[
\begin{aligned}
\|G_i(c + v)|\partial_t v|\|_{N^0([0,T]\times\mathbb{R}^3)} &\le C T\,\|\partial_t v\|_{L^\infty([0,T];L^2(\mathbb{R}^3))} + \left\||c + v|^{2\alpha}(1 - \chi(c + v))|\partial_t v|\right\|_{N^0([0,T]\times\mathbb{R}^3)} \\
&\le C T\,\|\partial_t v\|_{L^\infty([0,T];L^2(\mathbb{R}^3))} + \left\||(c + v)_q|^{2\alpha}|\partial_t v|\right\|_{L^{q'}([0,T];L^{r'}(\mathbb{R}^3))} \\
&\le C T\,\|\partial_t v\|_{L^\infty([0,T];L^2(\mathbb{R}^3))} + T^{\frac{q-q'}{qq'}} Z_T(c + v)^{2\alpha}\,\|\partial_t v\|_{L^q([0,T];L^r(\mathbb{R}^3))}.
\end{aligned}
\]
Therefore,
\[
\|\partial_t v\|_{L^\infty([0,T];L^2(\mathbb{R}^3))} + \|\partial_t v\|_{L^q([0,T];L^r(\mathbb{R}^3))} \le 2\|\partial_t v(0)\|_{L^2(\mathbb{R}^3)} + C T\,\|\partial_t v\|_{L^\infty([0,T];L^2(\mathbb{R}^3))} + T^{\frac{q-q'}{qq'}} Z_T(c + v)^{2\alpha}\,\|\partial_t v\|_{L^q([0,T];L^r(\mathbb{R}^3))}.
\]
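The time factor $T^{\frac{q-q'}{qq'}}$ is produced by Hölder's inequality in the time variable: for $1 \le q' < q$ and any $h \in L^q([0,T])$,
\[
\|h\|_{L^{q'}([0,T])} \le T^{\frac{1}{q'} - \frac{1}{q}}\,\|h\|_{L^q([0,T])}, \qquad \frac{1}{q'} - \frac{1}{q} = \frac{q - q'}{q q'}.
\]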
For $0 < T_1 < T^*$ sufficiently small, it holds that
\[
\|\partial_t v\|_{L^\infty([0,T_1];L^2(\mathbb{R}^3))} + \|\partial_t v\|_{L^q([0,T_1];L^r(\mathbb{R}^3))} \le 4\|\partial_t v(0)\|_{L^2(\mathbb{R}^3)}.
\]
Further,
\[
\begin{aligned}
\|\Delta v\|_{L^\infty([0,T_1];L^2(\mathbb{R}^3))} &\le 2\|\partial_t v\|_{L^\infty([0,T_1];L^2(\mathbb{R}^3))} + 2\|N(c + v)\|_{L^\infty([0,T_1];L^2(\mathbb{R}^3))} \\
&\le 2\|\partial_t v\|_{L^\infty([0,T_1];L^2(\mathbb{R}^3))} + 4 Z_T(c + v) + \left\||(c + v)_q|^{2\alpha+1}\right\|_{L^\infty([0,T_1];L^2(\mathbb{R}^3))}.
\end{aligned}
\]
Note that $|(c + v)_q| \ge 2$ and $|v| \ge 1$ on $\operatorname{supp}(1 - \chi(c + v))$.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' If α ∈ (0, 1], then ∥|(c + v)q|2α+1∥L∞([0,T1];' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='L2(R3)) ≤ C∥v∥1+2α L∞([0,T1];' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='L6(R3) ≤ CZT (c + v)1+2α.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' If α ∈ (1, 2), then we apply the Gagliardo-Nirenberg inequality to obtain that ∥|(c + v)q|2α+1∥L∞([0,T1];' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='L2(R3)) ≤ C∥v∥2−α L∞([0,T1];' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='L6(R3))∥∆v∥α−1 L∞([0,T1];' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='L2(R3)), where we note that 0 < α − 1 < 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' It follows ∆v ∈ C([0, T1];' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' L2(R3)).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Finally, we conclude that H(c + v)(t) = H(c + v0) by performing the analogue argument as in the proof of Lemma 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='5 for d = 2.' 
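In the case $\alpha \in (1,2)$, the Gagliardo–Nirenberg bound above is closed by absorbing the $\|\Delta v\|^{\alpha-1}$ factor into the left-hand side; a minimal sketch of that step, assuming only the exponent relation $(2-\alpha)+(\alpha-1)=1$:

```latex
% Young's inequality $ab \le \tfrac{a^{p}}{p} + \tfrac{b^{p'}}{p'}$ with the
% conjugate exponents $p = \tfrac{1}{2-\alpha}$, $p' = \tfrac{1}{\alpha-1}$
% (conjugate since $(2-\alpha) + (\alpha-1) = 1$) gives, for any $\varepsilon > 0$,
C\,\|v\|_{L^\infty L^6}^{\,2-\alpha}\,\|\Delta v\|_{L^\infty L^2}^{\,\alpha-1}
  \;\le\; \varepsilon\,\|\Delta v\|_{L^\infty L^2}
        \;+\; C(\varepsilon)\,\|v\|_{L^\infty L^6}.
```

Choosing $\varepsilon$ small, the first term on the right is absorbed into the bound for $\|\Delta v\|_{L^\infty([0,T_1];L^2(\mathbb{R}^3))}$, leaving a quantity controlled by $Z_T(c+v)$.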
$\square$

Proof of Theorem 1.4 in 3D. It only remains to show that the Hamiltonian energy is conserved for all solutions $\psi \in C([0,T^*), E(\mathbb{R}^3))$, which follows from Proposition 4.1, approximation by smooth solutions by means of Lemma 2.10, together with Lemma 4.5. $\square$

WELL-POSEDNESS FOR NLS WITH NON-VANISHING CONDITIONS AT INFINITY

4.1. Global well-posedness. Similar to the 2D case, the lack of a suitable notion of (renormalized) mass and the lack of sign-definiteness of the Hamiltonian energy $H$ constitute the main obstacles to proving global well-posedness. Assuming that $F \ge 0$ allows one to control the functional $E(\cdot)$, in terms of which the blow-up alternative in Proposition 4.1 is stated, by $H(\cdot)$; see Lemma 2.8. Global existence is proven following closely the method detailed in Section 3.2 for $d = 2$.

Corollary 4.6. Let Assumption 1.2 be satisfied and, in addition, let the nonlinear potential $F$, defined in (1.3), be non-negative, namely $F \ge 0$. Then the solution constructed in Proposition 4.1 is global, i.e. $T^* = +\infty$.

This proves Theorem 1.5 for $d = 3$. Exploiting the affine structure of the energy space $E(\mathbb{R}^3)$, we also prove global well-posedness for a class of equations for which the associated nonlinear potential $F(|\psi|^2)$ fails to be non-negative. More precisely, we consider nonlinearities that are defocusing at leading order, such as e.g. competing power-type nonlinearities of the form
\[
f(r) = a_1(r^{\alpha_1} - 1) - a_2(r^{\alpha_2} - 1),
\]
where $a_1, a_2 > 0$ and $0 < \alpha_2 < \alpha_1 < 2$.
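To see why such a nonlinearity is defocusing at leading order, one can factor out the dominant power; a short check, using only $\alpha_1 > \alpha_2$:

```latex
f(r) = a_1 r^{\alpha_1} - a_2 r^{\alpha_2} - (a_1 - a_2)
     = r^{\alpha_1}\bigl(a_1 - a_2\,r^{\alpha_2 - \alpha_1}\bigr) - (a_1 - a_2),
```

so $f(r) \sim a_1 r^{\alpha_1} > 0$ as $r \to \infty$ (defocusing for large intensities), while near $r = 1$ one has $f(1) = 0$ and $f'(1) = a_1\alpha_1 - a_2\alpha_2$, which may be negative, permitting focusing behavior at small intensities.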
Such equations arise, for instance, in nonlinear optics to investigate self-focusing phenomena in a defocusing medium; see [5, 45, 54]. We assume the defocusing nonlinearity to be dominant for large intensities $|\psi|^2 \gg \rho_0$ and focusing phenomena to occur for small intensities $|\psi|^2 \le \rho_0$, where $\rho_0$ is determined by the far-field. Upon a suitable scaling we may assume that $\rho_0 = 1$.

Assumption 4.7. Let $f$ be a real-valued function satisfying Assumption 1.2 and further of the form $f(r) = a_1(r^{\alpha_1} - 1) + g(r)$ with $a_1 > 0$ and $0 < \alpha_1 < 2$, and where $g \in C^0([0,\infty)) \cap C^1(0,\infty)$ is such that $g(1) = 0$ and $|g(\rho)|, |\rho g'(\rho)| \le C(1 + \rho^{\alpha_2})$ for all $\rho \ge 0$, with $0 \le \alpha_2 < \alpha_1$. In addition, $F(\rho) > 0$ for all $\rho > 1$.

Local well-posedness for (4.3) with $f$ satisfying Assumption 4.7 is provided by Theorem 1.4. We recall from Lemma that any $\psi \in E(\mathbb{R}^3)$ admits the decomposition $\psi = c + v \in E(\mathbb{R}^3)$ with $|c| = 1$ and $v \in F_c$. In view of (4.4), it suffices to consider $c = 1$. Following [43], for any $\psi = 1 + v \in E(\mathbb{R}^3)$, we define
\[
M(\psi) = H(\psi) + C_0 \int_{\mathbb{R}^3} |\operatorname{Re}(v)|^2\,dx,
\]
for a suitable $C_0 > 0$. The functional $M(\cdot)$ is well-defined. Further, $M(\psi)$ allows one to control $E(\psi)$.

Lemma 4.8.
Let $f$ satisfy Assumption 4.7 and $v \in F_1$. For all $C_0 > 0$, there exists $C_1 = C_1(E(1+v)) > 0$ such that
\[
M(1+v) \le C_1\big(E(1+v)\big).
\]
Furthermore, there exist $C_0, C_2 > 0$ such that
\[
E(1+v) \le C_2\,M(1+v).
\]

P. ANTONELLI, L.E. HIENTZSCH, AND P. MARCATI

Proof. To prove the first inequality, it suffices to observe that
\[
\|\operatorname{Re}(v)\|^2_{L^2(\mathbb{R}^3)} \le \big\||v|^2 + 2\operatorname{Re}(v)\big\|^2_{L^2(\mathbb{R}^3)} = \big\||1+v|^2 - 1\big\|^2_{L^2(\mathbb{R}^3)} \le 2E_{GL}(1+v),
\]
with $E_{GL}(1+v)$ defined in (1.6). The claim then follows by arguing as in the proof of Lemma 2.5. To show the second inequality, it suffices to prove that
\[
E(1+v) + C \int_{\mathbb{R}^3} F_-(|1+v|^2)\,dx \le C\left(\frac{1}{2}\|\nabla v\|^2_{L^2(\mathbb{R}^3)} + \int_{\mathbb{R}^3} F_+(|1+v|^2)\,dx + C_0\,\|\operatorname{Re}(v)\|^2_{L^2(\mathbb{R}^3)}\right).
\]
Let $\delta \in (0,1)$ be such that the expansion (2.16) of $F$ yields
\[
\big\|(|1+v| - 1)\,\mathbf{1}_{\{||1+v|^2-1|<\delta\}}\big\|^2_{L^2(\mathbb{R}^3)} \le C_l \int_{\mathbb{R}^3} F(|1+v|^2)\,\mathbf{1}_{\{||1+v|^2-1|<\delta\}}\,dx
\]
for some $C_l > 0$. On the other hand, there exists $C_h > 0$ such that
\[
\int_{\mathbb{R}^3} \big||1+v| - 1\big|^2\,\mathbf{1}_{\{|1+v|^2\ge 1+\delta\}}\,dx \le C \int_{\mathbb{R}^3} \big(|1+v|^2 - 1\big)\,\mathbf{1}_{\{|1+v|^2\ge 1+\delta\}}\,dx \le C_h \int_{\mathbb{R}^3} F(|1+v|^2)\,\mathbf{1}_{\{|1+v|^2\ge 1+\delta\}}\,dx
\]
by Assumption 4.7. Let $C := \max\{C_l, C_h\}$. Note that $\operatorname{supp}(F_-(|1+v|^2)) \subset \{|1+v|^2 < 1-\delta\}$ and, if $|1+v|^2 \le 1-\delta$, then necessarily $\operatorname{Re}(v) \in (-1-\sqrt{1-\delta}, -1+\sqrt{1-\delta})$. In particular, $\{|1+v|^2 < 1-\delta\} \subset \{|\operatorname{Re}(v)| > \eta\}$ with $\eta := 1 - \sqrt{1-\delta}$, from which we conclude
\[
\int_{\mathbb{R}^3} \big(||1+v| - 1|^2 + C\,F_-(|1+v|^2)\big)\,\mathbf{1}_{\{|1+v|^2\le 1-\delta\}}\,dx \le \frac{1+C}{\eta^2} \int_{\mathbb{R}^3} |\operatorname{Re}(v)|^2\,dx.
\]
Hence, there exists $C_0 > 0$ such that the claim follows. This completes the proof. $\square$

While $M(1+v)(t)$ is not conserved for solutions to (4.4), it enjoys an exponential bound in time.

Lemma 4.9. Let $f$ satisfy Assumption 4.7, $v_0 \in F_1$, and let $v \in C([0,T^*); F_1)$ be the unique maximal solution to (4.4) with initial data $v_0$. Then there exists $C > 0$ such that
\[
M(1+v)(t) \le e^{Ct}\,C_1\big(E(1+v_0)\big) \quad \text{for all } t \in [0,T^*),
\]
where $C_1 = C_1(E(1+v_0)) > 0$ is as in Lemma 4.8. In particular, there exists $C_3 = C_3(E(1+v_0)) > 0$ such that
\[
E(1+v)(t) \le e^{Ct}\,C_3\big(E(1+v_0)\big) \quad \text{for all } t \in [0,T^*).
\]

Proof. In a first step, let $v_0 \in F_1$, i.e. $1 + v_0 \in E(\mathbb{R}^3)$, be such that $\Delta v_0 \in L^2(\mathbb{R}^3)$; then $1+v \in C([0,T^*); E(\mathbb{R}^3))$ and $\Delta v \in C([0,T]; L^2(\mathbb{R}^3))$ for all $0 < T < T^*$ by virtue of Theorem 1.4. It follows that
\[
\frac{d}{dt}M(\psi)(t) = C_0\,\frac{d}{dt}\int_{\mathbb{R}^3} |\operatorname{Re}(v)|^2\,dx,
\]
where we exploited that $H(\psi)(t) = H(\psi_0)$ for all $t \in [0,T]$ from (4) of Theorem 1.4. Therefore,
\[
\frac{d}{dt}\int_{\mathbb{R}^3} |\operatorname{Re}(v)|^2\,dx = -2\int_{\mathbb{R}^3} \operatorname{Re}(v)\operatorname{Im}(\Delta v)\,dx + 2\int_{\mathbb{R}^3} f(|1+v|^2)\operatorname{Re}(v)\operatorname{Im}(1+v)\,dx \le \int_{\mathbb{R}^3} |\nabla v|^2\,dx + 2\int_{\mathbb{R}^3} f(|1+v|^2)\operatorname{Re}(v)\operatorname{Im}(v)\,dx,
\]
upon integrating by parts and Young's inequality. The second term is bounded as
\[
\begin{aligned}
2\int_{\mathbb{R}^3} f(|1+v|^2)\operatorname{Re}(v)\operatorname{Im}(1+v)\,dx
&= 2\int_{\mathbb{R}^3} f(|1+v|^2)\operatorname{Im}(v)\operatorname{Re}(v)\,\mathbf{1}_{\{|1+v|^2\le 1-\delta\}}\,dx \\
&\quad + 2\int_{\mathbb{R}^3} f(|1+v|^2)\operatorname{Im}(v)\operatorname{Re}(v)\,\mathbf{1}_{\{||1+v|^2-1|<\delta\}}\,dx \\
&\quad + 2\int_{\mathbb{R}^3} f(|1+v|^2)\operatorname{Im}(v)\operatorname{Re}(v)\,\mathbf{1}_{\{|1+v|^2\ge 1+\delta\}}\,dx \\
&=: I_1 + I_2 + I_3,
\end{aligned}
\]
with $\delta \in (0,1)$ to be chosen later.
We dispose of the terms separately and note that, if $|1+v|^2 = |v|^2 + 2\operatorname{Re}(v) + 1 < 1-\delta$, then necessarily $\operatorname{Re}(v) \in (-1-\sqrt{1-\delta}, -1+\sqrt{1-\delta})$. Hence, for $\eta = 1 - \sqrt{1-\delta}$ we obtain
\[
|I_1| \le \frac{C}{\eta^2} \int_{\mathbb{R}^3} |\operatorname{Re}(v)|^2\,dx.
\]
In order to bound $I_2$, we rely on the expansion (2.16), valid for all $\rho \in (1-\delta, 1+\delta)$. Upon using the local Lipschitz property of $f$ and $f(1) = 0$, one has
\[
|I_2| \le C \int_{\mathbb{R}^3} (|1+v|^2 - 1)^2\,\mathbf{1}_{\{||1+v|^2-1|<\delta\}}\,dx \le C \int_{\mathbb{R}^3} F(|1+v|^2)\,\mathbf{1}_{\{||1+v|^2-1|<\delta\}}\,dx.
\]
It remains to control $I_3$. In virtue of Assumption 4.7, it holds that $F(\rho) > 0$ for all $\rho > 1$ and there exist $C > 0$, $R_0 > 1$ such that $F(\rho) \ge C\rho^{1+\alpha_1}$ for all $\rho \ge R_0$. It follows that
\[
|I_3| \le \frac{C R_0^{1+\alpha_1}}{m} \int_{\mathbb{R}^3} F(|\psi|^2)\,\mathbf{1}_{\{1+\delta\le|\psi|^2\le R_0\}}\,dx + C \int_{\mathbb{R}^3} F(|\psi|^2)\,\mathbf{1}_{\{|\psi|^2\ge R_0\}}\,dx,
\]
where $m = \min_{\rho\in[1+\delta,R_0]} F(\rho) > 0$. We conclude that there exists $C > 0$ such that
\[
\frac{d}{dt}M(t) \le C\left(H(1+v)(t) + \int_{\mathbb{R}^3} F_-(|1+v|^2)\,dx\right) + \frac{C}{\eta^2}\,\|\operatorname{Re}(v)\|^2_{L^2}.
\]
Further, using that $\operatorname{supp}(F_-) \subset \{|1+v|^2 < 1-\delta\} \subset \{|\operatorname{Re}(v)| > \eta\}$, we infer
\[
\int_{\mathbb{R}^3} F_-(|1+v|^2)\,dx \le \frac{C}{\eta^2}\,\|\operatorname{Re}(v)\|^2_{L^2}.
\]
Finally, upon increasing $C_0$ if necessary, there exists $C > 0$ such that
\[
\frac{d}{dt}M(t) \le C\,M(t).
\]
In virtue of Lemma 4.8 and Gronwall's lemma, one has
\[
M(1+v)(t) \le e^{Ct}\,C_1\big(E(1+v_0)\big),
\]
where $C_1$ is as given in Lemma 4.8. The desired bound on $E(1+v)(t)$ then follows from Lemma 4.8. The statement follows for any initial data of finite energy by approximation, persistence of regularity, and the continuous dependence on the initial data provided by Lemma 2.10 and Theorem 1.4, respectively. $\square$

Global existence then follows from Lemma 4.9 and Theorem 1.4 by means of the blow-up alternative. In particular, this completes the proof of Theorem 1.6.

5. Lipschitz continuity of the solution map

In this section, we provide the proof of Theorem 1.7. Namely, we show that, provided $f$ satisfies (1.13) in addition to Assumption 1.1, the solution map is Lipschitz continuous on bounded sets of $E(\mathbb{R}^d)$.

Proof of Theorem 1.7. Let $R > 0$ and $\psi^1_0, \psi^2_0 \in E(\mathbb{R}^d)$ be such that $E(\psi^i_0) \le R$ for $i = 1, 2$. Then, for all $0 < T < T^*(O_R)$ there exists $M > 0$ such that the unique maximal solutions $\psi^1, \psi^2 \in C([0,T]; E(\mathbb{R}^d))$ satisfy
\[
Z_T(\psi^1) + Z_T(\psi^2) \le M,
\]
with $Z_T$ defined in (3.2).
By virtue of (2.14), it follows that
\[
\begin{aligned}
d_E\big(\psi^1(t), \psi^2(t)\big)
&\le C(1+M)\,d_E\big(e^{\frac{i}{2}t\Delta}\psi^1_0,\, e^{\frac{i}{2}t\Delta}\psi^2_0\big) + C(1+M)\left\|-i\int_0^t e^{\frac{i}{2}(t-s)\Delta}\big(N(\psi^1(s)) - N(\psi^2(s))\big)\,ds\right\|_{L^\infty([0,T];H^1(\mathbb{R}^3))} \\
&\le C(1+M)\,d_E(\psi^1_0, \psi^2_0) + C(1+M)\,\|N(\psi^1) - N(\psi^2)\|_{N^1([0,T]\times\mathbb{R}^d)},
\end{aligned}
\tag{5.1}
\]
where we used (2.22) to control the distance of the free solutions and the Strichartz estimate (2.25) to control the nonlinear flow. Lemma 3.4 and Lemma 4.4, for $d = 2, 3$ respectively, yield that
\[
\|N(\psi^1) - N(\psi^2)\|_{N^0([0,T]\times\mathbb{R}^d)} \le C(1 + M + M^{2\alpha})\,T^\theta \sup_{t\in[0,T]} d_E\big(\psi^1(t), \psi^2(t)\big).
\tag{5.2}
\]
It remains to control $\nabla N(\psi^1) - \nabla N(\psi^2)$ in $N^0([0,T]\times\mathbb{R}^d)$. To that end, we recall that $\nabla N(\psi^i)$ can be decomposed by means of the functions $G_\infty(\psi^i)$, $G_q(\psi^i)$ defined in (2.38). One has that
\[
\begin{aligned}
\|\nabla N(\psi^1) - \nabla N(\psi^2)\|_{N^0([0,T]\times\mathbb{R}^d)}
&\le \big\||G_\infty(\psi^1)|\,|\nabla\psi^1 - \nabla\psi^2|\big\|_{L^\infty([0,T];L^2(\mathbb{R}^d))} + \big\||G_q(\psi^1)|\,|\nabla\psi^1 - \nabla\psi^2|\big\|_{N^0([0,T]\times\mathbb{R}^d)} \\
&\quad + \big\||G_\infty(\psi^1) - G_\infty(\psi^2)|\,|\nabla\psi^2|\big\|_{N^0([0,T]\times\mathbb{R}^d)} + \big\||G_q(\psi^1) - G_q(\psi^2)|\,|\nabla\psi^2|\big\|_{N^0([0,T]\times\mathbb{R}^d)}.
\end{aligned}
\tag{5.3}
\]
Note that (2.39) yields $|G_\infty(\psi^1)| \le C$ and $|G_q(\psi^1)| \le C(1 + |\psi^1|^{2\alpha})$. Further, (1.13) yields that $G_\infty, G_q$ are locally Lipschitz, namely
\[
|G_\infty(\psi^1) - G_\infty(\psi^2)| \le C\,\big||\psi^1| - |\psi^2|\big|, \qquad
|G_q(\psi^1) - G_q(\psi^2)| \le C\big(1 + |\psi^1|^{2\beta} + |\psi^2|^{2\beta}\big)\,\big||\psi^1| - |\psi^2|\big|,
\tag{5.4}
\]
with $\beta = \max\{0, \alpha - \tfrac{1}{2}\}$. As $|\psi^i| \ge 1$ on the support of $G_q(\psi^i)$, we may assume in the following that $\beta \ge 1$. In the following, we distinguish two cases.

Case 1: $d = 2$. Let the admissible pair $(q_1, r_1) = \big(\tfrac{2(\alpha+1)}{\alpha},\, 2(\alpha+1)\big)$; see also (3.1). To bound the first line on the right-hand side of (5.3), we observe that
\[
\big\||G_\infty(\psi^1)|\,|\nabla\psi^1 - \nabla\psi^2|\big\|_{L^1([0,T];L^2(\mathbb{R}^2))} \le CT\,\|\nabla\psi^1 - \nabla\psi^2\|_{L^\infty([0,T];L^2(\mathbb{R}^2))},
\]
and
\[
\big\||G_q(\psi^1)|\,|\nabla\psi^1 - \nabla\psi^2|\big\|_{N^0([0,T]\times\mathbb{R}^2)} \le T^{\frac{1}{q_1'}}\,Z_T(\psi^1)^{2\alpha}\,\|\nabla\psi^1 - \nabla\psi^2\|_{L^\infty([0,T];L^2(\mathbb{R}^2))}.
\]
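As a consistency check, assuming the standard 2D Schrödinger admissibility condition $\tfrac{1}{q} + \tfrac{1}{r} = \tfrac{1}{2}$, the pair $(q_1, r_1)$ is indeed admissible:

```latex
\frac{1}{q_1} + \frac{1}{r_1}
  = \frac{\alpha}{2(\alpha+1)} + \frac{1}{2(\alpha+1)}
  = \frac{\alpha+1}{2(\alpha+1)}
  = \frac{1}{2}.
```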
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' To bound the first term of the second line on the right hand side of (5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='3), one has ∥|G∞(ψ1) − G∞(ψ2)||∇ψ2|∥N 0([0,T ]×R2) ≤ C ∥|ψ1| − |ψ2||∇ψ2|∥L 4 3 ([0,T ];' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='L 4 3 (R2)) ≤ T 1 2 ∥|ψ1| − |ψ2|∥L∞([0,T ];' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='L2(R2) ∥∇ψ2∥L4([0,T ];' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='L4(R2)) ≤ T 1 2 � 1 + T + T 1 q′ 1 ZT (ψ1)2α � ZT (ψ) ∥|ψ1| − |ψ2|∥L∞([0,T ];' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='L2(R2)) , where we used the Strichartz estimates (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='29), (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='25) and (3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='4) in the last inequality.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' To bound the second term of the line on the right hand side of (5.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='3), we have that ∥|Gq(ψ1) − Gq(ψ2)||∇ψ2|∥N 0([0,T ]×R2) ≤ C ��� 1 + |ψ1,q|2β + |ψ2,q|2β� ||ψ1| − |ψ2|| ∇ψ2| �� N 0([0,T ]×R2) ≤ � T 1 2 ∥∇ψ∥L4L4 + T 1 3 � ∥|ψ1,q|2β + |ψ1,q|2β∥L∞ t L6 x � ∥∇ψ∥L3 tL6x � ∥|ψ1| − |ψ2|∥L∞ t L2 x ≤ � T 1 2 + T 1 3 � ZT(ψ1)2β + ZT (ψ2)2β�� � 1 + T + T 1 q′ 1 ZT (ψ1)2α � ZT (ψ) ∥|ψ1| − |ψ2|∥L∞ t L2 x where we used the Strichartz estimates (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='29), (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='25) and (3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='4) in the last inequality.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Combining the above estimates, we obtain that there exists T1 = T1(M) > 0 sufficiently small so that dE (ψ1(t), ψ2(t)) ≤ C(1 + M)dE(ψ1 0, ψ2 0) for all t ∈ [0, T1].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Note that T1 only depends on M, one may hence iterate the procedure N := ⌈ T T1 ⌉ times to cover the time interval [0, T ].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' This completes the case d = 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Case 2: d = 3.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' The proof for d = 3 follows the same lines upon modifying the space-time norms so that the pairs of exponents are Strichartz admissible for d = 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' In particular, one relies on the endpoint Strichartz estimate (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='29) to bound ∇ψ2 ∈ L2([0, T ];' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' L6(R3)).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' □ If the solutions are global, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' T ∗(OR) = +∞, then Theorem 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='7 extends to the following.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Corollary 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Under the Assumptions of Theorem 1.' 
7, if in addition $f$ is such that (1.1) is globally well-posed, then for any $R > 0$, $T > 0$, there exists $C > 0$ such that for all $\psi_0^i \in E(\mathbb{R}^d)$, $i = 1, 2$, with $E(\psi^i) \le R$, the respective unique solutions $\psi^i \in C(\mathbb{R}, E(\mathbb{R}^d))$ satisfy (1.14).

Acknowledgements

L.E.H. acknowledges funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – SFB 1283/2 2021 – 317210226.

P. ANTONELLI, L.E. HIENTZSCH, AND P. MARCATI
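As a remark on the covering step in the proof above (a sketch spelling out the iteration, with $C$, $M$ and $T_1 = T_1(M)$ as obtained there): since $T_1$ depends only on $M$, the estimate on $[0, T_1]$ applies on each subinterval $[kT_1, (k+1)T_1]$, $k = 0, \dots, N-1$, with $N = \lceil T/T_1 \rceil$, so that chaining the bounds yields
\[
\sup_{t \in [0,T]} d_E(\psi_1(t), \psi_2(t)) \le \big( C(1+M) \big)^{\lceil T/T_1 \rceil} \, d_E(\psi_0^1, \psi_0^2),
\]
i.e. Lipschitz dependence on the initial data on all of $[0,T]$, with a constant depending only on $T$ and $M$.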
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' JETP, 34 (7) (1958), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' 858–861 (1240–1245 ˇZ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Eksper.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Teoret.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Fiz.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' ).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' [27] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Grant and P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Roberts, Motions in a Bose condensate.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' III.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' the structure and effective masses of charged and uncharged impurities, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Phys.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' A: Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Nucl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Gen.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=', 7 (1974), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' 260– 279.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' [28] P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Gravejat, A non-existence result for supersonic travelling waves in the Gross-Pitaevskii equation, Comm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=', 243 (2003), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' 93–103.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' [29] P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Gravejat, E.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Pacherie, and D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Smets, On the stability of the Ginzburg-Landau vortex, Proc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Lond.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Soc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' (3), 125 (2022), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' 1015–1065.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' [30] E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Gross, Hydrodynamics of a superfluid condensate, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=', 4 (1963), pp.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' 195– 207.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' [31] Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Guo, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Hani, and K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Nakanishi, Scattering for the 3D Gross-Pitaevskii equation, Comm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=', 359 (2018), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' 265–295.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' [32] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Gustafson, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Nakanishi, and T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content='-P.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Tsai, Scattering for the Gross-Pitaevskii equation, Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Res.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=', 13 (2006), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' 273–285.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' [33] , Global dispersive solutions for the Gross-Pitaevskii equation in two and three dimen- sions, Ann.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Henri Poincar´e, 8 (2007), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' 1303–1331.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' [34] , Scattering theory for the Gross-Pitaevskii equation in three dimensions, Commun.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Contemp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=', 11 (2009), pp.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' 657–707.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' [35] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Hientzsch, Nonlinear Schr¨odinger equations and quantum fluids non vanishing at infinity: incompressible limit and quantum vortices, PhD thesis, Gran Sasso Science Institute, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' [36] , On the low mach number limit for 2d Navier-Stokes-Korteweg systems, Mathematics in Engineering, 5 (2023), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' 1–26.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' [37] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' H¨ormander, The analysis of linear partial differential operators.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' I, Classics in Mathe- matics, Springer-Verlag, Berlin, 2003.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Distribution theory and Fourier analysis, Reprint of the second (1990) edition [Springer, Berlin;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' MR1065993 (91m:35001a)].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' [38] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Kato, On nonlinear Schr¨odinger equations, Ann.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Inst.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Poincar´e Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Th´eor.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=', 46 (1987), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' 113–129.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' [39] , Nonlinear Schr¨odinger equations, in Schr¨odinger operators (Sønderborg, 1988), vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' 345 of Lecture Notes in Phys.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=', Springer, Berlin, 1989, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' 218–263.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' [40] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Keel and T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Tao, Endpoint Strichartz estimates, Amer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=', 120 (1998), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' 955– 980.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' [41] R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Killip, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Murphy, and M.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Visan, The final-state problem for the cubic-quintic NLS with nonvanishing boundary conditions, Anal.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' PDE, 9 (2016), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' 1523–1574.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' [42] , The initial-value problem for the cubic-quintic NLS with nonvanishing boundary con- ditions, SIAM J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Anal.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=', 50 (2018), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' 2681–2739.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' [43] R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Killip, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Oh, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Pocovnicu, and M.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Vis¸an, Global well-posedness of the Gross- Pitaevskii and cubic-quintic nonlinear Schr¨odinger equations with non-vanishing boundary conditions, Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Res.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=', 19 (2012), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' 969–986.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' [44] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Kivshar, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Anderson, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Lisak, Modulational instabilities and dark solitons in a generalized nonlinear schr¨odinger equation, Physica Scripta, 47 (1993), p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' 679.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' [45] Y.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Kivshar and B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Luther-Davies, Dark optical solitons: physics and applications, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Rep.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=', 298 (1998), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' 81–97.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' [46] R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Klein, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Majda, and K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Damodaran, Simplified equations for the interaction of nearly parallel vortex filaments, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Fluid Mech.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=', 288 (1995), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' 201–248.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' [47] H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Koch and X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Liao, Conserved energies for the one dimensional Gross-Pitaevskii equation, Adv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=', 377 (2021), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Paper No.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' 107467, 83.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' [48] E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Kuznetsov and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Rasmussen, Instability of two-dimensional solitons and vortices in defocusing media, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' E, 51 (1995), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' 4479–4484.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' [49] Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Lin, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Wang, and C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Zeng, Stability of traveling waves of nonlinear Schr¨odinger equation with nonzero condition at infinity, Arch.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Ration.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Mech.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=' Anal.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQf2fk_/content/2301.00751v1.pdf'} +page_content=', 222 (2016), pp.' 
diff --git a/6NE1T4oBgHgl3EQfTQM9/vector_store/index.faiss b/6NE1T4oBgHgl3EQfTQM9/vector_store/index.faiss new file mode 100644 index 0000000000000000000000000000000000000000..fd5928370a2326d028f29503c76d479dc96cd5ca --- /dev/null +++ b/6NE1T4oBgHgl3EQfTQM9/vector_store/index.faiss @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b06ba99ee61d13cb563f0b926f6cd6ddef3381c3d8d7fbc56f8ccf3ab88639b2 +size 3604525 diff --git a/89AzT4oBgHgl3EQfgvxw/content/tmp_files/2301.01473v1.pdf.txt b/89AzT4oBgHgl3EQfgvxw/content/tmp_files/2301.01473v1.pdf.txt new file mode 100644 index 0000000000000000000000000000000000000000..5cc9f9b038fa3c8d948cbeda829d925f6c1905b7 --- /dev/null +++ b/89AzT4oBgHgl3EQfgvxw/content/tmp_files/2301.01473v1.pdf.txt @@ -0,0 +1,1986 @@ +arXiv:2301.01473v1 [math.CO] 4 Jan 2023 +State Transfer in Complex Quantum Walks +Antonio Acuaviva1, Ada Chan2, Summer Eldridge3, Chris Godsil4, Matthew How-Chun-Lun5,
Christino Tamon6, Emily Wright7, and Xiaohong Zhang8 +1Department of Mathematics, Universidad Complutense de Madrid +2Department of Mathematics and Statistics, York University +3Department of Mathematics, University of Toronto +4Department of Combinatorics and Optimization, University of Waterloo +5Department of Mathematics, McMaster University +6Department of Computer Science, Clarkson University +7Department of Mathematics, Queen’s University +8Centre de recherches mathématiques, Université de Montréal +January 5, 2023 +Abstract +Given a graph with Hermitian adjacency matrix 퐻, perfect state transfer occurs from vertex 푎 to vertex +푏 if the (푏, 푎)-entry of the unitary matrix exp(−푖퐻푡) has unit magnitude for some time 푡. This phenomenon +is relevant for information transmission in quantum spin networks and is known to be monogamous under +real symmetric matrices. We prove the following results: +• For oriented graphs (whose nonzero weights are ±푖), the oriented 3-cycle and the oriented edge +are the only graphs where perfect state transfer occurs between every pair of vertices. This settles +a conjecture of Cameron et al. [1]. On the other hand, we construct an infinite family of oriented +graphs with perfect state transfer between any pair of vertices on a subset of size four. +• There are infinite families of Hermitian graphs with one-way perfect state transfer, where perfect +state transfer occurs without periodicity. In contrast, perfect state transfer implies periodicity when- +ever the adjacency matrix has algebraic entries (see Godsil [2]). +• There are infinite families with non-monogamous pretty good state transfer in rooted graph prod- +ucts. In particular, we generalize known results on double stars (due to Fan and Godsil [3]) and +on paths with loops (due to Kempton, Lippner and Yau [4]). The latter extends the experimental +observation of quantum transport (made by Zimborás et al.
[5]) and shows non-monogamous pretty +good state transfer can occur amongst distant vertices. +1 +Introduction +Given a graph 푋 = (푉 , 퐸) with adjacency matrix 퐴, a continuous-time quantum walk on 푋 is defined by the +time-dependent unitary matrix 푈(푡) = 푒−푖퐴푡. This natural quantum generalization of continuous-time ran- +dom walks is important for designing quantum algorithms. Childs et al. [6] showed that a continuous-time +quantum walk algorithm provides an exponential time speedup for an explicit search problem on graphs. +Subsequently, Childs [7] showed that continuous-time quantum walk is a universal model of quantum com- +putation. +Our focus in this paper is motivated by Bose [8] who studied quantum communication via continuous- +time quantum walk on graphs. We say that there is pretty good state transfer in a graph 푋 from vertex 푎 to +vertex 푏 if for any 휖 > 0, there is a time 푡 so that ‖푈(푡)푒푎 − 훾푒푏‖ ≤ 휖 where 훾 is a phase factor. Here, 푒푎 +denotes the unit vector with 1 at position 푎 and 0 elsewhere; similarly for 푒푏. If 휖 = 0 is achievable, we say +there is perfect state transfer in 푋 from 푎 to 푏 at time 푡. +Kay [9] proved a monogamy property for perfect state transfer on graphs with real symmetric adjacency +matrices: if there is perfect state transfer from 푎 to 푏 and from 푎 to 푐 then 푏 = 푐. In contrast, Cameron +et al. [1] showed that there are oriented graphs (whose adjacency matrices are Hermitian with ±푖 nonzero +entries) where state transfer occurs between every pair of vertices. This latter property is called universal +state transfer. Their primary examples are oriented cycles of prime order with universal pretty good state +transfer. A notable exception is the oriented 3-cycle which exhibits universal perfect state transfer. +It was conjectured in [1] that the oriented 퐾2 and 3-cycle are the only oriented graphs with universal perfect +state transfer. We prove their conjecture in this work.
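The universal perfect state transfer on the oriented 3-cycle can be checked directly by simulation. A minimal sketch using NumPy and SciPy (the arc orientation chosen below is one arbitrary choice; any orientation of the 3-cycle behaves the same way):

```python
import numpy as np
from scipy.linalg import expm

# Hermitian matrix of an oriented 3-cycle: H[a, b] = i when there is an
# arc from a to b (arcs 0->1, 1->2, 2->0 here).
H = np.array([[0, 1j, -1j],
              [-1j, 0, 1j],
              [1j, -1j, 0]])

# Scan a time grid: for every ordered pair (a, b) with a != b, some time
# should give |U(t)[b, a]| close to 1 (universal perfect state transfer).
best = {(a, b): 0.0 for a in range(3) for b in range(3) if a != b}
for t in np.linspace(0, 2 * np.pi, 2001):
    U = expm(-1j * t * H)  # transition matrix U(t) = exp(-iHt)
    for (a, b) in best:
        best[(a, b)] = max(best[(a, b)], abs(U[b, a]))

universal = all(v > 0.999 for v in best.values())
print(universal)  # expect: True
```

The transfers occur at multiples of 2π/(3√3); at three times that value the walk returns to the identity.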
This confirms that universal perfect state transfer is +an extremely rare phenomenon in oriented graphs. On the other hand, there are known infinite families of +graphs with universal perfect state transfer but with adjacency matrices that are Hermitian matrices with no +restriction on the entries (see Connelly et al. [10]). We call these Hermitian graphs. +Godsil and Lato [11] proved a strong characterization of perfect state transfer in oriented graphs and +observed that perfect state transfer always implies periodicity (by the Gelfond-Schneider theorem). In fact, +Godsil [2] had observed the latter property holds for any adjacency matrix with algebraic entries. Our next +observation shows that the latter assumption is necessary to guarantee periodicity. We construct the first +infinite family of Hermitian graphs with one-way perfect state transfer, where perfect state transfer occurs +without periodicity. These examples also exhibit a one-time perfect state transfer property, where perfect state +transfer occurs at a single unique time and never recurs. +Godsil and Lato [11] also introduced a relaxation of universal perfect state transfer called multiple perfect +state transfer. We say a graph 푋 has multiple state transfer on a subset 푆 ⊂ 푉 (푋) of vertices, with |푆| ≥ 3, +if state transfer occurs between every pair of vertices of 푆. An explicit example of an 8-vertex circulant with +multiple perfect state transfer was given in [11], but it was not clear if there are more examples sharing the +same properties. We construct the first infinite family of oriented graphs with multiple perfect state transfer +(which contains the aforementioned 8-vertex circulant as a special case). This shows that, unlike universal +perfect state transfer, multiple perfect state transfer is not an extremely rare phenomenon. +It is known that perfect state transfer is closed under the Cartesian graph product.
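The Cartesian-product closure is easy to see in the smallest case: the two Kronecker summands of the product adjacency matrix commute, so the transition matrix factors. A sketch (NumPy/SciPy; 퐾2 and its Cartesian square, the 4-cycle, serve as the assumed example):

```python
import numpy as np
from scipy.linalg import expm

# K2 has perfect state transfer 0 -> 1 at t = pi/2, since
# exp(-iAt) = cos(t) I - i sin(t) A for an involution A.
A = np.array([[0.0, 1.0], [1.0, 0.0]])

# Cartesian product K2 x K2 (the 4-cycle): adjacency A (x) I + I (x) A.
A_cart = np.kron(A, np.eye(2)) + np.kron(np.eye(2), A)

t = np.pi / 2
U = expm(-1j * t * A_cart)

# The summands commute, so U(t) = exp(-itA) (x) exp(-itA) ...
factorizes = np.allclose(U, np.kron(expm(-1j * t * A), expm(-1j * t * A)))

# ... hence perfect state transfer (0,0) -> (1,1), i.e. index 0 -> index 3.
mag = abs(U[3, 0])
print(factorizes, round(mag, 6))  # expect: True 1.0
```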
In this work, under mild +assumptions, we show that multiple state transfer is closed under the rooted graph product (see Godsil and +McKay [12]). First, we prove a complete characterization of pretty good state transfer on the rooted product +of the oriented 3-cycle with stars 퐾1,푚. This generalizes a result of Fan and Godsil [3] on the double stars. +Next, we consider rooted product with single-looped paths instead of stars. Let 푋 be an 푛-vertex circulant +with universal perfect state transfer and let 푃 훾 +푚 be an 푚-vertex path with a self-loop of weight 훾 at one of its +endpoints. We prove that the rooted product 푋◦푃 훾 +푚 has multiple pretty good state transfer between every +pair of vertices with self-loops provided 훾 is transcendental. This generalizes a result of Kempton, Lippner +and Yau [4] and shows the power of loops to facilitate multiple state transfer among distant vertices. In the +special case when 푋 is the oriented 3-cycle, our result strengthens the experimental observations in Zimborás +et al. [5] (with the help of self-loops). +2 +Preliminary +Given a graph 푋 and an associated Hermitian matrix 퐻, the transition matrix of its continuous-time quantum +walk is +푈(푡) = 푒−i푡퐻. +We call 푋 a Hermitian graph if we do not assume any additional condition on the entries of 퐻. For the +special case where 푋 is an oriented graph, we use the Hermitian matrix 퐻 defined as +퐻푎,푏 = +⎧ +⎪ +⎨ +⎪⎩ +i +if there is an arc from 푎 to 푏 in 푋, +−i +if there is an arc from 푏 to 푎 in 푋, and +0 +if there is no arc between 푎 and 푏 in 푋. +Let 휃1, … , 휃푑 be the distinct eigenvalues of 퐻. For 푟 = 1, … , 푑, let 퐸푟 denote the orthogonal projection +matrix onto the 휃푟-eigenspace of 퐻. Then 퐸푟퐸푠 = 훿푟,푠퐸푟 and ∑ +푟 퐸푟 = 퐼. The spectral decomposition +퐻 = ∑ +푟 휃푟퐸푟 gives +푈(푡) = +푑 +∑ +푟=1 +푒−i푡휃푟퐸푟. +Given a unit vector 푣 ∈ ℂ푛, the system with initial state 푣 evolves to 푈(푡)푣 = ∑ +푟 푒−푖푡휃푟퐸푟푣 at time 푡. Therefore +the pair (휃푟, 퐸푟) with 퐸푟푣 = 0 does not influence the state.
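The spectral decomposition translates directly into code. A sketch (NumPy/SciPy, using the oriented 3-cycle as a stand-in example) that assembles the projections 퐸푟 from eigenvectors and checks 푈(푡) = ∑푟 푒−i푡휃푟퐸푟 against the matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

# Hermitian matrix of an oriented 3-cycle (one fixed orientation).
H = np.array([[0, 1j, -1j],
              [-1j, 0, 1j],
              [1j, -1j, 0]])

evals, vecs = np.linalg.eigh(H)

# Group eigenvectors by (numerically) distinct eigenvalues and form the
# orthogonal projection E_r onto each eigenspace.
thetas, projections = [], []
for theta in np.unique(np.round(evals, 9)):
    V = vecs[:, np.isclose(evals, theta)]
    thetas.append(theta)
    projections.append(V @ V.conj().T)

# The E_r resolve the identity, and the spectral form of U(t) matches expm.
t = 0.7
U_spec = sum(np.exp(-1j * t * th) * E for th, E in zip(thetas, projections))
ok_resolution = np.allclose(sum(projections), np.eye(3))
ok_walk = np.allclose(U_spec, expm(-1j * t * H))

# Eigenvalue support of vertex 0: all theta_r with E_r e_0 != 0.
e0 = np.eye(3)[:, 0]
support = [th for th, E in zip(thetas, projections)
           if np.linalg.norm(E @ e0) > 1e-9]
print(ok_resolution, ok_walk, len(support))  # expect: True True 3
```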
We define the eigenvalue support of the vector +푣 to be Φ푣 = {휃푟 ∶ 퐸푟푣 ≠ 0}. In the case 푣 = 푒푎 for some vertex 푎, we also call Φ푒푎 (Φ푎 for short) the +eigenvalue support of 푎. +Perfect state transfer from vertex 푎 to vertex 푏 occurs at time 휏 if +푈(휏)푒푎 = 훼푒푏, +(1) +for some phase factor 훼. If 푎 = 푏 then we say the quantum walk is periodic at 푎. +Multiplying both sides of Equation (1) by 퐸푟 gives +푒−i휏휃푟퐸푟푒푎 = 훼퐸푟푒푏. +(2) +Hence, for 푟 = 1, … , 푑, there exists 푞푟(푎, 푏) ∈ [0, 2휋) such that +퐸푟푒푎 = 푒i푞푟(푎,푏)퐸푟푒푏. +(3) +We say the vertices 푎 and 푏 are strongly cospectral when this condition is satisfied, and call 푞푟(푎, 푏) the quarrel +from 푎 to 푏 relative to the eigenvalue 휃푟. Note that strongly cospectral vertices have the same eigenvalue +support. +We study perfect state transfer in oriented graphs and in Hermitian graphs in Sections 3 and 4. We give +here a characterization of perfect state transfer in Hermitian graphs. +Theorem 2.1. Perfect state transfer occurs from 푎 to 푏 in a Hermitian graph 푋 if and only if +i. 푎 and 푏 are strongly cospectral vertices with quarrels 푞푟(푎, 푏), for 휃푟 ∈ Φ푎, and +ii. for 휃푟, 휃푠, 휃ℎ, 휃퓁 ∈ Φ푎 such that ℎ ≠ 퓁, there exist integers 푚푟,푠 and 푚ℎ,퓁 satisfying +휃푟 − 휃푠 +휃ℎ − 휃퓁 += +푞푟(푎, 푏) − 푞푠(푎, 푏) + 2푚푟,푠휋 +푞ℎ(푎, 푏) − 푞퓁(푎, 푏) + 2푚ℎ,퓁휋 . +Proof. From Equation (3), we see that perfect state transfer from 푎 to 푏 implies they are strongly cospectral. +Suppose 푎 and 푏 are strongly cospectral with quarrels 푞푟(푎, 푏), for 휃푟 ∈ Φ푎(= Φ푏). Then Equation (1) +holds if and only if for 휃푟, 휃푠 ∈ Φ푎, +훼 = 푒i(푞푟(푎,푏)−휏휃푟) = 푒i(푞푠(푎,푏)−휏휃푠). +(4) +This is equivalent to +푒i휏(휃푟−휃푠) = 푒i(푞푟(푎,푏)−푞푠(푎,푏)) +and +휏 +( +휃푟 − 휃푠 +) += 푞푟(푎, 푏) − 푞푠(푎, 푏) + 2푚푟,푠휋, +for some integer 푚푟,푠. Condition (ii) follows immediately. +We say the ratio condition on Φ푎 holds if +휃푟 − 휃푠 +휃ℎ − 휃퓁 +∈ Q +(5) +for 휃푟, 휃푠, 휃ℎ, 휃퓁 ∈ Φ푎 such that ℎ ≠ 퓁. +Theorem 2.2. In a Hermitian graph 푋, 푎 is periodic if and only if the ratio condition on Φ푎 holds. +Proof.
Note that 푞푟(푎, 푎) = 0 for 휃푟 ∈ Φ푎. The result follows immediately from Theorem 2.1. +In Section 5, we consider a relaxation of perfect state transfer. A graph has pretty good state transfer +from 푎 to 푏 if, for any 휀 > 0, there is a time 휏 satisfying +|푈(휏)푎,푏| ≥ 1 − 휀. +(6) +Using the proof of Lemma 13.1 in [13], we conclude that if there is pretty good state transfer from 푎 to 푏 +then 푎 and 푏 are strongly cospectral. From +푈(푡)푎,푏 = +푑 +∑ +푟=1 +푒−i푡휃푟푒푇 +푎 퐸푟푒푏 = +푑 +∑ +푟=1 +푒i(푞푟(푎,푏)−푡휃푟)(퐸푟)푏,푏, +we see that there is pretty good state transfer from 푎 to 푏 if and only if for any 휖 > 0, there exists 휏 > 0 and +훿휖 ∈ R such that +|휏휃푟 − 푞푟(푎, 푏) − 훿휖| < 휖 +(mod 2휋), +for 휃푟 ∈ Φ푎. +Theorem 2.3. (Kronecker [14]) Let 휃1, … , 휃푑 and 푞1, … , 푞푑 be arbitrary real numbers. For any 휖 > 0, the +system of inequalities +|휃푟휏 − 푞푟| < 휖 +(mod 2휋), +푟 = 1, … , 푑 +admits a solution for 휏 if and only if, for all sets of integers 푙1, … , 푙푑, +푙1휃1 + … + 푙푑휃푑 = 0 +implies +푙1푞1 + … + 푙푑푞푑 = 0 +(mod 2휋). +Theorem 2.4. Let 푋 be a Hermitian graph with eigenvalues 휃1, … , 휃푑 ∈ Φ푎. Then 푋 has pretty good state +transfer from 푎 to 푏 if and only if the following conditions hold. +i. The vertices 푎 and 푏 are strongly cospectral with quarrels 푞푟(푎, 푏), for 푟 = 1, … , 푑. +ii. There exists 훿 ∈ R such that, for all integers 푙1, … , 푙푑 satisfying ∑푑 +푟=1 푙푟휃푟 = 0, we have +푑 +∑ +푟=1 +푙푟 +( +푞푟(푎, 푏) + 훿 +) += 0 +(mod 2휋). +(7) +Proof. The result follows from Proposition 4.01 of [15] and Theorem 2.3. +Let 푆 be a set of vertices in 푋. We say multiple pretty good state transfer occurs on 푆 if there is pretty +good state transfer between any two vertices in 푆. Section 5 gives two families of Hermitian graphs that have +multiple pretty good state transfer. +3 +Perfect state transfer in oriented graphs +For graphs with real symmetric adjacency matrices, Kay shows that perfect state transfer cannot happen from +one vertex to two distinct vertices [9].
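The strong cospectrality condition of Equation (3), and the quarrels it defines, are easy to compute numerically. A sketch for the oriented 3-cycle (one fixed orientation; here the eigenvalues are simple, so each 퐸푟 is the rank-one projection 푣푟푣푟∗):

```python
import numpy as np

H = np.array([[0, 1j, -1j],
              [-1j, 0, 1j],
              [1j, -1j, 0]])
evals, vecs = np.linalg.eigh(H)   # simple eigenvalues: E_r = v_r v_r*

def quarrels(a, b):
    """Return phases q_r(a,b) with E_r e_a = exp(i q_r(a,b)) E_r e_b,
    after checking that each pair of projected vectors is parallel."""
    qs = []
    for r in range(len(evals)):
        v = vecs[:, r]
        Ea = v * np.conj(v[a])            # E_r e_a = (v* e_a) v
        Eb = v * np.conj(v[b])            # E_r e_b
        k = np.argmax(np.abs(Eb))         # a coordinate where E_r e_b != 0
        ratio = Ea[k] / Eb[k]
        assert np.isclose(abs(ratio), 1.0)    # unimodular phase
        assert np.allclose(Ea, ratio * Eb)    # strongly cospectral pair
        qs.append(np.angle(ratio))
    return qs

q01 = quarrels(0, 1)
print(len(q01))  # expect: 3
```

Every pair of vertices of the 3-cycle passes this check, which is the algebraic starting point for its non-monogamous transfer behaviour.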
This monogamous behaviour does not hold in Hermitian graphs with +non-real entries. A graph has multiple perfect state transfer on a set 푆 of at least three vertices if there is +perfect state transfer between any two vertices in 푆. When 푆 = 푉 (푋), we say 푋 has universal perfect +state transfer. Lemma 22 of [10] gives a construction of Hermitian circulants that admit universal perfect +state transfer. The oriented 3-cycle is a special case of this construction. In the same paper, Cameron et +al. conjecture that the oriented 퐾2 and the oriented 퐾3 are the only oriented graphs that can have universal +perfect state transfer. We confirm this conjecture in Section 3.1. +In [11], Godsil and Lato investigated multiple perfect state transfer in oriented graphs, where 푆 is a proper +subset of 푉 (푋). They give an example of an oriented graph on eight vertices that admits multiple perfect +state transfer on a set of four vertices. In Section 3.2, we extend their example to an infinite family of oriented +graphs that have multiple perfect state transfer. +3.1 +Universal perfect state transfer +In [1], Cameron et al. show that the oriented 퐾2 and 퐾3 with any orientation admit universal perfect state +transfer. They give the following necessary conditions on the Hermitian graphs admitting universal perfect +state transfer. +Theorem 3.1. Let 퐻 be the matrix associated with a Hermitian graph 푋 that admits universal perfect state +transfer. Then the following hold: +1. All eigenvalues of 퐻 are simple. +2. If 푃 is a unitary matrix diagonalizing 퐻 then |푃푎,푏| = +1 +√ +푛, for 푎, 푏 ∈ 푉 (푋). +3. Every vertex in 푋 is periodic. +Suppose 푋 is an oriented graph on 푛 vertices that has universal perfect state transfer. Let 퐻 be its +associated Hermitian matrix with spectral decomposition +퐻 = +푛 +∑ +푟=1 +휃푟퐸푟. +Then 퐸푟 has rank one with constant diagonal entries 푛−1. We see that 퐻2 has constant diagonal entries and +the underlying (undirected) graph of 푋 is regular.
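All three conditions of Theorem 3.1 can be verified numerically on the oriented 3-cycle, which does admit universal perfect state transfer. A sketch (NumPy/SciPy; the periodicity time 2휋∕√3 follows from the spectrum 0, ±√3):

```python
import numpy as np
from scipy.linalg import expm

H = np.array([[0, 1j, -1j],
              [-1j, 0, 1j],
              [1j, -1j, 0]])
n = H.shape[0]
evals, P = np.linalg.eigh(H)          # columns of P diagonalize H

# 1. All eigenvalues are simple (spectrum 0, +/- sqrt(3)).
simple = bool(np.all(np.abs(np.diff(np.sort(evals))) > 1e-9))

# 2. Flat eigenvectors: |P[a, b]| = 1/sqrt(n) for all a, b.
flat = np.allclose(np.abs(P), 1 / np.sqrt(n))

# 3. Every vertex is periodic: U(T) = I at T = 2*pi/sqrt(3), because
#    exp(-iT*theta) = 1 for each of theta = 0, +/- sqrt(3).
T = 2 * np.pi / np.sqrt(3)
periodic = np.allclose(expm(-1j * T * H), np.eye(n))

print(simple, flat, periodic)  # expect: True True True
```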
Further, it follows from Theorem 6.1 of [11] that there +exists a positive square-free integer Δ such that 휃푟 ∈ Z +√ +Δ, for 푟 = 1, … , 푛. Hence +min +푟≠푠 |휃푟 − 휃푠| ≥ +√ +Δ. +(8) +We show in the following lemmas that an oriented graph with universal perfect state transfer can have at +most eleven vertices. +Lemma 3.2. Let 퐻 be a Hermitian matrix of order 푛 with zero diagonal entries. Let 휃1 ≤ 휃2 ≤ ⋯ ≤ 휃푛 be +the eigenvalues of 퐻. Then +푛 +∑ +푟,푠=1 +( +휃푟 − 휃푠 +)2 = 2푛 Tr(퐻2). +Proof. Observe that 휃푟 − 휃푠 is an eigenvalue of (퐻 ⊗ 퐼푛 − 퐼푛 ⊗ 퐻), for 푟, 푠 = 1, … , 푛. Hence +푛 +∑ +푟,푠=1 +( +휃푟 − 휃푠 +)2 = Tr +( +퐻 ⊗ 퐼푛 − 퐼푛 ⊗ 퐻 +)2 = Tr +( +퐻2 ⊗ 퐼푛 + 퐼푛 ⊗ 퐻2 − 2퐻 ⊗ 퐻 +) +. +The result follows from Tr(퐻 ⊗ 퐻) = 0. +Lemma 3.3. Let 푋 be an oriented graph on 푛 vertices and 푚 edges with eigenvalues 휃1 < ⋯ < 휃푛. Let +휎 = min푟≠푠 |휃푟 − 휃푠|. Then +휎2 푛(푛2 − 1) +24 +≤ 푚 +and +휎2 ≤ +12 +푛 + 1. +Proof. It follows from the definition of 휎 that 휎|푟 − 푠| ≤ |휃푟 − 휃푠|, and +휎2 +푛 +∑ +푟,푠=1 +(푟 − 푠)2 ≤ +푛 +∑ +푟,푠=1 +(휃푟 − 휃푠 +)2 . +The lower bound is +휎2 +푛 +∑ +푟,푠=1 +(푟 − 푠)2 = 휎2 +⎛ +⎜ +⎜⎝ +2푛 +푛 +∑ +푟=1 +푟2 − 2 +( 푛 +∑ +푟=1 +푟 +)2⎞ +⎟ +⎟⎠ += 휎2푛2(푛2 − 1) +6 +. +Applying Lemma 3.2 gives +휎2 푛2(푛2 − 1) +6 +≤ 2푛 Tr(퐻2) = 4푚푛. +The second inequality in the lemma follows immediately from 푚 ≤ +(푛 +2 +) +. +Corollary 3.4. Let 푋 be an oriented graph on 푛 vertices. If 푋 admits universal perfect state transfer then +푛 ≤ 11. Further, if 푛 ≥ 6 then 푋 has integral eigenvalues. +Proof. It follows from Equation (8) that 휎2 ≥ Δ ≥ 1. The second inequality of Lemma 3.3 gives 푛 ≤ 11. +When 푛 ≥ 6, we have 휎2 < 2, which implies Δ = 1 and the eigenvalues of 푋 are integers. +We are ready to rule out universal perfect state transfer in oriented graphs on more than three vertices. +Theorem 3.5. The oriented 퐾2 and 퐾3 are the only oriented graphs admitting universal perfect state transfer. +Proof. Suppose 푋 is an oriented graph on 푛 vertices that admits universal perfect state transfer.
Then the +underlying graph of 푋 is 푘-regular, for some integer 푘. +Let 휃1 < ⋯ < 휃푛 be the eigenvalues of the Hermitian matrix 퐻 associated with 푋. Then 휃푟 ∈ Z +√ +Δ, +for some positive square-free integer Δ. Since i퐻 is a skew-symmetric matrix with entries ±1, we have +휃푟 = −휃푛+1−푟 +for 푟 = 1, … , 푛. +(9) +Further, the characteristic polynomial of i퐻 is equal to the characteristic polynomial of its underlying graph +over Z2. +When 푛 = 4 or 5, 퐶푛 and 퐾푛 are the only regular graphs on 푛 vertices. An exhaustive search rules out +oriented graphs on 4 or 5 vertices with spectrum satisfying the above conditions. +For 푛 ≥ 6, it follows from Lemma 3.3 and Corollary 3.4 that 휎 = min푟≠푠 |휃푟 − 휃푠| = 1 and +푛2 − 1 +12 +≤ 푘 ≤ 푛 − 1. +Using this inequality together with the fact that 푘 is even when 푛 is odd, we narrow down to the following +possibilities: +푛 = 6, 푘 ∈ {3, 4, 5}; 푛 = 7, 푘 ∈ {4, 6}; 푛 = 8, 푘 ∈ {6, 7}; 푛 = 9, 푘 = 8; 푛 = 10, 푘 = 9; 푛 = 11, 푘 = 10. +Applying Equation (9) to Tr(퐻2) yields +푛푘 = 2 +⌊ 푛+1 +2 ⌋ +∑ +푟=1 +휃2 +푟 . +Direct computation returns integral solutions to this equation for only three cases: +푛 = 11, 푘 = 10, underlying graph 퐾11, possible spectrum of i퐻: 0, ±i, ±2i, ±3i, ±4i, ±5i; +푛 = 7, 푘 = 6, underlying graph 퐾7, possible spectrum of i퐻: 0, ±i, ±2i, ±4i; +푛 = 7, 푘 = 4, underlying graph 퐶7, possible spectrum of i퐻: 0, ±i, ±2i, ±3i. +It is straightforward to check that for each case, the characteristic polynomial of the underlying graph is not +equal to the polynomial with the roots listed above over Z2. +We conclude that there is no oriented graph on 푛 ≥ 4 vertices admitting universal perfect state transfer. +3.2 +Multiple perfect state transfer +In [11], Godsil and Lato relax the notion of universal perfect state transfer to multiple perfect state transfer +on a subset of vertices in oriented graphs. Let +퐻⃖⃗퐶4 = +⎡ +⎢ +⎢ +⎢⎣ +0 +−i +0 +i +i +0 +−i +0 +0 +i +0 +−i +−i +0 +i +0 +⎤ +⎥ +⎥ +⎥⎦ +be the Hermitian matrix of the directed 4-cycle.
They show that the oriented graph with Hermitian matrix +[1 +0 +0 +1 +] +⊗ 퐻⃖⃗퐶4 + +[ 0 +i +−i +0 +] +⊗ 퐽4 +has multiple perfect state transfer on a set of four vertices. +Making use of the following technical lemma from [16], we extend the above example to an infinite family of +oriented graphs where multiple perfect state transfer occurs. +Lemma 3.6. Let 퐴 and 퐵 be Hermitian matrices where 퐴 has spectral decomposition 퐴 = ∑ +푟 휃푟퐸푟. Then +푒−푖푡(퐴⊗퐵) = +∑ +푟 +퐸푟 ⊗ 푒−푖푡휃푟퐵. +Lemma 3.7. Suppose 푋 is an oriented graph on 푛 vertices with associated Hermitian matrix 퐻푋, whose +eigenvalues are odd integers. Let 푌 be the oriented graph with Hermitian matrix +퐻푌 = 퐼푛 ⊗ 퐻⃖⃗퐶4 + 퐻푋 ⊗ 퐽4. +Then 푌 admits multiple perfect state transfer on the set {4ℎ+1, 4ℎ+2, 4ℎ+3, 4ℎ+4}, for ℎ = 0, 1, … , 푛−1. +Proof. Let 퐻푋 = ∑ +푟 휃푟퐸푟 be the spectral decomposition of 퐻푋. Since 퐼푛 ⊗ 퐻⃖⃗퐶4 and 퐻푋 ⊗ 퐽4 commute, +applying Lemma 3.6 gives +푒−i푡퐻푌 = +( +퐼푛 ⊗ 푒 +−i푡퐻⃗퐶4 +) ( +∑ +푟 +퐸푟 ⊗ 푒−i푡휃푟퐽4 +) += +∑ +푟 +퐸푟 ⊗ 푒 +−i푡 +( +퐻⃗퐶4 ++휃푟퐽4 +) +. +For odd integer 휃푟, we have +푒 +−i 휋 +4 +( +퐻⃗퐶4 ++휃푟퐽4 +) += +⎡ +⎢ +⎢ +⎢⎣ +0 +−1 +0 +0 +0 +0 +−1 +0 +0 +0 +0 +−1 +−1 +0 +0 +0 +⎤ +⎥ +⎥ +⎥⎦ +. +Hence +푒−i 휋 +4 퐻푌 = 퐼푛 ⊗ +⎡ +⎢ +⎢ +⎢⎣ +0 +−1 +0 +0 +0 +0 +−1 +0 +0 +0 +0 +−1 +−1 +0 +0 +0 +⎤ +⎥ +⎥ +⎥⎦ +, +and, for ℎ = 0, 1, … , 푛 − 1, the vertex 4ℎ + 1 has perfect state transfer to 4ℎ + 4, 4ℎ + 3 and 4ℎ + 2 at time +휋 +4, 휋 +2 and 3휋 +4 , respectively. +If 푋 is obtained by orienting all edges in the (2푚 + 1)-cube from one bipartition to the other bipartition, +then its associated matrix has the form +퐻푋 = +[ +0 +i퐵 +−i퐵푇 +0 +] +. +Then 퐻푋 has the same spectrum as the adjacency matrix of the (undirected) (2푚 + 1)-cube, which consists +of only odd integers. Lemma 3.7 gives an oriented graph admitting multiple perfect state transfer for every integer +푚 ≥ 0. When 푚 = 0, 푌 is the oriented graph given in [11].
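The smallest instance of Lemma 3.7 can be verified numerically. Taking 푋 to be the oriented 퐾2, whose eigenvalues ±1 are odd, recovers the 8-vertex example of [11]; a sketch (NumPy/SciPy) checking that 푒−i(휋∕4)퐻푌 is a signed permutation transferring states inside each block of four vertices:

```python
import numpy as np
from scipy.linalg import expm

# Hermitian matrix of the directed 4-cycle, as displayed above.
HC4 = np.array([[0, -1j, 0, 1j],
                [1j, 0, -1j, 0],
                [0, 1j, 0, -1j],
                [-1j, 0, 1j, 0]])

# X = oriented K2: its eigenvalues +/-1 are odd, as Lemma 3.7 requires.
HX = np.array([[0, 1j], [-1j, 0]])
HY = np.kron(np.eye(2), HC4) + np.kron(HX, np.ones((4, 4)))

# At t = pi/4 the walk acts as a signed cyclic shift inside each block
# of four vertices {0,1,2,3} and {4,5,6,7}.
U = expm(-1j * (np.pi / 4) * HY)
mags = np.abs(U)

# Each row has exactly one unit-magnitude entry ...
signed_perm = (np.allclose(np.sort(mags, axis=1)[:, :-1], 0)
               and np.allclose(mags.max(axis=1), 1.0))
# ... located off-diagonal but within the same block of four.
in_block = all(np.argmax(mags[v]) // 4 == v // 4 and np.argmax(mags[v]) != v
               for v in range(8))
print(signed_perm, in_block)  # expect: True True
```

Iterating the same matrix gives the transfers at π/2 and 3π/4 claimed in the proof.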
+4 +Perfect state transfer in Hermitian graphs +We focus on Hermitian graphs with algebraic entries in the first part of this section. In particular, we study +the phase factors when perfect state transfer occurs in these graphs in Section 4.1. +Suppose 푋 is a Hermitian graph with algebraic entries. By Theorem 6.1 of [2] and Theorem 2.2, if +perfect state transfer from 푎 to 푏 occurs then the quantum walk on 푋 is periodic at both 푎 and 푏. Section 4.2 +gives examples of Hermitian graphs (with transcendental entries) in which perfect state transfer occurs from +푎 to 푏 but 푎 and 푏 are not periodic. +4.1 +Phase factor +We restrict our attention to Hermitian graphs with algebraic entries and extract information about the phase +factor when perfect state transfer occurs. +Let 퐻 be an algebraic Hermitian matrix. Its characteristic polynomial has algebraic coefficients. Given +spectral decomposition 퐻 = ∑ +푟 휃푟퐸푟, the eigenvalues 휃푟’s are algebraic so are the entries in 퐸푟. +Theorem 4.1. Let 퐻 be an algebraic matrix associated with a Hermitian graph with spectral decomposition +퐻 = ∑ +푟 휃푟퐸푟. If perfect state transfer occurs from 푎 to 푏 with phase factor 훼, then 훼 is algebraic if and only +if +휃푟 +휃푠 +∈ Q, +for 휃푟, 휃푠 ∈ Φ푎 such that 휃푠 ≠ 0. +Proof. Suppose perfect state transfer occurs from 푎 to 푏 at time 휏 with algebraic phase factor 훼. It follows +from Equation (2) that 푒−i휏휃푟 is algebraic, for 휃푟 ∈ Φ푎 = Φ푏. Applying the Gelfond-Schneider Theorem to +(푒−i휏휃푠) 휃푟 +휃푠 = 푒−i휏휃푟, +for 휃푟, 휃푠 ∈ Φ푎 with 휃푠 ≠ 0, we conclude that 휃푟 +휃푠 is rational. +Now suppose 휃푠 +휃푟 ∈ Q for 휃푟, 휃푠 ∈ Φ푎 with 휃푠 ≠ 0. Let 푞푟(푎, 푏) be the quarrels from 푎 to 푏 relative to +휃푟 ∈ Φ푎. It follows from Equation (3) that 푒i푞푟(푎,푏) is algebraic. Applying Equation (4) yields +훼 +( 휃푟 +휃푠 −1 +) += +( +푒i(푞푠(푎,푏)−휏휃푠)) 휃푟 +휃푠 푒i(휏휃푟−푞푟(푎,푏)) = +( +푒i푞푠(푎,푏)) 휃푟 +휃푠 푒−i푞푟(푎,푏). +The right-hand side is algebraic, so is 훼. +9 + +Theorem 4.2. 
Let 퐻 be an algebraic matrix associated with a Hermitian graph with spectral decomposition +퐻 = ∑ +푟 휃푟퐸푟. Suppose perfect state transfer occurs from 푎 to 푏 with phase factor 훼. If there exist integers +푘푟’s satisfying +∑ +푟∈Φ푎 +푘푟휃푟 = 0 +and +∑ +푟∈Φ푎 +푘푟 ≠ 0 +then 훼 is algebraic. +Proof. From Equation (4), we have +훼 +∑ +푟∈Φ푎 푘푟 = 푒 +−i휏 +(∑ +푟∈Φ푎 푘푟휃푟 +) ∏ +푟∈Φ푎 +(푒i푞푟(푎,푏))푘푟 = +∏ +푟∈Φ푎 +(푒i푞푟(푎,푏))푘푟 . +Since the right-hand side is algebraic and ∑ +푟∈Φ푎 푘푟 ≠ 0, we conclude that 훼 is algebraic. +We apply the theorem to algebraic Hermitian graphs where Φ푎 contains all eigenvalues of 퐻. +Corollary 4.3. Let 퐻 be an algebraic matrix associated with a Hermitian graph with zero diagonal entries. +Suppose perfect state transfer occurs from 푎 to 푏 with phase factor 훼. If 푎 has full eigenvalue support then 훼 +is algebraic. +Proof. Let 푘푟 be the multiplicity of 휃푟, for 휃푟 ∈ Φ푎. Since Φ푎 contains all eigenvalues of 퐻, we have +∑ +푟∈Φ푎 푘푟휃푟 = Tr(퐻) = 0 and ∑ +푟∈Φ푎 푘푟 equals the number of vertices. It follows from Theorem 4.2 that the +phase factor at perfect state transfer is algebraic. +Given spectral decomposition of an algebraic Hermitian matrix 퐻 = ∑ +푟 휃푟퐸푟, if 퐸푟 has constant diagonal +then every vertex has full eigenvalue support. In particular, Corollary 4.3 applies to +• the adjacency matrix of a walk regular graph, +• an algebraic Hermitian matrix with zero diagonal that belongs to a Bose-Mesner algebra, and +• Hermitian circulants with algebraic entries and zero diagonal. +4.2 +One-way perfect state transfer +We saw at the beginning of Section 4 that if perfect state transfer occurs from 푎 to 푏 in an algebraic Hermitian +graph then both 푎 and 푏 are periodic. In particular, there is perfect state transfer from 푏 back to 푎. +We give a family of Hermitian graphs, with transcendental entries, that have perfect state transfer from +푎 to 푏 but not periodic at 푎 nor 푏. In particular, they do not have perfect state transfer from 푏 to 푎. +Theorem 4.4. 
There exist infinitely many Hermitian graphs which admit perfect state transfer from 푎 to 푏 but are not periodic at 푎.

Proof. Let 휆 be any real number such that 휆 ∉ Q휋. Define matrices

푃 = (1∕2)
⎡ 1   1    1      1    ⎤
⎢ 1   1   −1     −1    ⎥
⎢ 1  −1    푒i휆  −푒i휆 ⎥
⎣ 1  −1   −푒i휆   푒i휆 ⎦

and

퐷 =
⎡ 0  0  0  0     ⎤
⎢ 0  휋  0  0     ⎥
⎢ 0  0  휆  0     ⎥
⎣ 0  0  0  휆 + 휋 ⎦.

Consider the Hermitian matrix

퐻 ∶= 푃 퐷푃 −1 = ((휋 + 휆)∕2) 퐼4 −
⎡ 0                   휆∕2                 (휋∕4)(1 + 푒−i휆)   (휋∕4)(1 − 푒−i휆) ⎤
⎢ 휆∕2                 0                   (휋∕4)(1 − 푒−i휆)   (휋∕4)(1 + 푒−i휆) ⎥
⎢ (휋∕4)(1 + 푒i휆)    (휋∕4)(1 − 푒i휆)    0                   휆∕2              ⎥
⎣ (휋∕4)(1 − 푒i휆)    (휋∕4)(1 + 푒i휆)    휆∕2                 0                ⎦.

Let 휃1 = 0, 휃2 = 휋, 휃3 = 휆 and 휃4 = 휆 + 휋. All vertices have full eigenvalue support. Vertices 1 and 3 are strongly cospectral with quarrels: 푞1(3, 1) = 0, 푞2(3, 1) = 휋, 푞3(3, 1) = 휆, and 푞4(3, 1) = 휆 + 휋. By Theorem 2.1, we have perfect state transfer from vertex 3 to 1 at time 휏 = 1 with phase factor 1. As 휆 is not a rational multiple of 휋, we have

(휃3 − 휃1)∕(휃2 − 휃1) = 휆∕휋 ∉ ℚ.

By Theorem 2.2, 퐻 is not periodic at vertex 1 nor at vertex 3.

Example 4.5. Consider the complex Hadamard matrix

푃 =
⎡  1    1    1         1        푖    푖     푖         푖       ⎤
⎢  1   −1    푒푖휃    −푒푖휃    −1    1   −푒푖휃     푒푖휃    ⎥
⎢  1    1    푒푖2휃    푒푖2휃   −푖   −푖   −푖푒푖2휃  −푖푒푖2휃 ⎥
⎢  1   −1    푒푖3휃   −푒푖3휃    1   −1    푒푖3휃   −푒푖3휃  ⎥
⎢  푖    푖   −푖       −푖      −1   −1    1         1       ⎥
⎢ −푖    푖    푖푒푖휃   −푖푒푖휃    푖   −푖   −푖푒푖휃    푖푒푖휃  ⎥
⎢  푖    푖   −푖푒푖2휃 −푖푒푖2휃   1    1   −푒푖2휃   −푒푖2휃  ⎥
⎣ −푖    푖    푖푒푖3휃 −푖푒푖3휃  −푖    푖    푖푒푖3휃  −푖푒푖3휃 ⎦

and diagonal matrix 퐷 = diag(0, 휋, 휃, 휃 + 휋, 휋∕2, 3휋∕2, 휃 + 휋∕2, 휃 + 3휋∕2). Then the Hermitian graph 푋 with matrix 퐻 = 푃 퐷푃 −1 admits perfect state transfer from vertex 1 to 2 at 푡 = 1, from vertex 1 to 3 at 푡 = 2, and from vertex 1 to 4 at 푡 = 3. Each vertex has full eigenvalue support, and if 휃 ∉ Q휋, then the ratio condition is not satisfied and 푋 is not periodic at any vertex.
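The construction of Theorem 4.4 is easy to probe numerically. The sketch below transcribes the 4 × 4 matrices 푃 and 퐷 from the proof (as printed in the extracted text, so treat the transcription as an assumption), picks 휆 = 1 (any real 휆 with 휆∕휋 irrational would do), and forms 푈(1) = 푃 푒^{−i퐷}푃∗. It confirms that at 푡 = 1 all the amplitude starting at one of the two strongly cospectral vertices arrives at the other with phase factor 1, while the reverse entry of 푈(1) at that same time has magnitude |cos 휆| < 1; which vertex is the source depends only on the row/column convention for 푈(푡).

```python
import cmath

lam = 1.0  # any real number that is not a rational multiple of pi

e = cmath.exp(1j * lam)
# Columns of P are the eigenvectors from the proof of Theorem 4.4 (transcribed above).
P = [[0.5, 0.5, 0.5, 0.5],
     [0.5, 0.5, -0.5, -0.5],
     [0.5, -0.5, 0.5 * e, -0.5 * e],
     [0.5, -0.5, -0.5 * e, 0.5 * e]]
D = [0.0, cmath.pi, lam, lam + cmath.pi]  # eigenvalues 0, pi, lam, lam + pi

def U(t):
    # U(t) = P exp(-i t D) P^*  (P is unitary, so P^{-1} is the conjugate transpose).
    return [[sum(P[r][k] * cmath.exp(-1j * t * D[k]) * P[c][k].conjugate()
                 for k in range(4)) for c in range(4)] for r in range(4)]

U1 = U(1.0)
# Perfect transfer between vertices 1 and 3 (0-indexed 0 and 2) at t = 1, phase 1 ...
assert abs(U1[2][0] - 1) < 1e-12
# ... while the opposite entry of U(1) is not unimodular, consistent with the
# one-way behaviour established in the theorem (|U1[0][2]| = |cos(lam)|).
assert abs(U1[0][2]) < 0.99
```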
+5 +Multiple pretty good state transfer +Theorem 4.4 shows that it is possible to have one-way perfect state transfer in Hermitian graph. We now +show that pretty good state transfer in Hermitian graphs goes both ways. +Lemma 5.1. If a Hermitian graph admits pretty good state transfer from 푎 to 푏, then it has pretty good state +transfer from 푏 to 푎. +Proof. Suppose 푈(푡) is the transition matrix of a Hermitian graph that has pretty good state transfer from +푎 to 푏. Then, for 휀 > 0, there exists a time 휏1 such that 푈(휏1)푒푎 = 훾1푒푏 + 휌1, for some phase factor 훾1 and +vector 휌1 with ‖휌1‖ < 휀 +2. +As 푈(푡) is almost periodic, there exists 휏2 > 휏1 such that 푈(휏2)푒푎 = 훾2푒푎 + 휌2, for some phase factor 훾2 +and some vector 휌2 with ‖휌2‖ < 휀 +2. We have +푈(휏2 − 휏1)푒푏 = 훾1푈(휏2) +( +푒푎 − 푈(−휏1)휌1 +) += 훾1 +( +훾2푒푎 + 휌2 − 푈(휏2 − 휏1)휌1 +) +. +11 + +Hence +‖푈(휏2 − 휏1)푒푏 − 훾1훾2푒푎‖ = ‖휌2 − 푈(휏2 − 휏1)휌1‖ ≤ ‖휌1‖ + ‖휌2‖ < 휀 +and there is pretty good state transfer from 푏 to 푎. +In [5], Zimborás et al. assign a complex weight 푒i훽 to an edge in the following graph and use the weight +to control the fidelity at 푏 and 푐 with initial state 푒푎. +푒i훽 +푎 +푏 +푐 +This graph can be viewed as the rooted product of the weighted 퐾3 with a path. Given a graph 푋 on 푛 vertices +and a rooted graph 푌 with root 푎. The rooted product of 푋 and 푌 , 푋◦푌 , is obtained by taking 푛 isomorphic +copies of 푌 and identifying the 푗-th vertex of 푋 with the root of the 푗-th copy of 푌 . In this section, we give +two families of rooted products that have multiple pretty good state transfer. +5.1 +Oriented 3-cycle rooted with a star +In [3], Fan and Godsil show that the double star, the rooted product of 퐾2 and 퐾1,푚, has pretty good state +transfer between the two non-pendant vertices if and only if 4푚 + 1 is not a perfect square. Note that 퐾2 is +the only simple undirected graph with universal perfect state transfer. 
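Before turning to the rooted-product construction, the oriented 3-cycle's universal perfect state transfer can be seen concretely. The sketch below (pure Python; vertices are 0-indexed, and the helper names are ours) diagonalizes the Hermitian matrix of ⃖⃖⃗퐾3 with the discrete Fourier vectors and checks that every ordered pair of vertices is joined by perfect state transfer at time 2휋∕(3√3) or at twice that time.

```python
import cmath, math

i = 1j
H = [[0, -i, i], [i, 0, -i], [-i, i, 0]]  # Hermitian matrix of the oriented 3-cycle

zeta = cmath.exp(2j * cmath.pi / 3)
vecs = [[(zeta ** (m * a)) / math.sqrt(3) for a in range(3)] for m in range(3)]
thetas = [2 * math.sin(2 * math.pi * m / 3) for m in range(3)]  # 0, sqrt(3), -sqrt(3)

# Sanity check: each Fourier vector is an eigenvector of H.
for m in range(3):
    for a in range(3):
        Hv = sum(H[a][b] * vecs[m][b] for b in range(3))
        assert abs(Hv - thetas[m] * vecs[m][a]) < 1e-12

def U(t):
    # U(t) = sum_r exp(-i theta_r t) E_r with E_r the rank-one spectral idempotents.
    return [[sum(cmath.exp(-1j * thetas[m] * t) * vecs[m][a] * vecs[m][b].conjugate()
                 for m in range(3)) for b in range(3)] for a in range(3)]

t0 = 2 * math.pi / (3 * math.sqrt(3))
# At t0 the walk is a cyclic permutation of the three vertices, and at 2*t0 the
# inverse permutation, so every ordered pair admits perfect state transfer.
assert all(abs(abs(U(t0)[(a + 1) % 3][a]) - 1) < 1e-12 for a in range(3))
assert all(abs(abs(U(2 * t0)[(a + 2) % 3][a]) - 1) < 1e-12 for a in range(3))
```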
We extend their result to the rooted +product of the oriented 3-cycle ⃖⃖⃗ +퐾3 with ̂ +퐾1,푚, where ̂ +퐾1,푚 denotes the star 퐾1,푚 with the non-pendant vertex +being its root. +푐 +푎 +푏 +Lemma 5.2. Suppose 푎 and 푏 are strongly cospectral vertices in the Hermitian graph 푋 on 푛 ≥ 2 vertices. +Then they are strongly cospectral in the rooted product 푋◦ ̂ +퐾1,푚. +Proof. Let 퐻푋 be the Hermitian matrix associated with 푋 with spectral decomposition 퐻푋 = ∑푑 +푟=1 휃푟퐸푟 . +Then the matrix associated with the rooted product 푌 = 푋◦ ̂ +퐾1,푚 is +퐻푌 = +⎡ +⎢ +⎢ +⎢ +⎢ +⎢⎣ +1 +0 +0 +⋯ +0 +0 +0 +0 +⋯ +0 +⋮ +⋮ +⋮ +⋱ +⋮ +0 +0 +0 +⋯ +0 +0 +0 +0 +⋯ +0 +⎤ +⎥ +⎥ +⎥ +⎥ +⎥⎦ +⊗ 퐻푋 + +⎡ +⎢ +⎢ +⎢ +⎢ +⎢⎣ +0 +1 +1 +⋯ +1 +1 +0 +0 +⋯ +0 +⋮ +⋮ +⋮ +⋱ +⋮ +1 +0 +0 +⋯ +0 +1 +0 +0 +⋯ +0 +⎤ +⎥ +⎥ +⎥ +⎥ +⎥⎦ +⊗ 퐼푛. +12 + +For 푟 = 1, … , 푑, define +휆± +푟 = +휃푟 ± +√ +휃2 +푟 + 4푚 +2 +, +and +퐹 ± +푟 = +1 +(휆± +푟 )2 + 푚 +⎡ +⎢ +⎢ +⎢ +⎢ +⎢⎣ +(휆± +푟 )2 +휆± +푟 +휆± +푟 +⋯ +휆± +푟 +휆± +푟 +1 +1 +⋯ +1 +⋮ +⋮ +⋮ +⋱ +⋮ +휆± +푟 +1 +1 +⋯ +1 +휆± +푟 +1 +1 +⋯ +1 +⎤ +⎥ +⎥ +⎥ +⎥ +⎥⎦ +⊗ 퐸푟. +Define +퐹0 = +⎡ +⎢ +⎢⎣ +0 +ퟎ푚 +ퟎ푚 +퐼푚 − 1 +푚퐽푚 +⎤ +⎥ +⎥⎦ +⊗ 퐼푛. +Then 퐻푌 has spectral decomposition +퐻푌 = 0 ⋅ 퐹0 + +푑 +∑ +푟=1 +(휆+ +푟 ⋅ 퐹 + +푟 + 휆− +푟 ⋅ 퐹 − +푟 +) . +(10) +Note that the (1, 1)-block are indexed by the vertices in 푋 and the eigenvalue 0 is not in the support of 푎 nor +푏. The result follows from the (1, 1)-block of 퐹 + +푟 and 퐹 − +푟 being non-zero scalar multiple of 퐸푟. +Corollary 5.3. Suppose 푋 is a Hermitian graph with universal perfect state transfer with spectrum Φ. Let +푆 be the set of non-pendant vertices in 푋◦ ̂ +퐾1,푚. Let +Ψ = +{ +휃 ± +√ +휃2 + 4푚 +2 +||| 휃 ∈ Φ +} +. +If Ψ is linearly independent over Q, then 푋◦ ̂ +퐾1,푚 has multiple pretty good state transfer on 푆. +Proof. For 푎, 푏 ∈ 푆, there is perfect state transfer between 푎 and 푏 in 푋, so 푎 and 푏 are strongly cospectral in +푋◦ ̂ +퐾1,푚 by Lemma 5.2. We see in Equation (10) that Ψ is the eigenvalue support of 푎 in the rooted product. 
+It follows from Theorem 2.4 that pretty good state transfer occurs between 푎 and 푏 in 푋◦ ̂ +퐾1,푚. +In the following result, we focus on 푋 = ⃖⃖⃗ +퐾3 which has spectral decomposition +⎡ +⎢ +⎢⎣ +0 +−i +i +i +0 +−i +−i +i +0 +⎤ +⎥ +⎥⎦ += 0 ⋅ 1 +3퐽3 + +√ +3 ⋅ 1 +3 +⎡ +⎢ +⎢⎣ +1 +푒−2휋i∕3 +푒2휋i∕3 +푒2휋i∕3 +1 +푒−2휋i∕3 +푒−2휋i∕3 +푒2휋i∕3 +1 +⎤ +⎥ +⎥⎦ +− +√ +3 ⋅ 1 +3 +⎡ +⎢ +⎢⎣ +1 +푒2휋i∕3 +푒−2휋i∕3 +푒−2휋i∕3 +1 +푒2휋i∕3 +푒2휋i∕3 +푒−2휋i∕3 +1 +⎤ +⎥ +⎥⎦ +. +Hence any two vertices in ⃖⃖⃗ +퐾3 are strongly cospectral. Let 푉 (⃖⃖⃗ +퐾3) = {푎, 푏, 푐}. Then the eigenvalue support +of 푎 in ⃖⃖⃗ +퐾3◦ ̂ +퐾1,푚 are 휆1 = +√ +푚, 휆2 = − +√ +푚, +휆3 = +√ +3 + +√ +3 + 4푚 +2 +, +휆4 = +√ +3 − +√ +3 + 4푚 +2 +, +휆5 = − +√ +3 + +√ +3 + 4푚 +2 +and +휆6 = − +√ +3 − +√ +3 + 4푚 +2 +. +13 + +From Equation (10), the quarrels in ⃖⃖⃗ +퐾3◦ ̂ +퐾1,푚 are +푞푟(푎, 푏) = +⎧ +⎪ +⎨ +⎪⎩ +0 +if 푟 = 1, 2, +2휋 +3 +if 푟 = 3, 4, and +−2휋 +3 +if 푟 = 5, 6. +Theorem 5.4. The rooted product ⃖⃖⃗ +퐾3◦ ̂ +퐾1,푚 admits multiple pretty good state transfer on the set {푎, 푏, 푐} of +non-pendant vertices if and only if one of the following holds. +1. gcd(3, 푚) = 1. +2. 푚 = 3푠, for some integer 푠 such that neither 푠 nor 4푠 + 1 are perfect square. +3. 푚 = 27푘2, for some integer 푘. +4. 푚 = 27푘2 + 27푘 + 6, for some integer 푘. +Proof. Since ⃖⃖⃗ +퐾3◦ ̂ +퐾1,푚 has an automorphism that maps 푎 to 푏, 푏 to 푐 and 푐 to 푎, it is sufficient to prove that +there is pretty good state transfer from 푎 to 푏 in the rooted product. +By Lemma 5.2, Condition (i) of Theorem 2.4 holds. For Condition (ii) of Theorem 2.4, we consider +integers 푙1, … , 푙6 satisfying +6 +∑ +푟=1 +푙푟휆푟 = +( +푙1 − 푙2 +) √ +푚 + +(푙3 + 푙4 − 푙5 − 푙6 +2 +) √ +3 + +(푙3 − 푙4 + 푙5 − 푙6 +2 +) √ +3 + 4푚 = 0. +(11) +Case 1: If gcd(3, 푚) = 1 then the set { +√ +3, +√ +푚, +√ +3 + 4푚} is linearly independent over Q. Equation (11) +implies (푙3 + 푙4 − 푙5 − 푙6)∕2 = 0 and +6 +∑ +푟=1 +푙푟푞푟(푎, 푏) = +( +푙3 + 푙4 − 푙5 − 푙6 +) 2휋 +3 = 0 +(mod 2휋). 
+(12) +Condition (ii) of Theorem 2.4 holds with 훿 = 0, so there is pretty good state transfer from 푎 to 푏 in +⃖⃖⃗ +퐾3◦ ̂ +퐾1,푚. +Case 2: When 푚 = 3푠, Equation (11) becomes +( +푙1 − 푙2 +) √ +푠 + +(푙3 + 푙4 − 푙5 − 푙6 +2 +) ++ +(푙3 − 푙4 + 푙5 − 푙6 +2 +) √ +1 + 4푠 = 0. +If 푠 and 4푠 + 1 are not perfect squares then {1, +√ +푠, +√ +1 + 4푠} is linearly independent over Q and +Equation (11) implies Equation (12). Hence there is pretty good state transfer from 푎 to 푏. +Case 3: Suppose 푚 = 3ℎ2, for some integer ℎ. Then 4ℎ2 + 1 is not a perfect square, and Equation (11) +becomes +(2ℎ(푙1 − 푙2) + 푙3 + 푙4 − 푙5 − 푙6 +2 +) ++ +(푙3 − 푙4 + 푙5 − 푙6 +2 +) √ +4ℎ2 + 1 = 0, +14 + +which implies 푙3 + 푙4 − 푙5 − 푙6 = −2ℎ(푙1 − 푙2). If ℎ = 3푘, for some integer 푘, then Equation (12) +holds and pretty good state transfer occurs from 푎 to 푏. +Suppose ℎ is not divisible by 3. Equation (11) holds when 푙1 = 푙2 = 푙4 = 푙5 = 0 and 푙3 = 푙6 = 1. +Since +6 +∑ +푟=1 +푙푟 +(푞푟(푎, 푏) + 훿) = 2훿, +Equation (7) holds if and only if 훿 ∈ Z휋. +Equation (11) also holds when 푙1 = 1, 푙2 = 푙3 = 푙4 = 0, 푙5 = 푙6 = ℎ, but +6 +∑ +푟=1 +푙푟(푞푟(푎, 푏) + 훿) = −4ℎ휋 +3 ++ (2ℎ + 1)훿 ≠ 0 +(mod 2휋) +when 훿 ∈ Z휋. We conclude that pretty good state transfer from 푎 to 푏 does not occur. +Case 4: Suppose 푚 = 3푠 with 4푠 + 1 = ℎ2, for some integer ℎ. Then 푠 is not a perfect square, and Equa- +tion (11) becomes +(푙1 − 푙2) +√ +푠 + (푙3 + 푙4 − 푙5 − 푙6) + ℎ(푙3 − 푙4 + 푙5 − 푙6) +2 += 0, +which implies 푙3 + 푙4 − 푙5 − 푙6 = −ℎ(푙3 − 푙4 + 푙5 − 푙6). If ℎ is divisible by 3 then Equation (12) +holds and pretty good state transfer occurs from 푎 to 푏. In this case, 푚 = 27푘2 + 27푘 + 6 if we write +4푠 + 1 = 32(2푘 + 1)2. +If ℎ is not divisible by 3, Equation (11) holds when 푙1 = 푙2 = 푙4 = 푙5 = 0, 푙3 = 푙6 = 1 and when +푙1 = 푙2 = 0, 푙3 = 푙4 = ℎ, 푙5 = −1 and 푙6 = 1. Using the same argument as in the previous case, we +see that there does not exist 훿 satisfying Equation (7) for both assignments for the 푙푗’s. 
We conclude +that pretty good state transfer from 푎 to 푏 does not occur. +5.2 +Circulants rooted with a looped path +In [4], Kempton et al. show that a path with a loop on each end-vertex with transcendental weight 훾 has +pretty good state transfer between the two end-vertices. We use 푃 훾 +푚 to denote the rooted path on vertices +{1, 2, … , 푚} that has root 푚 and a loop on vertex 1 with weight 훾. Then the path of length 2푚 − 1 with a +loop of weight 훾 on each end-vertex studied in [4] can be viewed as the rooted product of 퐾2 with 푃 훾 +푚. +Path 푃 훾 +푚 rooted at 푚 with a loop at 1 +1 +2 +푚 +훾 +15 + +We extend their result to the rooted product 푋◦푃 훾 +푚 where 푋 is Hermitian circulant with rational eigen- +values that admits universal perfect state transfer. Orthogonal polynomials and field trace are the main tools +used in this section. Please see Chapter 8 of [17] for the background of orthogonal polynomials, and see [4] +and Chapter 14 of [18] for some basic facts on field trace. +Suppose 푉 (푋) = {푥0, 푥1, … , 푥푛−1}. Then we label the vertices of 푋◦푃 훾 +푚 with the ordered pair (푥ℎ, 푗) +denoting the 푗-th vertex on 푃 훾 +푚 that is rooted at 푥ℎ in 푋, for ℎ = 0, 1, … , 푛 − 1 and 푗 = 1, … , 푚. +(푥0, 푚) +(푥1, 푚) +(푥2, 푚) +(푥0, 1) +(푥0, 2) +훾 +(푥1, 1) +(푥1, 2) +훾 +(푥2, 1) +(푥2, 2) +훾 +The rooted product of ⃖⃖⃗ +퐾3 with 푃 훾 +푚 +Let 퐻푋 be the matrix of the Hermitian circulant 푋 with universal perfect state transfer. It follows from +Theorem 8 of [1] that the eigenvalues of 퐻푋 are simple. Given distinct eigenvalues 휃0, 휃1, … , 휃푛−1 of 퐻푋 +and the discrete Fourier matrix of order 푛 +퐹푛 = +1 +√ +푛 +⎡ +⎢ +⎢ +⎢ +⎢ +⎢⎣ +1 +1 +1 +⋯ +1 +1 +휁 +휁2 +⋯ +휁푛−1 +1 +휁2 +휁4 +⋯ +휁2(푛−1) +⋮ +⋮ +⋮ +⋱ +⋮ +1 +휁푛−1 +휁2(푛−1) +⋯ +휁(푛−1)2 +⎤ +⎥ +⎥ +⎥ +⎥ +⎥⎦ +where 휁 = 푒2휋i∕푛, we can write +퐻푋 = 퐹푛 +⎡ +⎢ +⎢ +⎢⎣ +휃0 +0 +⋯ +0 +0 +휃1 +⋯ +0 +⋮ +⋮ +⋱ +⋮ +0 +0 +⋯ +휃푛−1 +⎤ +⎥ +⎥ +⎥⎦ +퐹 ∗ +푛 . 
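Since any Hermitian circulant is diagonalized by 퐹푛 as above, small instances are easy to test numerically. The sketch below takes 푛 = 3 and 휃푗 = 푗 (our choice, corresponding to 훼 = 0, 훽 = 1, ℎ = 1 and every 푐푗 = 0 in the characterization of Theorem 5.5 below), builds 퐻 = 퐹3 diag(0, 1, 2) 퐹3∗, and checks that the resulting circulant is Hermitian and that its walk is a vertex permutation at 푡 = 2휋∕3, so state transfer between all pairs of vertices follows.

```python
import cmath, math

n = 3
zeta = cmath.exp(2j * cmath.pi / n)
F = [[zeta ** (a * b) / math.sqrt(n) for b in range(n)] for a in range(n)]
# theta_j = alpha + beta*(j*h + c_j*n) with alpha = 0, beta = 1, h = 1, c_j = 0.
thetas = [0.0, 1.0, 2.0]

def U(t):
    # U(t) = F diag(exp(-i theta_j t)) F^*
    return [[sum(F[a][j] * cmath.exp(-1j * thetas[j] * t) * F[b][j].conjugate()
                 for j in range(n)) for b in range(n)] for a in range(n)]

# H = F diag(thetas) F^* is a Hermitian circulant (constant along wrapped diagonals).
H = [[sum(F[a][j] * thetas[j] * F[b][j].conjugate() for j in range(n)) for b in range(n)]
     for a in range(n)]
assert all(abs(H[a][b] - H[b][a].conjugate()) < 1e-12 for a in range(n) for b in range(n))
assert all(abs(H[a][b] - H[(a + 1) % n][(b + 1) % n]) < 1e-12
           for a in range(n) for b in range(n))

# At t = 2*pi/3 the walk is exactly a cyclic shift of the vertices; its powers reach
# every vertex from every other vertex, so perfect state transfer here is universal.
t0 = 2 * math.pi / n
assert all(abs(abs(U(t0)[(a + 1) % n][a]) - 1) < 1e-12 for a in range(n))
assert all(abs(abs(U(2 * t0)[(a + 2) % n][a]) - 1) < 1e-12 for a in range(n))
```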
+For 0 ≤ 푎, 푏 ≤ 푛 − 1, the vertices 푥푎 and 푥푏 are strongly cospectral with quarrel +푞푗(푥푎, 푥푏) = 2휋푗(푏 − 푎) +푛 +, +(13) +for 푗 = 0, 1, … , 푛 − 1. +Theorem 22 of [1] gives the following characterization of Hermitian circulants that have universal perfect +state transfer. +Theorem 5.5. Let 푋 be a Hermitian circulant on 푛 vertices with simple eigenvalues 휃0, … , 휃푛−1. Then 푋 +has universal perfect state transfer if and only if there exist 훼, 훽 ∈ R with 훽 > 0, 푐0, … , 푐푛−1 ∈ Z and integer +ℎ coprime with 푛 such that +휃푗 = 훼 + 훽 (푗ℎ + 푐푗푛) , +for 푗 = 0, … , 푛 − 1. +16 + +To determine the spectrum of 푍 = 푋◦푃 훾 +푚, we consider the 푚 × 푚 Jacobi matrices +푇푗 ∶= +⎡ +⎢ +⎢ +⎢ +⎢ +⎢ +⎢⎣ +훾 +1 +0 +⋯ +0 +0 +1 +0 +1 +⋯ +0 +0 +0 +1 +0 +⋯ +0 +0 +⋮ +⋮ +⋮ +⋱ +⋮ +⋮ +0 +0 +0 +⋯ +0 +1 +0 +0 +0 +⋯ +1 +휃푗 +⎤ +⎥ +⎥ +⎥ +⎥ +⎥ +⎥⎦ +, +for 푗 = 0, 1, … , 푛 − 1. +(14) +Let 휑푗,0 = 1 and let 휑푗,푟(푡) be the characteristic polynomial of the 푟-th leading principal submatrix of 푇푗, for +푟 = 1, … , 푚. Then 휑푗,0(푡), 휑푗,1(푡), … , 휑푗,푚(푡) is a sequence of orthogonal polynomials satisfying 휑푗,0(푡) = 1, +휑푗,1(푡) = 푡 − 훾, +휑푗,푟(푡) = 푡 휑푗,푟−1(푡) − 휑푗,푟−2(푡) +(15) +for 푟 = 2, … , 푚 − 1, and +휑푗,푚(푡) = (푡 − 휃푗 +) 휑푗,푚−1(푡) − 휑푗,푚−2(푡). +(16) +From Lemma 8.5.2 of [17], the roots 휆푗,1, … , 휆푗,푚 of 휑푗,푚(푡) = 0 are the eigenvalues of 푇푗. Further, +Φ푗,푠 = +[1 +휑푗,1(휆푗,푠) +… +휑푗,푚−1(휆푗,푠)]푇 +is an eigenvector of 푇푗 corresponding to eigenvalue 휆푗,푠, for 푠 = 1, … , 푚. It follows from Lemma 8.1.1 of +[17] that the eigenvalues of 푇푗 are simple. It is also known that consecutive orthogonal polynomials do not +have non-trivial common factor. +The Hermitian matrix of 푍 is +퐻푍 = +⎡ +⎢ +⎢ +⎢ +⎢ +⎢⎣ +0 +0 +⋯ +0 +0 +0 +0 +⋯ +0 +0 +⋮ +⋮ +⋱ +⋮ +⋮ +0 +0 +⋯ +0 +0 +0 +0 +⋯ +0 +1 +⎤ +⎥ +⎥ +⎥ +⎥ +⎥⎦ +⊗ 퐻푋 + +⎡ +⎢ +⎢ +⎢ +⎢ +⎢⎣ +훾 +1 +⋯ +0 +0 +1 +0 +⋯ +0 +0 +⋮ +⋮ +⋱ +⋮ +⋮ +0 +0 +⋯ +0 +1 +0 +0 +⋯ +1 +0 +⎤ +⎥ +⎥ +⎥ +⎥ +⎥⎦ +⊗ 퐼푛. 
(17)
Since 퐻푋퐹푛푒푗 = 휃푗퐹푛푒푗, we have

퐻푍 (Φ푗,푠 ⊗ 퐹푛푒푗) = 휆푗,푠 (Φ푗,푠 ⊗ 퐹푛푒푗)   (18)

for 푗 = 0, … , 푛 − 1 and 푠 = 1, … , 푚.

Lemma 5.6. Let 푋 be a Hermitian circulant with distinct eigenvalues 휃0, 휃1, … , 휃푛−1 and let 퐹푛, 휆푗,푠, and Φ푗,푠 be defined as above. For 푗 = 0, … , 푛 − 1 and 푠 = 1, … , 푚, 휆푗,푠 is a simple eigenvalue of the Hermitian graph 푍 defined in Equation (17), with spectral decomposition

퐻푍 = ∑_{푗=0}^{푛−1} ∑_{푠=1}^{푚} 휆푗,푠 (1∕‖Φ푗,푠‖²) (Φ푗,푠Φ∗푗,푠) ⊗ ((퐹푛푒푗)(퐹푛푒푗)∗).

For 푥푎, 푥푏 ∈ 푉 (푋) and ℎ = 1, … , 푚, the vertices (푥푎, ℎ) and (푥푏, ℎ) are strongly cospectral in 푍 with quarrel corresponding to eigenvalue 휆푗,푠 being

푞푗,푠((푥푎, ℎ), (푥푏, ℎ)) = 2휋푗(푏 − 푎)∕푛,

for 푗 = 0, … , 푛 − 1 and 푠 = 1, … , 푚.

Proof. It is sufficient to show that the eigenvalues 휆푗,푠 of 푍, for 푗 = 0, … , 푛 − 1 and 푠 = 1, … , 푚, are distinct. Suppose 휆푗1,푠1 = 휆푗2,푠2. From Equation (15), we have

휑푗1,푟(휆푗1,푠1) = 휑푗2,푟(휆푗2,푠2),   for 푟 = 1, … , 푚 − 1.

From Equation (16), 휑푗1,푚(휆푗1,푠1) = 휑푗2,푚(휆푗2,푠2) = 0 implies 휃푗1 = 휃푗2 and 푗1 = 푗2. Since 휑푗1,푚(푡) = 0 has 푚 distinct roots, we conclude that 푠1 = 푠2.
We get the quarrels of 푍 directly from Equations (18) and (13).

For the rest of this section, we assume that 훾 is transcendental and 휃0, 휃1, … , 휃푛−1 ∈ Q as in Theorem 5.8. Applying Laplace expansion along the first two rows of 푇푗 in Equation (14) gives

휑푗,푚(푡) = (푡 − 훾)푔푚−1(푡) − 푔푚−2(푡),

where 푔푚−1(푡) is the characteristic polynomial of the (푚 − 1) × (푚 − 1) Jacobi matrix

⎛ 휃푗  1  ⋯  0  0 ⎞
⎜ 1   0  ⋯  0  0 ⎟
⎜ ⋮   ⋮  ⋱  ⋮  ⋮ ⎟
⎜ 0   0  ⋯  0  1 ⎟
⎝ 0   0  ⋯  1  0 ⎠

and 푔푚−2(푡) is the characteristic polynomial of its (푚 − 2)-th leading principal submatrix. Now 푔푚−1(푡) and 푔푚−2(푡) are consecutive orthogonal polynomials, so they do not have any common factor of positive degree. Since 푔푚−1(푡) and 푔푚−2(푡) are rational polynomials and 훾 is transcendental, we conclude that 휑푗,푚(푡) is irreducible over Q(훾).
Then the splitting field 퐹푗 of 휑푗,푚(푡) is a Galois extension over Q(훾).
Given a Galois extension 퐸∕퐾, we use Tr퐸∕퐾(휇) to denote the trace of 휇 from 퐸 to 퐾. Here are some properties of the trace map useful for the proof of Theorem 5.8.

Theorem 5.7. Let 퐸∕퐾 be a Galois extension. The following properties hold.
i. For 휇 ∈ 퐸, Tr퐸∕퐾(휇) ∈ 퐾.
ii. For 휇 ∈ 퐾, Tr퐸∕퐾(휇) = [퐸 ∶ 퐾]휇.
iii. For 휇1, 휇2 ∈ 퐸, Tr퐸∕퐾(휇1 + 휇2) = Tr퐸∕퐾(휇1) + Tr퐸∕퐾(휇2).
iv. If 퐾 ⊂ 퐹 ⊂ 퐸 are extension fields, then Tr퐸∕퐾(휇) = Tr퐹∕퐾(Tr퐸∕퐹(휇)).
v. If the minimal polynomial of 휇 ∈ 퐸 over 퐾 is 푡푚 + 푎푚−1푡푚−1 + ⋯ + 푎0 then

Tr퐸∕퐾(휇) = −([퐸 ∶ 퐾]∕푚) 푎푚−1.

The eigenvalue 휆푗,푠 of 푋◦푃 훾 푚 has minimal polynomial 휑푗,푚(푡) over Q(훾). Applying Property (v) to 휆푗,푠 ∈ 퐹푗, Equation (16) gives

Tr퐹푗∕Q(훾)(휆푗,푠) = ([퐹푗 ∶ Q(훾)]∕푚)(훾 + 휃푗).   (19)

Consider the smallest extension field 푀 of 퐹푗 that contains 퐹0, … , 퐹푛−1. For 푗 = 0, … , 푛 − 1, 푀∕퐹푗 is a Galois extension. It follows from Properties (ii) and (iv) and Equation (19) that

Tr푀∕Q(훾)(휆푗,푠) = Tr퐹푗∕Q(훾)([푀 ∶ 퐹푗]휆푗,푠) = [푀 ∶ 퐹푗]([퐹푗 ∶ Q(훾)]∕푚)(훾 + 휃푗) = ([푀 ∶ Q(훾)]∕푚)(훾 + 휃푗).   (20)

Theorem 5.8. Let 푋 be a Hermitian circulant on 푛 vertices that admits universal perfect state transfer with eigenvalues given in Theorem 5.5. If 휃0, … , 휃푛−1 ∈ Q and 훾 is transcendental then, for any positive integer 푚, the rooted product 푋◦푃 훾 푚 has multiple pretty good state transfer on the set {(푥0, ℎ), (푥1, ℎ), … , (푥푛−1, ℎ)}, for 1 ≤ ℎ ≤ 푚.

Proof. For ℎ = 1, … , 푚, 푋◦푃 훾 푚 has an automorphism that maps (푥푎, ℎ) to (푥푎+1, ℎ), for 푎 ∈ Z푛. It is sufficient to show that there is pretty good state transfer from (푥0, ℎ) to (푥1, ℎ). By Lemma 5.6, (푥0, ℎ) and (푥1, ℎ) are strongly cospectral with quarrels

푞푗,푠((푥0, ℎ), (푥1, ℎ)) = 2휋푗∕푛,

for 푗 = 0, … , 푛 − 1 and 푠 = 1, … , 푚.
To show that Condition (ii) of Theorem 2.4 holds, consider integers 푙푗,푠's satisfying

∑_{푗=0}^{푛−1} ∑_{푠=1}^{푚} 푙푗,푠휆푗,푠 = 0.
+(21) +We apply the trace from 푀 to Q(훾) to both sides. Applying Theorem 5.7 (iii) and Equation (20), Equa- +tion (21) implies +푛−1 +∑ +푗=0 +푚 +∑ +푠=1 +푙푗,푠(훾 + 휃푗) = 훾 +(푛−1 +∑ +푗=0 +푚 +∑ +푠=1 +푙푗,푠 +) ++ +푛−1 +∑ +푗=0 +휃푗 +( 푚 +∑ +푠=1 +푙푗,푠 +) += 0. +Since 훾 is transcendental and ∑ +푗 휃푗 +(∑ +푠 푙푗,푠 +) ∈ Q, Equation (21) is equivalent to +푛−1 +∑ +푗=0 +푚 +∑ +푠=1 +푙푗,푠 = 0 +(22) +and +푛−1 +∑ +푗=0 +휃푗 +( 푚 +∑ +푠=1 +푙푗,푠 +) += 0. +(23) +Recall 휃푗 = 훼 + 훽(푗ℎ + 푐푗푛) where gcd(ℎ, 푛) = 1. Equations (22) and (23) imply +푛−1 +∑ +푗=0 +(푗ℎ + 푐푗푛) +( 푚 +∑ +푠=1 +푙푗,푠 +) += 0. +Since gcd(ℎ, 푛) = 1, we have +푛−1 +∑ +푗=0 +푗 +푚 +∑ +푠=1 +푙푗,푠 = 0 +(mod 푛). +If Equations (22) and (23) hold then, for any 훿 ∈ R, +푛−1 +∑ +푗=0 +푚 +∑ +푠=1 +푙푗,푠 +( +푞푗,푠 +( +(푥0, ℎ), (푥1, ℎ) +) ++ 훿 +) += 2휋 +푛 +(푛−1 +∑ +푗=0 +푗 +푚 +∑ +푠=1 +푙푗,푠 +) ++ 훿 +(푛−1 +∑ +푗=0 +푚 +∑ +푠=1 +푙푗,푠 +) += 0 +(mod 2휋). +By Theorem 2.4, pretty good state transfer occurs from (푥0, ℎ) to (푥1, ℎ), for ℎ = 1, … , 푚. +19 + +Remark 5.9. +• Putting a transcendental weight 훾 on the loops is sufficient for 휑0,푚(푡), … , 휑푛−1,푚(푡) to be irreducible +over Q(훾). Theorem 5.8 holds for irrational number 훾 as long as 휑0,푚(푡), … , 휑푛−1,푚(푡) are irreducible +over Q(훾). +• If we move the loops from the (푥푎, 1) to (푥푎, 푚), for 푎 = 0, … , 푛−1, then a similar argument shows that +the resulting graph has multiple pretty good state transfer on the set {(푥0, ℎ), (푥1, ℎ), … , (푥푛−1, ℎ)}, for +ℎ = 1, … , 푚. +Acknowledgements +This project was completed under the 2021 Fields Undergraduate Summer Research Program which provided +support for A. Acuaviva, S. Eldridge, M. How and E. Wright. C. Godsil gratefully acknowledges the support +of the Natural Sciences and Engineering Council of Canada (NSERC) Grant No. RGPIN-9439. A. Chan is +grateful for the support of the NSERC Grant No. RGPIN-2021-03609. +References +[1] +S. Cameron, S. Fehrenbach, L. Granger, S. Shrestha, and C. 
Tamon, “Universal state transfer on graphs,” Linear Algebra and Its Applications, vol. 455, pp. 115–142, 2014.
[2] C. Godsil, “Real state transfer,” arXiv:1710.04042.
[3] X. Fan and C. Godsil, “Pretty good state transfer on double stars,” Linear Algebra and Its Applications, vol. 438, pp. 2346–2358, 2013.
[4] M. Kempton, G. Lippner, and S.-T. Yau, “Pretty good quantum state transfer in symmetric spin networks via magnetic field,” Quantum Inf. Process., vol. 16, no. 9, Paper No. 210, 23, 2017.
[5] Z. Zimborás, M. Faccin, Z. Kádár, J. Whitfield, B. Lanyon, and J. Biamonte, “Quantum transport enhancement by time-reversal symmetry breaking,” Scientific Reports, vol. 3, p. 2361, 2013.
[6] A. M. Childs, R. Cleve, E. Deotto, E. Farhi, S. Gutmann, and D. A. Spielman, “Exponential algorithmic speedup by a quantum walk,” Proceedings of the Thirty-Fifth ACM Symposium on Theory of Computing, 2003.
[7] A. Childs, “Universal computation by quantum walk,” Physical Review Letters, vol. 102, p. 180501, 2009.
[8] S. Bose, “Quantum communication through an unmodulated spin chain,” Physical Review Letters, vol. 91, no. 20, 2003.
[9] A. Kay, “Perfect, efficient state transfer and its application as a constructive tool,” International Journal of Quantum Information, vol. 8, no. 4, pp. 641–676, 2011.
[10] E. Connelly, N. Grammel, M. Kraut, L. Serazo, and C. Tamon, “Universality in perfect state transfer,” Linear Algebra and Its Applications, vol. 531, pp. 516–532, 2017.
[11] C. Godsil and S. Lato, “Perfect state transfer on oriented graphs,” Linear Algebra and Its Applications, vol. 604, pp. 278–292, 2020.
[12] C. Godsil and B. McKay, “A new graph product and its spectrum,” Bulletin of the Australian Mathematical Society, vol. 18, Feb. 1978.
[13] C. Godsil, “State transfer on graphs,” Discrete Mathematics, vol. 312, pp. 123–147, 2012.
[14] B. M. Levitan and V. V. Zhikov, Almost Periodic Functions and Differential Equations.
Cambridge University Press, Cambridge-New York, 1982, pp. xi+211.
[15] C. van Bommel, “Quantum walks and pretty good state transfer on paths,” Ph.D. dissertation, University of Waterloo, 2019.
[16] G. Coutinho and C. Godsil, “Graph spectra and continuous quantum walks,” preprint.
[17] C. Godsil, Algebraic Combinatorics. New York: Chapman & Hall, 1993, Chapman and Hall Mathematics Series.
[18] D. S. Dummit and R. M. Foote, Abstract Algebra, Third edition. John Wiley & Sons, Inc., Hoboken, NJ, 2004, pp. xii+932.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfgvxw/content/2301.01473v1.pdf'} +page_content='CO] 4 Jan 2023 State Transfer in Complex Quantum Walks Antonio Acuaviva1,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfgvxw/content/2301.01473v1.pdf'} +page_content=' Ada Chan2,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfgvxw/content/2301.01473v1.pdf'} +page_content=' Summer Eldridge3,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfgvxw/content/2301.01473v1.pdf'} +page_content=' Chris Godsil4,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfgvxw/content/2301.01473v1.pdf'} +page_content=' Matthew How-Chun-Lun5,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfgvxw/content/2301.01473v1.pdf'} +page_content=' Christino Tamon6,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfgvxw/content/2301.01473v1.pdf'} +page_content=' Emily Wright7,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfgvxw/content/2301.01473v1.pdf'} +page_content=' and Xiaohong Zhang8 1Department of Mathematics,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfgvxw/content/2301.01473v1.pdf'} +page_content=' Universidad Complutense de Madrid 2Department of Mathematics and Statistics,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfgvxw/content/2301.01473v1.pdf'} +page_content=' York University 3Department of Mathematics,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfgvxw/content/2301.01473v1.pdf'} +page_content=' University of Toronto 4Department of Combinatorics and Optimization,' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfgvxw/content/2301.01473v1.pdf'} +page_content=' University of Waterloo 5Department of Mathematics,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfgvxw/content/2301.01473v1.pdf'} +page_content=' McMaster University 6Department of Computer Science,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfgvxw/content/2301.01473v1.pdf'} +page_content=' Clarkson University 7Department of Mathematics,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfgvxw/content/2301.01473v1.pdf'} +page_content=' Queen’s University 8Centre de recherches mathématiques,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfgvxw/content/2301.01473v1.pdf'} +page_content=' Université de Montréal January 5,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfgvxw/content/2301.01473v1.pdf'} +page_content=' 2023 Abstract Given a graph with Hermitian adjacency matrix 퐻,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfgvxw/content/2301.01473v1.pdf'} +page_content=' perfectstate transfer occurs fromvertex 푎 to vertex 푏 if the (푏,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfgvxw/content/2301.01473v1.pdf'} +page_content=' 푎)-entryof the unitary matrix exp(−푖퐻푡) has unit magnitudefor some time 푡.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfgvxw/content/2301.01473v1.pdf'} +page_content=' This phenomenon is relevant for information transmission in quantum spin networks and is known to be monogamous under real symmetric matrices.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfgvxw/content/2301.01473v1.pdf'} +page_content=' We prove the following results: For oriented graphs (whose nonzero weights are ±푖), the oriented 3-cycle and the oriented edge are the only graphs where perfect state transfer occurs between every pair of vertices.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfgvxw/content/2301.01473v1.pdf'} +page_content=' This settles a conjecture of Cameron et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfgvxw/content/2301.01473v1.pdf'} +page_content=' [1].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfgvxw/content/2301.01473v1.pdf'} +page_content=' On the other hand, we construct an infinite family of oriented graphs with perfect state transfer between any pair of vertices on a subset of size four.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfgvxw/content/2301.01473v1.pdf'} +page_content=' There are infinite families of Hermitian graphs with one-way perfect state transfer, where perfect state transfer occurs without periodicity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfgvxw/content/2301.01473v1.pdf'} +page_content=' In contrast, perfect state transfer implies periodicity when- ever the adjacency matrix has algebraic entries (see Godsil [2]).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfgvxw/content/2301.01473v1.pdf'} +page_content=' There are infinite families with non-monogamous pretty good state transfer in rooted graph prod- ucts.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfgvxw/content/2301.01473v1.pdf'} +page_content=' In particular, we generalize known results on double stars (due to Fan and Godsil [3]) and on paths with loops (due to Kempton, Lippner and Yau [4]).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfgvxw/content/2301.01473v1.pdf'} +page_content=' The latter extends the experimental observation of quantum transport (made by Zimborás et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfgvxw/content/2301.01473v1.pdf'} +page_content=' [5]) and shows non-monogamouspretty good state transfer can occur amongst distant vertices.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfgvxw/content/2301.01473v1.pdf'} +page_content=' 1 Introduction Given a graph 푋 = (푉 , 퐸) with adjacency matrix 퐴, a continuous-time quantum walk on 푋 is defined by the time-dependent unitary matrix 푈(푡) = 푒−푖퐴푡.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfgvxw/content/2301.01473v1.pdf'} +page_content=' This natural quantum generalization of continuous-time ran- dom walks is important for designing quantum algorithms.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfgvxw/content/2301.01473v1.pdf'} +page_content=' Childs et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfgvxw/content/2301.01473v1.pdf'} +page_content=' [6] showed that a continuous-time quantum walk algorithm provides an exponential time speedup for an explicit search problem on graphs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfgvxw/content/2301.01473v1.pdf'} +page_content=' 1 Subsequently, Childs [7] showed that continuous-time quantum walk is a universal model of quantum com- putation.' 
Our focus in this paper is motivated by Bose [8], who studied quantum communication via continuous-time quantum walks on graphs. We say that there is pretty good state transfer in a graph $X$ from vertex $a$ to vertex $b$ if for any $\epsilon > 0$ there is a time $t$ so that $\|U(t)e_a - \gamma e_b\| \le \epsilon$, where $\gamma$ is a phase factor. Here, $e_a$ denotes the unit vector with 1 at position $a$ and 0 elsewhere; similarly for $e_b$. If $\epsilon = 0$ is achievable, we say there is perfect state transfer in $X$ from $a$ to $b$ at time $t$. Kay [9] proved a monogamy property for perfect state transfer on graphs with real symmetric adjacency matrices: if there is perfect state transfer from $a$ to $b$ and from $a$ to $c$, then $b = c$. In contrast, Cameron et al. [1] showed that there are oriented graphs (whose adjacency matrices are Hermitian with $\pm i$ nonzero entries) where state transfer occurs between every pair of vertices.
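The smallest example of perfect state transfer is the single edge $K_2$. The following NumPy/SciPy check is an editorial illustration of the definitions above (not from the paper); the time $t = \pi/2$ is the standard transfer time for this graph.

```python
import numpy as np
from scipy.linalg import expm

# Adjacency matrix of K2, the graph with a single edge.
A = np.array([[0, 1],
              [1, 0]], dtype=complex)

def U(t, H):
    """Transition matrix U(t) = exp(-iHt) of the continuous-time quantum walk."""
    return expm(-1j * t * H)

# Since A^2 = I, U(t) = cos(t) I - i sin(t) A; at t = pi/2 this equals -iA,
# so U(pi/2) e_0 = -i e_1: perfect state transfer with phase factor -i.
Ut = U(np.pi / 2, A)
assert np.allclose(Ut @ Ut.conj().T, np.eye(2))       # U(t) is unitary
assert np.allclose(Ut @ np.array([1, 0]), [0, -1j])   # e_0 -> -i e_1
```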
This latter property is called universal state transfer. Their primary examples are oriented cycles of prime order with universal pretty good state transfer. A notable exception is the oriented 3-cycle, which exhibits universal perfect state transfer. It was conjectured in [1] that the oriented $K_2$ and the oriented 3-cycle are the only oriented graphs with universal perfect state transfer. We prove their conjecture in this work. This confirms that universal perfect state transfer is an extremely rare phenomenon in oriented graphs. On the other hand, there are known infinite families of graphs with universal perfect state transfer but with adjacency matrices that are Hermitian matrices with no restriction on the entries (see Connelly et al. [10]). We call these Hermitian graphs.
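The universal perfect state transfer of the oriented 3-cycle can be verified numerically. In this editorial sketch (the arc orientation and the transfer times are chosen for illustration), the walk acts as an exact cyclic shift at $t_1 = 2\pi/(3\sqrt{3})$ and as the reverse shift at $2t_1$, so every ordered pair of distinct vertices sees perfect state transfer:

```python
import numpy as np
from scipy.linalg import expm

# Hermitian matrix of the oriented 3-cycle with arcs 0->1, 1->2, 2->0:
# H[a,b] = i for an arc a->b and -i for the reversed arc.
P = np.roll(np.eye(3), 1, axis=1)          # P[a,b] = 1 iff there is an arc a->b
H = 1j * (P - P.T)

U = lambda t: expm(-1j * t * H)

# The eigenvalues are 0 and +/- sqrt(3).  At t1 the walk is a cyclic shift
# (up to phases); at 2*t1 it is the reverse shift.
t1 = 2 * np.pi / (3 * np.sqrt(3))
for (a, b), t in {(0, 1): t1, (1, 2): t1, (2, 0): t1,
                  (1, 0): 2 * t1, (2, 1): 2 * t1, (0, 2): 2 * t1}.items():
    assert abs(abs(U(t)[a, b]) - 1) < 1e-9
```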
Godsil and Lato [11] proved a strong characterization of perfect state transfer in oriented graphs and observed that perfect state transfer always implies periodicity (by the Gelfond–Schneider theorem). In fact, Godsil [2] had observed that the latter property holds for any adjacency matrix with algebraic entries. Our next observation shows that the latter assumption is necessary to guarantee periodicity. We construct the first infinite family of Hermitian graphs with one-way perfect state transfer, where perfect state transfer occurs without periodicity. These examples also exhibit a one-time perfect state transfer property, where perfect state transfer occurs at a single unique time (never to occur again). Godsil and Lato [11] also introduced a relaxation of universal perfect state transfer called multiple perfect state transfer. We say a graph $X$ has multiple state transfer on a subset $S \subset V(X)$ of vertices, with $|S| \ge 3$, if state transfer occurs between every pair of vertices of $S$.
An explicit example of an 8-vertex circulant with multiple perfect state transfer was given in [11], but it was not clear whether there are more examples sharing the same properties. We construct the first infinite family of oriented graphs with multiple perfect state transfer (which contains the aforementioned 8-vertex circulant as a special case). This shows that, unlike universal perfect state transfer, multiple perfect state transfer is not an extremely rare phenomenon. It is known that perfect state transfer is closed under the Cartesian graph product. In this work, under mild assumptions, we show that multiple state transfer is closed under the rooted graph product (see Godsil and McKay [12]). First, we prove a complete characterization of pretty good state transfer on the rooted product of the oriented 3-cycle with stars $K_{1,m}$. This generalizes a result of Fan and Godsil [3] on the double stars.
Next, we consider rooted products with single-looped paths instead of stars. Let $X$ be an $n$-vertex circulant with universal perfect state transfer and let $P_m^{\gamma}$ be an $m$-vertex path with a self-loop of weight $\gamma$ at one of its endpoints. We prove that the rooted product $X \circ P_m^{\gamma}$ has multiple pretty good state transfer between every pair of vertices with self-loops provided $\gamma$ is transcendental. This generalizes a result of Kempton, Lippner and Yau [4] and shows the power of loops to facilitate multiple state transfer among distant vertices. In the special case when $X$ is the oriented 3-cycle, our result strengthens the experimental observations in Zimborás et al. [5] (with the help of self-loops).

2 Preliminary

Given a graph $X$ and an associated Hermitian matrix $H$, the transition matrix of its continuous-time quantum walk is $U(t) = e^{-itH}$.
We call $X$ a Hermitian graph if we do not assume any additional condition on the entries of $H$. For the special case where $X$ is an oriented graph, we use the Hermitian matrix $H$ defined as

$$H_{a,b} = \begin{cases} i & \text{if there is an arc from } a \text{ to } b \text{ in } X, \\ -i & \text{if there is an arc from } b \text{ to } a \text{ in } X, \\ 0 & \text{if there is no arc between } a \text{ and } b \text{ in } X. \end{cases}$$

Let $\theta_1, \ldots, \theta_d$ be the distinct eigenvalues of $H$. For $r = 1, \ldots, d$, let $E_r$ denote the orthogonal projection matrix onto the $\theta_r$-eigenspace of $H$. Then $E_r E_s = \delta_{r,s} E_r$ and $\sum_r E_r = I$. The spectral decomposition $H = \sum_r \theta_r E_r$ gives

$$U(t) = \sum_{r=1}^{d} e^{-it\theta_r} E_r.$$

Given a unit vector $v \in \mathbb{C}^n$, the system with initial state $v$ evolves to $U(t)v = \sum_r e^{-it\theta_r} E_r v$ at time $t$. Therefore a pair $(\theta_r, E_r)$ with $E_r v = 0$ does not influence the state.
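The spectral decomposition above can be reproduced numerically. The following editorial sketch (the path $P_3$ is an arbitrary small example) builds the projections $E_r$ and checks that they resolve the identity and recover $U(t)$:

```python
import numpy as np
from scipy.linalg import expm

# Any Hermitian matrix works here; take the path P3 as a small example.
H = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=complex)

evals, V = np.linalg.eigh(H)
# Group eigenvectors by (numerically) distinct eigenvalue and form the
# orthogonal projection E_r onto each eigenspace.
thetas = sorted(set(np.round(evals, 9)))
E = {th: sum(np.outer(V[:, i], V[:, i].conj())
             for i in range(len(evals)) if abs(evals[i] - th) < 1e-8)
     for th in thetas}

# The projections resolve the identity and reproduce
# U(t) = sum_r e^{-i t theta_r} E_r.
t = 1.7
U_spec = sum(np.exp(-1j * t * th) * E[th] for th in thetas)
assert np.allclose(sum(E.values()), np.eye(3))
assert np.allclose(U_spec, expm(-1j * t * H))
```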
We define the eigenvalue support of the vector $v$ to be $\Phi_v = \{\theta_r : E_r v \ne 0\}$. In the case $v = e_a$ for some vertex $a$, we also call $\Phi_{e_a}$ ($\Phi_a$ for short) the eigenvalue support of $a$. Perfect state transfer from vertex $a$ to vertex $b$ occurs at time $\tau$ if

$$U(\tau) e_a = \alpha e_b, \qquad (1)$$

for some phase factor $\alpha$. If $a = b$ then we say the quantum walk is periodic at $a$. Multiplying $E_r$ to both sides of Equation (1) gives

$$e^{-i\tau\theta_r} E_r e_a = \alpha E_r e_b. \qquad (2)$$

Hence, for $r = 1, \ldots, d$, there exists $q_r(a,b) \in [0, 2\pi)$ such that

$$E_r e_a = e^{i q_r(a,b)} E_r e_b. \qquad (3)$$

We say the vertices $a$ and $b$ are strongly cospectral when this condition is satisfied, and call $q_r(a,b)$ the quarrel from $a$ to $b$ relative to the eigenvalue $\theta_r$. Note that strongly cospectral vertices have the same eigenvalue support.
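For intuition, the end vertices of the path $P_3$ are a standard example of strong cospectrality; this editorial sketch computes each $E_r e_a$ and confirms Equation (3) holds with quarrels $0$ or $\pi$:

```python
import numpy as np

# The end vertices 0 and 2 of the path P3 are strongly cospectral.
H = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
evals, V = np.linalg.eigh(H)
proj = lambda th: sum(np.outer(V[:, i], V[:, i])
                      for i in range(3) if abs(evals[i] - th) < 1e-8)

e0, e2 = np.eye(3)[0], np.eye(3)[2]
for th in (-np.sqrt(2), 0.0, np.sqrt(2)):
    Er = proj(th)
    # Both end vertices lie in the eigenvalue support at every theta_r, and
    # E_r e_0 = +/- E_r e_2, i.e. the quarrel q_r(0, 2) is 0 or pi.
    assert np.linalg.norm(Er @ e0) > 1e-8 and np.linalg.norm(Er @ e2) > 1e-8
    assert np.allclose(Er @ e0, Er @ e2) or np.allclose(Er @ e0, -(Er @ e2))
```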
We study perfect state transfer in oriented graphs and in Hermitian graphs in Sections 3 and 4. We give here a characterization of perfect state transfer in Hermitian graphs.

Theorem 2.1. Perfect state transfer occurs from $a$ to $b$ in a Hermitian graph $X$ if and only if

i. $a$ and $b$ are strongly cospectral vertices with quarrels $q_r(a,b)$, for $\theta_r \in \Phi_a$, and

ii. for $\theta_r, \theta_s, \theta_h, \theta_{\ell} \in \Phi_a$ such that $h \ne \ell$, there exist integers $m_{r,s}$ and $m_{h,\ell}$ satisfying

$$\frac{\theta_r - \theta_s}{\theta_h - \theta_{\ell}} = \frac{q_r(a,b) - q_s(a,b) + 2 m_{r,s}\pi}{q_h(a,b) - q_{\ell}(a,b) + 2 m_{h,\ell}\pi}.$$

Proof. From Equation (3), we see that perfect state transfer from $a$ to $b$ implies they are strongly cospectral.
Suppose $a$ and $b$ are strongly cospectral with quarrels $q_r(a,b)$, for $\theta_r \in \Phi_a (= \Phi_b)$. Then Equation (1) holds if and only if for $\theta_r, \theta_s \in \Phi_a$,

$$\alpha = e^{i(q_r(a,b) - \tau\theta_r)} = e^{i(q_s(a,b) - \tau\theta_s)}. \qquad (4)$$

This is equivalent to $e^{i\tau(\theta_r - \theta_s)} = e^{i(q_r(a,b) - q_s(a,b))}$ and

$$\tau(\theta_r - \theta_s) = q_r(a,b) - q_s(a,b) + 2 m_{r,s}\pi,$$

for some integer $m_{r,s}$. Condition (ii) follows immediately.

We say the ratio condition on $\Phi_a$ holds if

$$\frac{\theta_r - \theta_s}{\theta_h - \theta_{\ell}} \in \mathbb{Q} \qquad (5)$$

for $\theta_r, \theta_s, \theta_h, \theta_{\ell} \in \Phi_a$ such that $h \ne \ell$.

Theorem 2.2. In a Hermitian graph $X$, $a$ is periodic if and only if the ratio condition on $\Phi_a$ holds.

Proof. Note that $q_r(a,a) = 0$ for $\theta_r \in \Phi_a$.
The result follows immediately from Theorem 2.1.

In Section 5, we consider a relaxation of perfect state transfer. A graph has pretty good state transfer from $a$ to $b$ if, for any $\varepsilon > 0$, there is a time $\tau$ satisfying

$$|U(\tau)_{a,b}| \ge 1 - \varepsilon. \qquad (6)$$

Using the proof of Lemma 13.1 in [13], we conclude that if there is pretty good state transfer from $a$ to $b$ then $a$ and $b$ are strongly cospectral. From

$$U(t)_{a,b} = \sum_{r=1}^{d} e^{-it\theta_r} e_a^T E_r e_b = \sum_{r=1}^{d} e^{i(q_r(a,b) - t\theta_r)} (E_r)_{b,b},$$

we see that there is pretty good state transfer from $a$ to $b$ if and only if for any $\epsilon > 0$ there exist $\tau > 0$ and $\delta_{\epsilon} \in \mathbb{R}$ such that $|\tau\theta_r - q_r(a,b) - \delta_{\epsilon}| < \epsilon \pmod{2\pi}$, for $r \in \Phi_a$.

Theorem 2.3 (Kronecker [14]). Let $\theta_1, \ldots, \theta_d$ and $q_1, \ldots, q_d$ be arbitrary real numbers. For any $\epsilon > 0$, the system of inequalities $|\theta_r \tau - q_r| < \epsilon \pmod{2\pi}$, $r = 1, \ldots, d$, admits a solution for $\tau$ if and only if, for every set of integers $l_1, \ldots, l_d$, $l_1\theta_1 + \cdots + l_d\theta_d = 0$ implies $l_1 q_1 + \cdots + l_d q_d = 0 \pmod{2\pi}$.

Theorem 2.4. Let $X$ be a Hermitian graph with eigenvalues $\theta_1, \ldots, \theta_d \in \Phi_a$. Then $X$ has pretty good state transfer from $a$ to $b$ if and only if the following conditions hold.

i. The vertices $a$ and $b$ are strongly cospectral with quarrels $q_r(a,b)$, for $r = 1, \ldots, d$.

ii. There exists $\delta \in \mathbb{R}$ such that, for all integers $l_1, \ldots, l_d$ satisfying $\sum_{r=1}^{d} l_r \theta_r = 0$, we have

$$\sum_{r=1}^{d} l_r \left( q_r(a,b) + \delta \right) = 0 \pmod{2\pi}. \qquad (7)$$

Proof. The result follows from Proposition 4.01 of [15] and Theorem 2.3.

Let $S$ be a set of vertices in $X$; we say multiple pretty good state transfer occurs on $S$ if there is pretty good state transfer between any two vertices in $S$. Section 5 gives two families of Hermitian graphs that have multiple pretty good state transfer.

3 Perfect state transfer in oriented graphs

For graphs with a real symmetric adjacency matrix, Kay shows that perfect state transfer cannot happen from one vertex to two distinct vertices [9]. This monogamous behaviour does not hold in Hermitian graphs with non-real entries.
A graph has multiple perfect state transfer on a set $S$ of at least three vertices if there is perfect state transfer between any two vertices in $S$. When $S = V(X)$, we say $X$ has universal perfect state transfer. Lemma 22 of [10] gives a construction of Hermitian circulants that admit universal perfect state transfer. The oriented 3-cycle is a special case of this construction. In the same paper, Cameron et al. conjecture that the oriented $K_2$ and the oriented $K_3$ are the only oriented graphs that can have universal perfect state transfer. We confirm this conjecture in Section 3.1. In [11], Godsil and Lato investigated multiple perfect state transfer in oriented graphs where $S$ is a proper subset of $V(X)$.
They give an example of an oriented graph on eight vertices that admits multiple perfect state transfer on a set of four vertices. In Section 3.2, we extend their example to an infinite family of oriented graphs that have multiple perfect state transfer.

3.1 Universal perfect state transfer

In [1], Cameron et al. show that the oriented $K_2$ and $K_3$ with any orientation admit universal perfect state transfer. They give the following necessary conditions on the Hermitian graphs admitting universal perfect state transfer.

Theorem 3.1. Let $H$ be the matrix associated with a Hermitian graph $X$ that admits universal perfect state transfer.
Then the following holds:

1. All eigenvalues of $H$ are simple.

2. If $P$ is a unitary matrix diagonalizing $H$ then $|P_{a,b}| = \frac{1}{\sqrt{n}}$, for $a, b \in V(X)$.

3. Every vertex in $X$ is periodic.

Suppose $X$ is an oriented graph on $n$ vertices that has universal perfect state transfer. Let $H$ be its associated Hermitian matrix with spectral decomposition $H = \sum_{r=1}^{n} \theta_r E_r$. Then each $E_r$ has rank one with constant diagonal entries $n^{-1}$. We see that $H^2$ has constant diagonal entries and the underlying (undirected) graph of $X$ is regular.
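These necessary conditions can be spot-checked on the oriented 3-cycle, the known example with universal perfect state transfer (an editorial numerical sketch, not part of the proof):

```python
import numpy as np

# Oriented 3-cycle with arcs 0->1, 1->2, 2->0.
P = np.roll(np.eye(3), 1, axis=1)    # P[a,b] = 1 iff there is an arc a->b
H = 1j * (P - P.T)

evals, V = np.linalg.eigh(H)
n = 3
# (1) All eigenvalues are simple (here 0 and +/- sqrt(3)).
assert min(np.diff(np.sort(evals))) > 1e-8
# (2) The diagonalizing unitary is flat: every entry has modulus 1/sqrt(n).
assert np.allclose(np.abs(V), 1 / np.sqrt(n))
# Each projection E_r = v_r v_r^* is rank one with constant diagonal 1/n,
# so H^2 has constant diagonal (and the underlying graph is regular).
for i in range(n):
    Er = np.outer(V[:, i], V[:, i].conj())
    assert np.allclose(np.diag(Er).real, 1 / n)
d = np.diag(H @ H).real
assert np.allclose(d, d[0])
```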
Further, it follows from Theorem 6.1 of [11] that there exists a positive square-free integer $\Delta$ such that $\theta_r \in \mathbb{Z}\sqrt{\Delta}$, for $r = 1, \ldots, n$. Hence

$$\min_{r \ne s} |\theta_r - \theta_s| \ge \sqrt{\Delta}. \qquad (8)$$

We show in the following lemmas that an oriented graph with universal perfect state transfer can have at most eleven vertices.

Lemma 3.2. Let $H$ be a Hermitian matrix of order $n$ with zero diagonal entries. Let $\theta_1 \le \theta_2 \le \cdots \le \theta_n$ be the eigenvalues of $H$. Then

$$\sum_{r,s=1}^{n} \left( \theta_r - \theta_s \right)^2 = 2n \operatorname{Tr}(H^2).$$

Proof. Observe that $\theta_r - \theta_s$ is an eigenvalue of $H \otimes I_n - I_n \otimes H$, for $r, s = 1, \ldots, n$. Hence

$$\sum_{r,s=1}^{n} \left( \theta_r - \theta_s \right)^2 = \operatorname{Tr}\left( H \otimes I_n - I_n \otimes H \right)^2 = \operatorname{Tr}\left( H^2 \otimes I_n + I_n \otimes H^2 - 2H \otimes H \right).$$

The result follows from $\operatorname{Tr}(H \otimes H) = 0$.

Lemma 3.3. Let $X$ be an oriented graph on $n$ vertices and $m$ edges with eigenvalues $\theta_1 < \cdots < \theta_n$. Let $\sigma = \min_{r \ne s} |\theta_r - \theta_s|$. Then

$$\sigma^2 \, \frac{n(n^2 - 1)}{24} \le m \quad \text{and} \quad \sigma^2 \le \frac{12}{n + 1}.$$

Proof. It follows from the definition of $\sigma$ that $\sigma |r - s| \le |\theta_r - \theta_s|$, and

$$\sigma^2 \sum_{r,s=1}^{n} (r - s)^2 \le \sum_{r,s=1}^{n} \left( \theta_r - \theta_s \right)^2.$$
The lower bound is

$$\sigma^2 \sum_{r,s=1}^{n} (r - s)^2 = \sigma^2 \left( 2n \sum_{r=1}^{n} r^2 - 2 \left( \sum_{r=1}^{n} r \right)^2 \right) = \frac{\sigma^2 n^2 (n^2 - 1)}{6}.$$

Applying Lemma 3.2 gives

$$\frac{\sigma^2 n^2 (n^2 - 1)}{6} \le 2n \operatorname{Tr}(H^2) = 4mn.$$

The second inequality in the lemma follows immediately from $m \le \binom{n}{2}$.

Corollary 3.4. Let $X$ be an oriented graph on $n$ vertices. If $X$ admits universal perfect state transfer then $n \le 11$. Further, if $n \ge 6$ then $X$ has integral eigenvalues.

Proof. It follows from Equation (8) that $\sigma^2 \ge \Delta \ge 1$.
The second inequality of Lemma 3.3 gives $n \le 11$. When $n \ge 6$, we have $\sigma^2 < 2$, which implies $\Delta = 1$ and hence that the eigenvalues of $X$ are integers. □

We are ready to rule out universal perfect state transfer in oriented graphs on more than three vertices.

Theorem 3.5. The oriented $K_2$ and $K_3$ are the only oriented graphs admitting universal perfect state transfer.

Proof. Suppose $X$ is an oriented graph on $n$ vertices that admits universal perfect state transfer. Then the underlying graph of $X$ is $k$-regular, for some integer $k$.
Let $\theta_1 < \cdots < \theta_n$ be the eigenvalues of the Hermitian matrix $H$ associated with $X$. Then $\theta_r \in \mathbb{Z}\sqrt{\Delta}$, for some positive square-free integer $\Delta$. Since $\mathrm{i}H$ is a skew-symmetric matrix with entries $\pm 1$, we have
$$\theta_r = -\theta_{n+1-r} \quad \text{for } r = 1, \dots, n. \qquad (9)$$
Further, the characteristic polynomial of $\mathrm{i}H$ is equal to the characteristic polynomial of its underlying graph over $\mathbb{Z}_2$. When $n = 4$ or $5$, $C_n$ and $K_n$ are the only regular graphs on $n$ vertices. An exhaustive search rules out oriented graphs on 4 or 5 vertices with spectrum satisfying the above conditions. For $n \ge 6$, it follows from Lemma 3.3 and Corollary 3.4 that $\sigma = \min_{r \neq s} |\theta_r - \theta_s| = 1$ and
$$\frac{n^2 - 1}{12} \le k \le n - 1.$$
Using this inequality together with the fact that $k$ is even when $n$ is odd, we narrow down to the following possibilities.

$n$:  6        | 7    | 8    | 9 | 10 | 11
$k$:  3, 4, 5  | 4, 6 | 6, 7 | 8 | 9  | 10

Applying Equation (9) to $\operatorname{Tr}(H^2)$ yields
$$nk = 2 \sum_{r=1}^{\lfloor (n+1)/2 \rfloor} \theta_r^2.$$
Direct computation returns integral solutions to this equation for only three cases:

$n$  | $k$  | underlying graph | possible spectrum of $\mathrm{i}H$
11   | 10   | $K_{11}$         | $0, \pm\mathrm{i}, \pm 2\mathrm{i}, \pm 3\mathrm{i}, \pm 4\mathrm{i}, \pm 5\mathrm{i}$
7    | 6    | $K_7$            | $0, \pm\mathrm{i}, \pm 2\mathrm{i}, \pm 4\mathrm{i}$
7    | 4    | $C_7$            | $0, \pm\mathrm{i}, \pm 2\mathrm{i}, \pm 3\mathrm{i}$

It is straightforward to check that, for each case, the characteristic polynomial of the underlying graph is not equal over $\mathbb{Z}_2$ to the polynomial with the roots listed in the table. We conclude that there is no oriented graph on $n \ge 4$ vertices admitting universal perfect state transfer. □

3.2 Multiple perfect state transfer

In [11], Godsil and Lato relax the notion of universal perfect state transfer to multiple perfect state transfer on a subset of vertices in oriented graphs. Let
$$H_{\vec{C}_4} = \begin{bmatrix} 0 & -\mathrm{i} & 0 & \mathrm{i} \\ \mathrm{i} & 0 & -\mathrm{i} & 0 \\ 0 & \mathrm{i} & 0 & -\mathrm{i} \\ -\mathrm{i} & 0 & \mathrm{i} & 0 \end{bmatrix}$$
be the Hermitian matrix of the directed 4-cycle.
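As a quick numerical sanity check (a sketch, not part of the paper), the matrix $H_{\vec{C}_4}$ above is Hermitian with spectrum $\{-2, 0, 0, 2\}$:

```python
import numpy as np

# Hermitian matrix of the directed 4-cycle, as displayed above.
H_C4 = np.array([[0, -1j, 0, 1j],
                 [1j, 0, -1j, 0],
                 [0, 1j, 0, -1j],
                 [-1j, 0, 1j, 0]])

assert np.allclose(H_C4, H_C4.conj().T)     # Hermitian
spectrum = np.linalg.eigvalsh(H_C4)         # ascending real eigenvalues
print(np.round(spectrum, 6))                # -2, 0, 0, 2
```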
Godsil and Lato show that the oriented graph with Hermitian matrix
$$\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \otimes H_{\vec{C}_4} + \begin{bmatrix} 0 & \mathrm{i} \\ -\mathrm{i} & 0 \end{bmatrix} \otimes J_4$$
has multiple perfect state transfer on a set of four vertices. Making use of the following technical lemma from [16], we extend this example to an infinite family of oriented graphs in which multiple perfect state transfer occurs.

Lemma 3.6. Let $A$ and $B$ be Hermitian matrices, where $A$ has spectral decomposition $A = \sum_r \theta_r E_r$. Then
$$e^{-\mathrm{i}t(A \otimes B)} = \sum_r E_r \otimes e^{-\mathrm{i}t\theta_r B}.$$

Lemma 3.7. Suppose $X$ is an oriented graph on $n$ vertices with associated Hermitian matrix $H_X$, whose eigenvalues are all odd integers. Let $Y$ be the oriented graph with Hermitian matrix
$$H_Y = I_n \otimes H_{\vec{C}_4} + H_X \otimes J_4.$$
Then $Y$ admits multiple perfect state transfer on the set $\{4h+1, 4h+2, 4h+3, 4h+4\}$, for $h = 0, 1, \dots, n-1$.

Proof. Let $H_X = \sum_r \theta_r E_r$ be the spectral decomposition of $H_X$. Since $I_n \otimes H_{\vec{C}_4}$ and $H_X \otimes J_4$ commute, applying Lemma 3.6 gives
$$e^{-\mathrm{i}tH_Y} = \left( I_n \otimes e^{-\mathrm{i}tH_{\vec{C}_4}} \right) \left( \sum_r E_r \otimes e^{-\mathrm{i}t\theta_r J_4} \right) = \sum_r E_r \otimes e^{-\mathrm{i}t\left( H_{\vec{C}_4} + \theta_r J_4 \right)}.$$
For an odd integer $\theta_r$, we have
$$e^{-\mathrm{i}\frac{\pi}{4}\left( H_{\vec{C}_4} + \theta_r J_4 \right)} = \begin{bmatrix} 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \\ -1 & 0 & 0 & 0 \end{bmatrix}.$$
Hence
$$e^{-\mathrm{i}\frac{\pi}{4}H_Y} = I_n \otimes \begin{bmatrix} 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \\ -1 & 0 & 0 & 0 \end{bmatrix},$$
and, for $h = 0, 1, \dots, n-1$, the vertex $4h+1$ has perfect state transfer to $4h+4$, $4h+3$ and $4h+2$ at times $\pi/4$, $\pi/2$ and $3\pi/4$, respectively. □

If $X$ is obtained by orienting all edges in the $(2m+1)$-cube from one bipartition to the other bipartition, then its associated matrix has the form
$$H_X = \begin{bmatrix} 0 & \mathrm{i}B \\ -\mathrm{i}B^T & 0 \end{bmatrix}.$$
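The key identity in the proof of Lemma 3.7 can be checked numerically. The sketch below is illustrative: it takes $H_X$ to be an oriented $K_2$ (an assumption for the example; its eigenvalues $\pm 1$ are odd integers, as the lemma requires) and verifies that $e^{-\mathrm{i}\frac{\pi}{4}H_Y} = I_n \otimes M$ with $M$ the signed cyclic shift above.

```python
import numpy as np

# Hermitian matrix of the directed 4-cycle.
H_C4 = np.array([[0, -1j, 0, 1j],
                 [1j, 0, -1j, 0],
                 [0, 1j, 0, -1j],
                 [-1j, 0, 1j, 0]])
# Illustrative H_X: an oriented K2, with odd eigenvalues {1, -1}.
H_X = np.array([[0, 1j], [-1j, 0]])
n = H_X.shape[0]
J4 = np.ones((4, 4))

H_Y = np.kron(np.eye(n), H_C4) + np.kron(H_X, J4)

# exp(-i*pi/4 * H_Y) via the eigendecomposition of the Hermitian H_Y.
w, V = np.linalg.eigh(H_Y)
U = V @ np.diag(np.exp(-1j * np.pi / 4 * w)) @ V.conj().T

# Lemma 3.7 predicts U = I_n ⊗ M, with M the signed cyclic shift.
M = np.array([[0, -1, 0, 0],
              [0, 0, -1, 0],
              [0, 0, 0, -1],
              [-1, 0, 0, 0]])
assert np.allclose(U, np.kron(np.eye(n), M))
```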
This matrix $H_X$ has the same spectrum as the adjacency matrix of the (undirected) $(2m+1)$-cube, which consists of only odd integers. Lemma 3.7 thus gives an oriented graph admitting multiple perfect state transfer for every integer $m \ge 0$. When $m = 0$, $Y$ is the oriented graph given in [11].

4 Perfect state transfer in Hermitian graphs

We focus on Hermitian graphs with algebraic entries in the first part of this section. In particular, we study the phase factors when perfect state transfer occurs in these graphs in Section 4.1. Suppose $X$ is a Hermitian graph with algebraic entries. By Theorem 6.1 of [2] and Theorem 2.2, if perfect state transfer occurs from $a$ to $b$, then the quantum walk on $X$ is periodic at both $a$ and $b$. Section 4.2 gives examples of Hermitian graphs (with transcendental entries) in which perfect state transfer occurs from $a$ to $b$ but $a$ and $b$ are not periodic.

4.1 Phase factor

We restrict our attention to Hermitian graphs with algebraic entries and extract information about the phase factor when perfect state transfer occurs. Let $H$ be an algebraic Hermitian matrix. Its characteristic polynomial has algebraic coefficients. Given the spectral decomposition $H = \sum_r \theta_r E_r$, the eigenvalues $\theta_r$ are algebraic, and so are the entries of each $E_r$.

Theorem 4.1.
Let $H$ be an algebraic matrix associated with a Hermitian graph, with spectral decomposition $H = \sum_r \theta_r E_r$. If perfect state transfer occurs from $a$ to $b$ with phase factor $\alpha$, then $\alpha$ is algebraic if and only if $\theta_r / \theta_s \in \mathbb{Q}$, for all $\theta_r, \theta_s \in \Phi_a$ such that $\theta_s \neq 0$.

Proof. Suppose perfect state transfer occurs from $a$ to $b$ at time $\tau$ with algebraic phase factor $\alpha$. It follows from Equation (2) that $e^{-\mathrm{i}\tau\theta_r}$ is algebraic, for $\theta_r \in \Phi_a = \Phi_b$. Applying the Gelfond–Schneider theorem to
$$\left( e^{-\mathrm{i}\tau\theta_s} \right)^{\theta_r/\theta_s} = e^{-\mathrm{i}\tau\theta_r},$$
for $\theta_r, \theta_s \in \Phi_a$ with $\theta_s \neq 0$, we conclude that $\theta_r/\theta_s$ is rational.

Now suppose $\theta_r/\theta_s \in \mathbb{Q}$ for all $\theta_r, \theta_s \in \Phi_a$ with $\theta_s \neq 0$. Let $q_r(a,b)$ be the quarrel from $a$ to $b$ relative to $\theta_r \in \Phi_a$. It follows from Equation (3) that $e^{\mathrm{i}q_r(a,b)}$ is algebraic.
Applying Equation (4) yields
$$\alpha^{\frac{\theta_r}{\theta_s} - 1} = \left( e^{\mathrm{i}(q_s(a,b) - \tau\theta_s)} \right)^{\theta_r/\theta_s} e^{\mathrm{i}(\tau\theta_r - q_r(a,b))} = \left( e^{\mathrm{i}q_s(a,b)} \right)^{\theta_r/\theta_s} e^{-\mathrm{i}q_r(a,b)}.$$
The right-hand side is algebraic, and so is $\alpha$. □

Theorem 4.2. Let $H$ be an algebraic matrix associated with a Hermitian graph, with spectral decomposition $H = \sum_r \theta_r E_r$. Suppose perfect state transfer occurs from $a$ to $b$ with phase factor $\alpha$. If there exist integers $k_r$ satisfying $\sum_{r \in \Phi_a} k_r \theta_r = 0$ and $\sum_{r \in \Phi_a} k_r \neq 0$, then $\alpha$ is algebraic.

Proof. From Equation (4), we have
$$\alpha^{\sum_{r \in \Phi_a} k_r} = e^{-\mathrm{i}\tau \left( \sum_{r \in \Phi_a} k_r \theta_r \right)} \prod_{r \in \Phi_a} \left( e^{\mathrm{i}q_r(a,b)} \right)^{k_r} = \prod_{r \in \Phi_a} \left( e^{\mathrm{i}q_r(a,b)} \right)^{k_r}.$$
Since the right-hand side is algebraic and $\sum_{r \in \Phi_a} k_r \neq 0$, we conclude that $\alpha$ is algebraic. □
We apply the theorem to algebraic Hermitian graphs in which $\Phi_a$ contains all eigenvalues of $H$.

Corollary 4.3. Let $H$ be an algebraic matrix associated with a Hermitian graph with zero diagonal entries. Suppose perfect state transfer occurs from $a$ to $b$ with phase factor $\alpha$. If $a$ has full eigenvalue support, then $\alpha$ is algebraic.

Proof. Let $k_r$ be the multiplicity of $\theta_r$, for $\theta_r \in \Phi_a$. Since $\Phi_a$ contains all eigenvalues of $H$, we have $\sum_{r \in \Phi_a} k_r \theta_r = \operatorname{Tr}(H) = 0$, while $\sum_{r \in \Phi_a} k_r$ equals the number of vertices. It follows from Theorem 4.2 that the phase factor at perfect state transfer is algebraic. □

Given the spectral decomposition of an algebraic Hermitian matrix $H = \sum_r \theta_r E_r$, if each $E_r$ has constant diagonal then every vertex has full eigenvalue support. In particular, Corollary 4.3 applies to the adjacency matrix of a walk-regular graph, to an algebraic Hermitian matrix with zero diagonal that belongs to a Bose–Mesner algebra, and to Hermitian circulants with algebraic entries and zero diagonal.

4.2 One-way perfect state transfer

We saw at the beginning of Section 4 that if perfect state transfer occurs from $a$ to $b$ in an algebraic Hermitian graph, then both $a$ and $b$ are periodic. In particular, there is perfect state transfer from $b$ back to $a$. We give a family of Hermitian graphs, with transcendental entries, that have perfect state transfer from $a$ to $b$ but are periodic at neither $a$ nor $b$.
In particular, they do not have perfect state transfer from $b$ to $a$.

Theorem 4.4. There exist infinitely many Hermitian graphs which admit perfect state transfer from $a$ to $b$ but are not periodic at $a$.

Proof. Let $\lambda$ be any real number such that $\lambda \notin \mathbb{Q}\pi$. Define the matrices
$$P = \frac{1}{2} \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & e^{\mathrm{i}\lambda} & -e^{\mathrm{i}\lambda} \\ 1 & -1 & -e^{\mathrm{i}\lambda} & e^{\mathrm{i}\lambda} \end{bmatrix} \quad \text{and} \quad D = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & \pi & 0 & 0 \\ 0 & 0 & \lambda & 0 \\ 0 & 0 & 0 & \lambda + \pi \end{bmatrix}.$$
Consider the Hermitian matrix
$$H := PDP^{-1} = \left( \frac{\pi + \lambda}{2} \right) I_4 - \begin{bmatrix} 0 & \frac{\lambda}{2} & \frac{\pi}{4}(1 + e^{-\mathrm{i}\lambda}) & \frac{\pi}{4}(1 - e^{-\mathrm{i}\lambda}) \\ \frac{\lambda}{2} & 0 & \frac{\pi}{4}(1 - e^{-\mathrm{i}\lambda}) & \frac{\pi}{4}(1 + e^{-\mathrm{i}\lambda}) \\ \frac{\pi}{4}(1 + e^{\mathrm{i}\lambda}) & \frac{\pi}{4}(1 - e^{\mathrm{i}\lambda}) & 0 & \frac{\lambda}{2} \\ \frac{\pi}{4}(1 - e^{\mathrm{i}\lambda}) & \frac{\pi}{4}(1 + e^{\mathrm{i}\lambda}) & \frac{\lambda}{2} & 0 \end{bmatrix}.$$
Let $\theta_1 = 0$, $\theta_2 = \pi$, $\theta_3 = \lambda$ and $\theta_4 = \lambda + \pi$.
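The one-way behaviour of this construction can be observed numerically. The sketch below is illustrative (the choice $\lambda = 1$, which is not a rational multiple of $\pi$, is an assumption): it builds $H = PDP^{-1}$, confirms $P$ is unitary and $H$ Hermitian, and shows that at time 1 the walk transfers perfectly between vertices 1 and 3 in one direction only.

```python
import numpy as np

lam = 1.0  # illustrative lambda, not a rational multiple of pi
P = 0.5 * np.array([
    [1, 1, 1, 1],
    [1, 1, -1, -1],
    [1, -1, np.exp(1j * lam), -np.exp(1j * lam)],
    [1, -1, -np.exp(1j * lam), np.exp(1j * lam)],
])
d = np.array([0, np.pi, lam, lam + np.pi])

assert np.allclose(P @ P.conj().T, np.eye(4))   # P is unitary
H = P @ np.diag(d) @ P.conj().T                 # Hermitian by construction

U1 = P @ np.diag(np.exp(-1j * d)) @ P.conj().T  # transition matrix U(1)

# Perfect state transfer between vertices 1 and 3 at time 1 in one direction,
# while the amplitude in the opposite orientation stays strictly below 1.
print(abs(U1[2, 0]), abs(U1[0, 2]))
```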
All vertices have full eigenvalue support. Vertices 1 and 3 are strongly cospectral, with quarrels
$$q_1(3,1) = 0, \quad q_2(3,1) = \pi, \quad q_3(3,1) = \lambda, \quad q_4(3,1) = \lambda + \pi.$$
By Theorem 2.1, we have perfect state transfer from vertex 3 to vertex 1 at time $\tau = 1$ with phase factor 1. As $\lambda$ is not a rational multiple of $\pi$, we have
$$\frac{\theta_3 - \theta_1}{\theta_2 - \theta_1} = \frac{\lambda}{\pi} \notin \mathbb{Q}.$$
By Theorem 2.2, $H$ is periodic at neither vertex 1 nor vertex 3. □

Example 4.5.
Consider the complex Hadamard matrix
$$P = \begin{bmatrix}
1 & 1 & 1 & 1 & \mathrm{i} & \mathrm{i} & \mathrm{i} & \mathrm{i} \\
1 & -1 & e^{\mathrm{i}\theta} & -e^{\mathrm{i}\theta} & -1 & 1 & -e^{\mathrm{i}\theta} & e^{\mathrm{i}\theta} \\
1 & 1 & e^{2\mathrm{i}\theta} & e^{2\mathrm{i}\theta} & -\mathrm{i} & -\mathrm{i} & -\mathrm{i}e^{2\mathrm{i}\theta} & -\mathrm{i}e^{2\mathrm{i}\theta} \\
1 & -1 & e^{3\mathrm{i}\theta} & -e^{3\mathrm{i}\theta} & 1 & -1 & e^{3\mathrm{i}\theta} & -e^{3\mathrm{i}\theta} \\
\mathrm{i} & \mathrm{i} & -\mathrm{i} & -\mathrm{i} & -1 & -1 & 1 & 1 \\
-\mathrm{i} & \mathrm{i} & \mathrm{i}e^{\mathrm{i}\theta} & -\mathrm{i}e^{\mathrm{i}\theta} & \mathrm{i} & -\mathrm{i} & -\mathrm{i}e^{\mathrm{i}\theta} & \mathrm{i}e^{\mathrm{i}\theta} \\
\mathrm{i} & \mathrm{i} & -\mathrm{i}e^{2\mathrm{i}\theta} & -\mathrm{i}e^{2\mathrm{i}\theta} & 1 & 1 & -e^{2\mathrm{i}\theta} & -e^{2\mathrm{i}\theta} \\
-\mathrm{i} & \mathrm{i} & \mathrm{i}e^{3\mathrm{i}\theta} & -\mathrm{i}e^{3\mathrm{i}\theta} & -\mathrm{i} & \mathrm{i} & \mathrm{i}e^{3\mathrm{i}\theta} & -\mathrm{i}e^{3\mathrm{i}\theta}
\end{bmatrix}$$
and the diagonal matrix
$$D = \operatorname{diag}\left( 0, \pi, \theta, \theta + \pi, \frac{\pi}{2}, \frac{3\pi}{2}, \theta + \frac{\pi}{2}, \theta + \frac{3\pi}{2} \right).$$
Then the Hermitian graph $X$ with matrix $H = PDP^{-1}$ admits perfect state transfer from vertex 1 to 2 at $t = 1$, from vertex 1 to 3 at $t = 2$, and from vertex 1 to 4 at $t = 3$. Each vertex has full eigenvalue support, and if $\theta \notin \mathbb{Q}\pi$, then the ratio condition is not satisfied and $X$ is not periodic at any vertex.

5 Multiple pretty good state transfer

Theorem 4.4 shows that it is possible to have one-way perfect state transfer in Hermitian graphs. We now show that pretty good state transfer in Hermitian graphs goes both ways.

Lemma 5.1.
If a Hermitian graph admits pretty good state transfer from $a$ to $b$, then it admits pretty good state transfer from $b$ to $a$.

Proof. Suppose $U(t)$ is the transition matrix of a Hermitian graph that has pretty good state transfer from $a$ to $b$. Then, for every $\varepsilon > 0$, there exists a time $\tau_1$ such that
$$U(\tau_1)e_a = \gamma_1 e_b + \rho_1,$$
for some phase factor $\gamma_1$ and some vector $\rho_1$ with $\|\rho_1\| < \varepsilon/2$. As $U(t)$ is almost periodic, there exists $\tau_2 > \tau_1$ such that
$$U(\tau_2)e_a = \gamma_2 e_a + \rho_2,$$
for some phase factor $\gamma_2$ and some vector $\rho_2$ with $\|\rho_2\| < \varepsilon/2$. Since $e_b = \overline{\gamma_1}\left( U(\tau_1)e_a - \rho_1 \right)$, we have
$$U(\tau_2 - \tau_1)e_b = \overline{\gamma_1}\, U(\tau_2)\left( e_a - U(-\tau_1)\rho_1 \right) = \overline{\gamma_1}\left( \gamma_2 e_a + \rho_2 - U(\tau_2 - \tau_1)\rho_1 \right).$$
Hence
$$\left\| U(\tau_2 - \tau_1)e_b - \overline{\gamma_1}\gamma_2 e_a \right\| = \left\| \rho_2 - U(\tau_2 - \tau_1)\rho_1 \right\| \le \|\rho_1\| + \|\rho_2\| < \varepsilon,$$
and there is pretty good state transfer from $b$ to $a$. □

In [5], Zimborás et al. assign a complex weight $e^{\mathrm{i}\beta}$ to an edge in the following graph and use the weight to control the fidelity at $b$ and $c$ with initial state $e_a$.

[Figure: a graph with one edge weighted $e^{\mathrm{i}\beta}$ and vertices labelled $a$, $b$ and $c$.]

This graph can be viewed as the rooted product of the weighted $K_3$ with a path. Given a graph $X$ on $n$ vertices and a rooted graph $Y$ with root $a$, the rooted product of $X$ and $Y$, denoted $X \circ Y$, is obtained by taking $n$ isomorphic copies of $Y$ and identifying the $j$-th vertex of $X$ with the root of the $j$-th copy of $Y$. In this section, we give two families of rooted products that have multiple pretty good state transfer.

5.1 Oriented 3-cycle rooted with a star

In [3], Fan and Godsil show that the double star, the rooted product of $K_2$ and $K_{1,m}$, has pretty good state transfer between the two non-pendant vertices if and only if $4m+1$ is not a perfect square. Note that $K_2$ is the only simple undirected graph with universal perfect state transfer.
We extend their result to the rooted product of the oriented 3-cycle $\vec{K}_3$ with $\hat{K}_{1,m}$, where $\hat{K}_{1,m}$ denotes the star $K_{1,m}$ with the non-pendant vertex as its root.

[Figure: the rooted product $\vec{K}_3 \circ \hat{K}_{1,m}$, with the three non-pendant vertices labelled $a$, $b$ and $c$.]

Lemma 5.2. Suppose $a$ and $b$ are strongly cospectral vertices in a Hermitian graph $X$ on $n \ge 2$ vertices. Then they are strongly cospectral in the rooted product $X \circ \hat{K}_{1,m}$.

Proof. Let $H_X$ be the Hermitian matrix associated with $X$, with spectral decomposition $H_X = \sum_{r=1}^{d} \theta_r E_r$. Then the matrix associated with the rooted product $Y = X \circ \hat{K}_{1,m}$ is
$$H_Y = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 \end{bmatrix} \otimes H_X + \begin{bmatrix} 0 & 1 & \cdots & 1 \\ 1 & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 1 & 0 & \cdots & 0 \end{bmatrix} \otimes I_n,$$
where both block matrices are of order $m+1$.
For $r = 1, \ldots, d$, define
\[
\lambda_r^{\pm} = \frac{\theta_r \pm \sqrt{\theta_r^2 + 4m}}{2},
\]
and
\[
F_r^{\pm} = \frac{1}{(\lambda_r^{\pm})^2 + m}
\begin{bmatrix}
(\lambda_r^{\pm})^2 & \lambda_r^{\pm} & \lambda_r^{\pm} & \cdots & \lambda_r^{\pm} \\
\lambda_r^{\pm} & 1 & 1 & \cdots & 1 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\lambda_r^{\pm} & 1 & 1 & \cdots & 1 \\
\lambda_r^{\pm} & 1 & 1 & \cdots & 1
\end{bmatrix} \otimes E_r.
\]
Define
\[
F_0 =
\begin{bmatrix}
0 & \mathbf{0}_m^{T} \\
\mathbf{0}_m & I_m - \frac{1}{m} J_m
\end{bmatrix} \otimes I_n.
\]
Then $H_Y$ has spectral decomposition
\[
H_Y = 0 \cdot F_0 + \sum_{r=1}^{d} \left( \lambda_r^{+} \cdot F_r^{+} + \lambda_r^{-} \cdot F_r^{-} \right). \tag{10}
\]
Note that the $(1,1)$-blocks are indexed by the vertices in $X$, and the eigenvalue $0$ is not in the support of $a$ nor $b$. The result follows from the $(1,1)$-block of $F_r^{+}$ and $F_r^{-}$ being a non-zero scalar multiple of $E_r$. □

Corollary 5.3. Suppose $X$ is a Hermitian graph with universal perfect state transfer with spectrum $\Phi$. Let $S$ be the set of non-pendant vertices in $X \circ \hat{K}_{1,m}$.
Let
\[
\Psi = \left\{ \frac{\theta \pm \sqrt{\theta^2 + 4m}}{2} \;\middle|\; \theta \in \Phi \right\}.
\]
If $\Psi$ is linearly independent over $\mathbb{Q}$, then $X \circ \hat{K}_{1,m}$ has multiple pretty good state transfer on $S$.

Proof. For $a, b \in S$, there is perfect state transfer between $a$ and $b$ in $X$, so $a$ and $b$ are strongly cospectral in $X \circ \hat{K}_{1,m}$ by Lemma 5.2. We see in Equation (10) that $\Psi$ is the eigenvalue support of $a$ in the rooted product. It follows from Theorem 2.4 that pretty good state transfer occurs between $a$ and $b$ in $X \circ \hat{K}_{1,m}$. □

In the following result, we focus on $X = \vec{K}_3$, which has spectral decomposition
\[
\begin{bmatrix}
0 & -i & i \\
i & 0 & -i \\
-i & i & 0
\end{bmatrix}
= 0 \cdot \frac{1}{3} J_3
+ \sqrt{3} \cdot \frac{1}{3}
\begin{bmatrix}
1 & e^{-2\pi i/3} & e^{2\pi i/3} \\
e^{2\pi i/3} & 1 & e^{-2\pi i/3} \\
e^{-2\pi i/3} & e^{2\pi i/3} & 1
\end{bmatrix}
- \sqrt{3} \cdot \frac{1}{3}
\begin{bmatrix}
1 & e^{2\pi i/3} & e^{-2\pi i/3} \\
e^{-2\pi i/3} & 1 & e^{2\pi i/3} \\
e^{2\pi i/3} & e^{-2\pi i/3} & 1
\end{bmatrix}.
\]
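This decomposition is easy to sanity-check numerically. The NumPy snippet below transcribes the matrix and its two nontrivial projections from the display above and also checks the entrywise-modulus condition behind strong cospectrality.

```python
import numpy as np

# Hermitian matrix of the oriented 3-cycle, as in the displayed decomposition.
H = np.array([[0, -1j, 1j],
              [1j, 0, -1j],
              [-1j, 1j, 0]])

w = np.exp(2j * np.pi / 3)
E0 = np.ones((3, 3)) / 3                    # projection for eigenvalue 0
Ep = np.array([[1, w.conj(), w],            # projection for eigenvalue +sqrt(3)
               [w, 1, w.conj()],
               [w.conj(), w, 1]]) / 3
Em = np.array([[1, w, w.conj()],            # projection for eigenvalue -sqrt(3)
               [w.conj(), 1, w],
               [w, w.conj(), 1]]) / 3

recon = 0 * E0 + np.sqrt(3) * Ep - np.sqrt(3) * Em
assert np.allclose(H, recon)                # the spectral decomposition holds
assert np.allclose(np.linalg.eigvalsh(H), [-np.sqrt(3), 0, np.sqrt(3)])
# Strong cospectrality: every entry of each projection has the same modulus,
# so |E_r[a,a]| = |E_r[a,b]| for every projection and every pair of vertices.
for E in (E0, Ep, Em):
    assert np.allclose(np.abs(E), np.abs(E[0, 0]))
```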
Hence any two vertices in $\vec{K}_3$ are strongly cospectral. Let $V(\vec{K}_3) = \{a, b, c\}$. Then the eigenvalue support of $a$ in $\vec{K}_3 \circ \hat{K}_{1,m}$ consists of
\[
\lambda_1 = \sqrt{m}, \quad
\lambda_2 = -\sqrt{m}, \quad
\lambda_3 = \frac{\sqrt{3} + \sqrt{3 + 4m}}{2}, \quad
\lambda_4 = \frac{\sqrt{3} - \sqrt{3 + 4m}}{2}, \quad
\lambda_5 = \frac{-\sqrt{3} + \sqrt{3 + 4m}}{2}, \quad
\lambda_6 = \frac{-\sqrt{3} - \sqrt{3 + 4m}}{2}.
\]
From Equation (10), the quarrels in $\vec{K}_3 \circ \hat{K}_{1,m}$ are
\[
q_r(a, b) =
\begin{cases}
0 & \text{if } r = 1, 2, \\
\frac{2\pi}{3} & \text{if } r = 3, 4, \\
-\frac{2\pi}{3} & \text{if } r = 5, 6.
\end{cases}
\]

Theorem 5.4. The rooted product $\vec{K}_3 \circ \hat{K}_{1,m}$ admits multiple pretty good state transfer on the set $\{a, b, c\}$ of non-pendant vertices if and only if one of the following holds.
1. $\gcd(3, m) = 1$.
2. $m = 3s$, for some integer $s$ such that neither $s$ nor $4s + 1$ is a perfect square.
3. $m = 27k^2$, for some integer $k$.
4. $m = 27k^2 + 27k + 6$, for some integer $k$.

Proof. Since $\vec{K}_3 \circ \hat{K}_{1,m}$ has an automorphism that maps $a$ to $b$, $b$ to $c$, and $c$ to $a$, it is sufficient to prove that there is pretty good state transfer from $a$ to $b$ in the rooted product. By Lemma 5.2, Condition (i) of Theorem 2.4 holds. For Condition (ii) of Theorem 2.4,
we consider integers $l_1, \ldots, l_6$ satisfying
\[
\sum_{r=1}^{6} l_r \lambda_r
= (l_1 - l_2)\sqrt{m}
+ \left( \frac{l_3 + l_4 - l_5 - l_6}{2} \right)\sqrt{3}
+ \left( \frac{l_3 - l_4 + l_5 - l_6}{2} \right)\sqrt{3 + 4m}
= 0. \tag{11}
\]

Case 1: If $\gcd(3, m) = 1$, then the set $\{\sqrt{3}, \sqrt{m}, \sqrt{3 + 4m}\}$ is linearly independent over $\mathbb{Q}$. Equation (11) implies $(l_3 + l_4 - l_5 - l_6)/2 = 0$ and
\[
\sum_{r=1}^{6} l_r q_r(a, b) = \left( l_3 + l_4 - l_5 - l_6 \right) \frac{2\pi}{3} = 0 \pmod{2\pi}. \tag{12}
\]
Condition (ii) of Theorem 2.4 holds with $\delta = 0$, so there is pretty good state transfer from $a$ to $b$ in $\vec{K}_3 \circ \hat{K}_{1,m}$.

Case 2: When $m = 3s$, Equation (11) becomes
\[
(l_1 - l_2)\sqrt{s}
+ \left( \frac{l_3 + l_4 - l_5 - l_6}{2} \right)
+ \left( \frac{l_3 - l_4 + l_5 - l_6}{2} \right)\sqrt{1 + 4s}
= 0.
\]
If $s$ and $4s + 1$ are not perfect squares, then $\{1, \sqrt{s}, \sqrt{1 + 4s}\}$ is linearly independent over $\mathbb{Q}$ and Equation (11) implies Equation (12). Hence there is pretty good state transfer from $a$ to $b$.
Case 3: Suppose $m = 3h^2$, for some integer $h$. Then $4h^2 + 1$ is not a perfect square, and Equation (11) becomes
\[
\left( \frac{2h(l_1 - l_2) + l_3 + l_4 - l_5 - l_6}{2} \right)
+ \left( \frac{l_3 - l_4 + l_5 - l_6}{2} \right)\sqrt{4h^2 + 1}
= 0,
\]
which implies $l_3 + l_4 - l_5 - l_6 = -2h(l_1 - l_2)$. If $h = 3k$, for some integer $k$, then Equation (12) holds and pretty good state transfer occurs from $a$ to $b$.

Suppose $h$ is not divisible by $3$. Equation (11) holds when $l_1 = l_2 = l_4 = l_5 = 0$ and $l_3 = l_6 = 1$. Since
\[
\sum_{r=1}^{6} l_r \left( q_r(a, b) + \delta \right) = 2\delta,
\]
Equation (7) holds if and only if $\delta \in \mathbb{Z}\pi$. Equation (11) also holds when $l_1 = 1$, $l_2 = l_3 = l_4 = 0$, $l_5 = l_6 = h$, but
\[
\sum_{r=1}^{6} l_r \left( q_r(a, b) + \delta \right) = -\frac{4h\pi}{3} + (2h + 1)\delta \neq 0 \pmod{2\pi}
\]
when $\delta \in \mathbb{Z}\pi$. We conclude that pretty good state transfer from $a$ to $b$ does not occur.
Case 4: Suppose $m = 3s$ with $4s + 1 = h^2$, for some integer $h$. Then $s$ is not a perfect square, and Equation (11) becomes
\[
(l_1 - l_2)\sqrt{s}
+ \frac{(l_3 + l_4 - l_5 - l_6) + h(l_3 - l_4 + l_5 - l_6)}{2}
= 0,
\]
which implies $l_3 + l_4 - l_5 - l_6 = -h(l_3 - l_4 + l_5 - l_6)$. If $h$ is divisible by $3$, then Equation (12) holds and pretty good state transfer occurs from $a$ to $b$. In this case, $m = 27k^2 + 27k + 6$ if we write $4s + 1 = 3^2(2k + 1)^2$.

If $h$ is not divisible by $3$, Equation (11) holds when $l_1 = l_2 = l_4 = l_5 = 0$, $l_3 = l_6 = 1$, and when $l_1 = l_2 = 0$, $l_3 = l_4 = h$, $l_5 = -1$ and $l_6 = 1$. Using the same argument as in the previous case, we see that there does not exist $\delta$ satisfying Equation (7) for both assignments of the $l_j$'s. We conclude that pretty good state transfer from $a$ to $b$ does not occur. □
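The four conditions of Theorem 5.4 are simple arithmetic tests on $m$. The Python predicate below is our own illustrative transcription (the function name is hypothetical), and it also cross-checks the parametrization derived in Case 4, where $4s + 1 = 9(2k+1)^2$ yields $m = 27k^2 + 27k + 6$.

```python
import math

def has_multiple_pgst(m: int) -> bool:
    """Test the four conditions of Theorem 5.4 for K3 o K_{1,m}, m >= 1."""
    def is_square(x: int) -> bool:
        return x >= 0 and math.isqrt(x) ** 2 == x

    if math.gcd(3, m) == 1:                               # Condition 1
        return True
    s = m // 3                                            # here 3 | m, so m = 3s
    if not is_square(s) and not is_square(4 * s + 1):     # Condition 2
        return True
    # Conditions 3 and 4: m = 27k^2 or m = 27k^2 + 27k + 6 for some integer k.
    for k in range(math.isqrt(m // 27) + 2):
        if m == 27 * k * k or m == 27 * k * k + 27 * k + 6:
            return True
    return False

# Case 4 arithmetic: 4s + 1 = 9(2k+1)^2 gives s = 9k^2 + 9k + 2, so m = 3s
# is 27k^2 + 27k + 6.
for k in range(1, 50):
    s = (9 * (2 * k + 1) ** 2 - 1) // 4
    assert 3 * s == 27 * k * k + 27 * k + 6

assert has_multiple_pgst(1)        # Condition 1
assert has_multiple_pgst(6)        # Condition 4 with k = 0
assert not has_multiple_pgst(3)    # m = 3h^2 with h = 1, not divisible by 3
```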
5.2 Circulants rooted with a looped path

In [4], Kempton et al. show that a path with a loop of transcendental weight $\gamma$ on each end-vertex has pretty good state transfer between the two end-vertices. We use $P_m^{\gamma}$ to denote the rooted path on vertices $\{1, 2, \ldots, m\}$ that has root $m$ and a loop on vertex $1$ with weight $\gamma$. Then the path of length $2m - 1$ with a loop of weight $\gamma$ on each end-vertex studied in [4] can be viewed as the rooted product of $K_2$ with $P_m^{\gamma}$.

[Figure: the path $P_m^{\gamma}$ on vertices $1, 2, \ldots, m$, rooted at $m$, with a loop of weight $\gamma$ at vertex $1$.]

We extend their result to the rooted product $X \circ P_m^{\gamma}$, where $X$ is a Hermitian circulant with rational eigenvalues that admits universal perfect state transfer. Orthogonal polynomials and the field trace are the main tools used in this section. Please see Chapter 8 of [17] for the background on orthogonal polynomials, and see [4] and Chapter 14 of [18] for some basic facts on the field trace.
Suppose $V(X) = \{x_0, x_1, \ldots, x_{n-1}\}$. Then we label the vertices of $X \circ P_m^{\gamma}$ with the ordered pair $(x_h, j)$ denoting the $j$-th vertex on the copy of $P_m^{\gamma}$ that is rooted at $x_h$ in $X$, for $h = 0, 1, \ldots, n - 1$ and $j = 1, \ldots, m$.

[Figure: the rooted product of $\vec{K}_3$ with $P_m^{\gamma}$, with non-pendant vertices $(x_0, m)$, $(x_1, m)$, $(x_2, m)$ and a loop of weight $\gamma$ at each vertex $(x_h, 1)$.]

Let $H_X$ be the matrix of the Hermitian circulant $X$ with universal perfect state transfer. It follows from Theorem 8 of [1] that the eigenvalues of $H_X$ are simple. Given distinct eigenvalues $\theta_0, \theta_1, \ldots, \theta_{n-1}$ of $H_X$ and the discrete Fourier matrix of order $n$,
\[
F_n = \frac{1}{\sqrt{n}}
\begin{bmatrix}
1 & 1 & 1 & \cdots & 1 \\
1 & \zeta & \zeta^2 & \cdots & \zeta^{n-1} \\
1 & \zeta^2 & \zeta^4 & \cdots & \zeta^{2(n-1)} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & \zeta^{n-1} & \zeta^{2(n-1)} & \cdots & \zeta^{(n-1)^2}
\end{bmatrix}
\]
where $\zeta = e^{2\pi i/n}$, we can write
\[
H_X = F_n
\begin{bmatrix}
\theta_0 & 0 & \cdots & 0 \\
0 & \theta_1 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \theta_{n-1}
\end{bmatrix}
F_n^{*}.
\]
For $0 \le a, b \le n - 1$, the vertices $x_a$ and $x_b$ are strongly cospectral with quarrels
\[
q_j(x_a, x_b) = \frac{2\pi j (b - a)}{n}, \tag{13}
\]
for $j = 0, 1, \ldots, n - 1$. Theorem 22 of [1] gives the following characterization of Hermitian circulants that have universal perfect state transfer.
Theorem 5.5. Let $X$ be a Hermitian circulant on $n$ vertices with simple eigenvalues $\theta_0, \ldots, \theta_{n-1}$. Then $X$ has universal perfect state transfer if and only if there exist $\alpha, \beta \in \mathbb{R}$ with $\beta > 0$, $c_0, \ldots, c_{n-1} \in \mathbb{Z}$, and an integer $h$ coprime with $n$ such that
\[
\theta_j = \alpha + \beta \left( jh + c_j n \right), \quad \text{for } j = 0, \ldots, n - 1.
\]

To determine the spectrum of $Z = X \circ P_m^{\gamma}$, we consider the $m \times m$ Jacobi matrices
\[
T_j :=
\begin{bmatrix}
\gamma & 1 & 0 & \cdots & 0 & 0 \\
1 & 0 & 1 & \cdots & 0 & 0 \\
0 & 1 & 0 & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & 0 & \cdots & 0 & 1 \\
0 & 0 & 0 & \cdots & 1 & \theta_j
\end{bmatrix},
\quad \text{for } j = 0, 1, \ldots, n - 1. \tag{14}
\]
Let $\varphi_{j,0} = 1$ and let $\varphi_{j,r}(t)$ be the characteristic polynomial of the $r$-th leading principal submatrix of $T_j$, for $r = 1, \ldots, m$. Then $\varphi_{j,0}(t), \varphi_{j,1}(t), \ldots, \varphi_{j,m}(t)$ is a sequence of orthogonal polynomials satisfying
\[
\varphi_{j,0}(t) = 1, \quad
\varphi_{j,1}(t) = t - \gamma, \quad
\varphi_{j,r}(t) = t\,\varphi_{j,r-1}(t) - \varphi_{j,r-2}(t) \tag{15}
\]
for $r = 2, \ldots, m - 1$, and
\[
\varphi_{j,m}(t) = \left( t - \theta_j \right) \varphi_{j,m-1}(t) - \varphi_{j,m-2}(t). \tag{16}
\]
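The three-term recurrence (15)-(16) can be checked against the characteristic polynomials it is supposed to compute. The instance of $T_j$ below uses arbitrary illustrative values of $\gamma$ and $\theta_j$, not ones taken from the paper.

```python
import numpy as np

# An arbitrary instance of T_j from Equation (14).
m, gamma, theta = 6, np.sqrt(2), 3.0
T = np.zeros((m, m))
T[0, 0] = gamma
T[m - 1, m - 1] = theta
for r in range(m - 1):
    T[r, r + 1] = T[r + 1, r] = 1.0

def phi(r, t):
    """phi_{j,r}(t) via the recurrence (15)-(16)."""
    if r == 0:
        return 1.0
    prev2, prev1 = 1.0, t - gamma          # phi_{j,0}, phi_{j,1}
    for k in range(2, r + 1):
        coeff = (t - theta) if k == m else t   # the last step uses Equation (16)
        prev2, prev1 = prev1, coeff * prev1 - prev2
    return prev1

# The recurrence reproduces det(tI - A_r) for each leading principal submatrix A_r.
for r in range(1, m + 1):
    for t in (-2.0, 0.5, 1.7, 4.0):
        det = np.linalg.det(t * np.eye(r) - T[:r, :r])
        assert np.isclose(phi(r, t), det)

# The m roots of phi_{j,m} are exactly the eigenvalues of T_j.
for lam in np.linalg.eigvalsh(T):
    assert np.isclose(phi(m, lam), 0, atol=1e-8)
```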
From Lemma 8.5.2 of [17], the roots $\lambda_{j,1}, \ldots, \lambda_{j,m}$ of $\varphi_{j,m}(t) = 0$ are the eigenvalues of $T_j$. Further,
\[
\Phi_{j,s} = \begin{bmatrix} 1 & \varphi_{j,1}(\lambda_{j,s}) & \cdots & \varphi_{j,m-1}(\lambda_{j,s}) \end{bmatrix}^{T}
\]
is an eigenvector of $T_j$ corresponding to the eigenvalue $\lambda_{j,s}$, for $s = 1, \ldots, m$. It follows from Lemma 8.1.1 of [17] that the eigenvalues of $T_j$ are simple. It is also known that consecutive orthogonal polynomials do not have a non-trivial common factor.

The Hermitian matrix of $Z$ is
\[
H_Z =
\begin{bmatrix}
0 & 0 & \cdots & 0 & 0 \\
0 & 0 & \cdots & 0 & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & 0 & 0 \\
0 & 0 & \cdots & 0 & 1
\end{bmatrix} \otimes H_X
+
\begin{bmatrix}
\gamma & 1 & \cdots & 0 & 0 \\
1 & 0 & \cdots & 0 & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & 0 & 1 \\
0 & 0 & \cdots & 1 & 0
\end{bmatrix} \otimes I_n. \tag{17}
\]
Since $H_X F_n e_j = \theta_j F_n e_j$, we have
\[
H_Z \left( \Phi_{j,s} \otimes F_n e_j \right) = \lambda_{j,s} \left( \Phi_{j,s} \otimes F_n e_j \right) \tag{18}
\]
for $j = 0, \ldots, n - 1$ and $s = 1, \ldots, m$.
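Equation (18) can be verified numerically for a small instance. The circulant below is an arbitrary Hermitian example chosen for illustration (it is not singled out by the paper), and the eigenvectors of $T_j$ returned by `eigh` are used in place of the specific $\Phi_{j,s}$, which is valid because the eigenvalues of $T_j$ are simple.

```python
import numpy as np

n, m, gamma = 3, 4, np.sqrt(2)
# An arbitrary Hermitian circulant: first row c with c[k] = conj(c[n-k]).
c = np.array([0, 1j, -1j])
HX = np.array([[c[(k - j) % n] for k in range(n)] for j in range(n)])
F = np.array([[np.exp(2j * np.pi * j * k / n) for k in range(n)]
              for j in range(n)]) / np.sqrt(n)
D = F.conj().T @ HX @ F
assert np.allclose(D, np.diag(np.diag(D)))     # F_n diagonalizes the circulant
theta = np.real(np.diag(D))

# H_Z from Equation (17): loop at path vertex 1, root (copy of H_X) at vertex m.
E_mm = np.zeros((m, m)); E_mm[m - 1, m - 1] = 1
P = np.zeros((m, m)); P[0, 0] = gamma
for r in range(m - 1):
    P[r, r + 1] = P[r + 1, r] = 1
HZ = np.kron(E_mm, HX) + np.kron(P, np.eye(n))

for j in range(n):
    Tj = P.copy(); Tj[m - 1, m - 1] += theta[j]    # T_j = P + theta_j E_mm
    lam, vec = np.linalg.eigh(Tj)
    for s in range(m):
        v = np.kron(vec[:, s], F[:, j])            # eigenvector of T_j (x) F_n e_j
        assert np.allclose(HZ @ v, lam[s] * v)     # Equation (18)
```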
Lemma 5.6. Let $X$ be a Hermitian circulant with distinct eigenvalues $\theta_0, \theta_1, \ldots, \theta_{n-1}$, and let $F_n$, $\lambda_{j,s}$, and $\Phi_{j,s}$ be defined as above. For $j = 0, \ldots, n - 1$ and $s = 1, \ldots, m$, $\lambda_{j,s}$ is a simple eigenvalue of the Hermitian graph $Z$ defined in Equation (17), with spectral decomposition
\[
H_Z = \sum_{j=0}^{n-1} \sum_{s=1}^{m} \lambda_{j,s} \frac{1}{\|\Phi_{j,s}\|^2} \left( \Phi_{j,s} \Phi_{j,s}^{*} \right) \otimes \left( (F_n e_j)(F_n e_j)^{*} \right).
\]
For $x_a, x_b \in V(X)$ and $h = 1, \ldots, m$, the vertices $(x_a, h)$ and $(x_b, h)$ are strongly cospectral in $Z$, with the quarrel corresponding to the eigenvalue $\lambda_{j,s}$ being
\[
q_{j,s}\left( (x_a, h), (x_b, h) \right) = \frac{2\pi j (b - a)}{n},
\]
for $j = 0, \ldots, n - 1$ and $s = 1, \ldots, m$.

Proof. It is sufficient to show that the eigenvalues $\lambda_{j,s}$ of $Z$, for $j = 0, \ldots, n - 1$ and $s = 1, \ldots, m$, are distinct. Suppose $\lambda_{j_1,s_1} = \lambda_{j_2,s_2}$.
From Equation (15), we have $\varphi_{j_1,r}(\lambda_{j_1,s_1}) = \varphi_{j_2,r}(\lambda_{j_2,s_2})$, for $r = 1, \ldots, m - 1$. From Equation (16), $\varphi_{j_1,m}(\lambda_{j_1,s_1}) = \varphi_{j_2,m}(\lambda_{j_2,s_2}) = 0$ implies $\theta_{j_1} = \theta_{j_2}$ and $j_1 = j_2$. Since $\varphi_{j_1,m}(t) = 0$ has $m$ distinct roots, we conclude that $s_1 = s_2$. We get the quarrels of $Z$ directly from Equations (18) and (13). □

For the rest of this section, we assume that $\gamma$ is transcendental and $\theta_0, \theta_1, \ldots, \theta_{n-1} \in \mathbb{Q}$, as in Theorem 5.8. Applying Laplace expansion along the first two rows of $T_j$ in Equation (14) gives
\[
\varphi_{j,m}(t) = (t - \gamma)\,g_{n-1}(t) - g_{n-2}(t),
\]
where $g_{n-1}(t)$ is the characteristic polynomial of the $(n - 1) \times (n - 1)$ Jacobi matrix
\[
\begin{pmatrix}
\theta_j & 1 & \cdots & 0 & 0 \\
1 & 0 & \cdots & 0 & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & 0 & 1 \\
0 & 0 & \cdots & 1 & 0
\end{pmatrix},
\]
and $g_{n-2}(t)$ is the characteristic polynomial of its $(n - 2)$-th leading principal submatrix. Now $g_{n-1}(t)$ and $g_{n-2}(t)$ are consecutive orthogonal polynomials, so they do not have any common factor of positive degree.
Since $g_{n-1}(t)$ and $g_{n-2}(t)$ are rational polynomials and $\gamma$ is transcendental, we conclude that $\varphi_{j,m}(t)$ is irreducible over $\mathbb{Q}(\gamma)$. Then the splitting field $F_j$ of $\varphi_{j,m}(t)$ is a Galois extension over $\mathbb{Q}(\gamma)$.

Given a Galois extension $E/K$, we use $\mathrm{Tr}_{E/K}(\mu)$ to denote the trace of $\mu$ from $E$ to $K$. Here are some properties of the trace map useful for the proof of Theorem 5.8.

Theorem 5.7. Let $E/K$ be a Galois extension. The following properties hold.
i. For $\mu \in E$, $\mathrm{Tr}_{E/K}(\mu) \in K$.
ii. For $\mu \in K$, $\mathrm{Tr}_{E/K}(\mu) = [E : K]\,\mu$.
iii. For $\mu_1, \mu_2 \in E$, $\mathrm{Tr}_{E/K}(\mu_1 + \mu_2) = \mathrm{Tr}_{E/K}(\mu_1) + \mathrm{Tr}_{E/K}(\mu_2)$.
iv. If $K \subset F \subset E$ are extension fields, then $\mathrm{Tr}_{E/K}(\mu) = \mathrm{Tr}_{F/K}\left( \mathrm{Tr}_{E/F}(\mu) \right)$.
v. If the minimal polynomial of $\mu \in E$ over $K$ is $t^m + a_{m-1} t^{m-1} + \cdots + a_0$, then $\mathrm{Tr}_{E/K}(\mu) = -\frac{[E : K]}{m}\,a_{m-1}$.

The eigenvalue $\lambda_{j,s}$ of $X \circ P_m^{\gamma}$ has minimal polynomial $\varphi_{j,m}(t)$ over $\mathbb{Q}(\gamma)$. Applying Property (v) to $\lambda_{j,s} \in F_j$, Equation (16) gives
\[
\mathrm{Tr}_{F_j/\mathbb{Q}(\gamma)}(\lambda_{j,s}) = \frac{[F_j : \mathbb{Q}(\gamma)]}{m} \left( \gamma + \theta_j \right). \tag{19}
\]
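Behind Equation (19) is the fact that $\varphi_{j,m}$ is monic with $-a_{m-1}$ equal to $\gamma + \theta_j$, the trace of $T_j$. A quick NumPy check of this coefficient identity, with arbitrary illustrative values of $\gamma$ and $\theta_j$:

```python
import numpy as np

m, gamma, theta = 7, np.sqrt(5), -2.0
# The Jacobi matrix T_j of Equation (14): diagonal (gamma, 0, ..., 0, theta_j).
T = np.zeros((m, m))
T[0, 0], T[m - 1, m - 1] = gamma, theta
for r in range(m - 1):
    T[r, r + 1] = T[r + 1, r] = 1.0

coeffs = np.poly(T)                 # characteristic polynomial, leading coefficient 1
assert np.isclose(-coeffs[1], gamma + theta)           # -a_{m-1} = gamma + theta_j
assert np.isclose(np.linalg.eigvalsh(T).sum(), gamma + theta)
```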
Consider the smallest extension field $M$ of $F_j$ that contains $F_0, \ldots, F_{n-1}$. For $j = 0, \ldots, n - 1$, $M/F_j$ is a Galois extension. It follows from Properties (ii) and (iv) and Equation (19) that
\[
\mathrm{Tr}_{M/\mathbb{Q}(\gamma)}(\lambda_{j,s})
= \mathrm{Tr}_{F_j/\mathbb{Q}(\gamma)}\left( [M : F_j]\,\lambda_{j,s} \right)
= [M : F_j]\,\frac{[F_j : \mathbb{Q}(\gamma)]}{m}\left( \gamma + \theta_j \right)
= \frac{[M : \mathbb{Q}(\gamma)]}{m}\left( \gamma + \theta_j \right). \tag{20}
\]

Theorem 5.8. Let $X$ be a Hermitian circulant on $n$ vertices that admits universal perfect state transfer with eigenvalues given in Theorem 5.5. If $\theta_0, \ldots, \theta_{n-1} \in \mathbb{Q}$ and $\gamma$ is transcendental then, for any positive integer $m$, the rooted product $X \circ P_m^{\gamma}$ has multiple pretty good state transfer on the set $\{(x_0, h), (x_1, h), \ldots, (x_{n-1}, h)\}$, for $1 \le h \le m$.
Proof. For $h = 1, \ldots, m$, $X \circ P_m^{\gamma}$ has an automorphism that maps $(x_a, h)$ to $(x_{a+1}, h)$, for $a \in \mathbb{Z}_n$. It is sufficient to show that there is pretty good state transfer from $(x_0, h)$ to $(x_1, h)$. By Lemma 5.6, $(x_0, h)$ and $(x_1, h)$ are strongly cospectral with quarrels
\[
q_{j,s}\left( (x_0, h), (x_1, h) \right) = \frac{2\pi j}{n},
\]
for $j = 0, \ldots, n - 1$ and $s = 1, \ldots, m$. To show that Condition (ii) of Theorem 2.4 holds, consider integers $l_{j,s}$ satisfying
\[
\sum_{j=0}^{n-1} \sum_{s=1}^{m} l_{j,s} \lambda_{j,s} = 0. \tag{21}
\]
We apply the trace from $M$ to $\mathbb{Q}(\gamma)$ to both sides. Applying Theorem 5.7 (iii) and Equation (20), Equation (21) implies
\[
\sum_{j=0}^{n-1} \sum_{s=1}^{m} l_{j,s} (\gamma + \theta_j)
= \gamma \left( \sum_{j=0}^{n-1} \sum_{s=1}^{m} l_{j,s} \right)
+ \sum_{j=0}^{n-1} \theta_j \left( \sum_{s=1}^{m} l_{j,s} \right)
= 0.
\]
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfgvxw/content/2301.01473v1.pdf'} +page_content=' Since 훾 is transcendental and ∑ 푗 휃푗 (∑ 푠 푙푗,푠 ) ∈ Q, Equation (21) is equivalent to 푛−1 ∑ 푗=0 푚 ∑ 푠=1 푙푗,푠 = 0 (22) and 푛−1 ∑ 푗=0 휃푗 ( 푚 ∑ 푠=1 푙푗,푠 ) = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfgvxw/content/2301.01473v1.pdf'} +page_content=' (23) Recall 휃푗 = 훼 + 훽(푗ℎ + 푐푗푛) where gcd(ℎ, 푛) = 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfgvxw/content/2301.01473v1.pdf'} +page_content=' Equations (22) and (23) imply 푛−1 ∑ 푗=0 (푗ℎ + 푐푗푛) ( 푚 ∑ 푠=1 푙푗,푠 ) = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfgvxw/content/2301.01473v1.pdf'} +page_content=' Since gcd(ℎ, 푛) = 1, we have 푛−1 ∑ 푗=0 푗 푚 ∑ 푠=1 푙푗,푠 = 0 (mod 푛).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfgvxw/content/2301.01473v1.pdf'} +page_content=' If Equations (22) and (23) hold then, for any 훿 ∈ R, 푛−1 ∑ 푗=0 푚 ∑ 푠=1 푙푗,푠 ( 푞푗,푠 ( (푥0, ℎ), (푥1, ℎ) ) + 훿 ) = 2휋 푛 (푛−1 ∑ 푗=0 푗 푚 ∑ 푠=1 푙푗,푠 ) + 훿 (푛−1 ∑ 푗=0 푚 ∑ 푠=1 푙푗,푠 ) = 0 (mod 2휋).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfgvxw/content/2301.01473v1.pdf'} +page_content=' By Theorem 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfgvxw/content/2301.01473v1.pdf'} +page_content='4, pretty good state transfer occurs from (푥0, ℎ) to (푥1, ℎ), for ℎ = 1, … , 푚.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfgvxw/content/2301.01473v1.pdf'} +page_content=' 19 Remark 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AzT4oBgHgl3EQfgvxw/content/2301.01473v1.pdf'} +page_content='9.' 
Putting a transcendental weight $\gamma$ on the loops is sufficient for $\varphi_{0,m}(t), \dots, \varphi_{n-1,m}(t)$ to be irreducible over $\mathbb{Q}(\gamma)$. Theorem 5.8 holds for an irrational number $\gamma$ as long as $\varphi_{0,m}(t), \dots, \varphi_{n-1,m}(t)$ are irreducible over $\mathbb{Q}(\gamma)$. If we move the loops from $(x_a, 1)$ to $(x_a, m)$, for $a = 0, \dots, n-1$, then a similar argument shows that the resulting graph has multiple pretty good state transfer on the set $\{(x_0, h), (x_1, h), \dots, (x_{n-1}, h)\}$, for $h = 1, \dots, m$.

Acknowledgements. This project was completed under the 2021 Fields Undergraduate Summer Research Program, which provided support for A. Acuaviva, S. Eldridge, M. How, and E. Wright. C. Godsil gratefully acknowledges the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), Grant No. RGPIN-9439. A. Chan is grateful for the support of NSERC Grant No. RGPIN-2021-03609.

diff --git a/AtE2T4oBgHgl3EQfnAjp/content/tmp_files/2301.04005v1.pdf.txt b/AtE2T4oBgHgl3EQfnAjp/content/tmp_files/2301.04005v1.pdf.txt

Towards AI-controlled FES-restoration of arm movements: Controlling for progressive muscular fatigue with Gaussian state-space models

Nat Wannawas
Dept. of Bioengineering, Imperial College London, London, UK
nat.wannawas18@imperial.ac.uk

A. Aldo Faisal
Dept. of Bioengineering & Dept. of Computing, Imperial College London, London, UK
Chair of Digital Health & Data Science, University of Bayreuth, Bayreuth, Germany
aldo.faisal@imperial.ac.uk

Abstract—Reaching disability limits an individual's ability to perform daily tasks. Surface Functional Electrical Stimulation (FES) offers a non-invasive solution to restore the lost ability. However, inducing desired movements using FES is still an open engineering problem.
This problem is accentuated by the complexities of human arms' neuromechanics and the variations across individuals. Reinforcement Learning (RL) emerges as a promising approach to govern customised control rules for different settings. Yet, one remaining challenge of controlling FES systems with RL is unobservable muscle fatigue, which progressively changes as an unknown function of the stimulation, thereby breaking the Markovian assumption of RL.

In this work, we present a method that addresses the unobservable muscle fatigue issue, allowing our RL controller to achieve higher control performance. Our method is based on a Gaussian State-Space Model (GSSM) that utilizes recurrent neural networks to learn Markovian state-spaces from partial observations. The GSSM is used as a filter that converts the observations into a state-space representation for RL, preserving the Markovian assumption. We start by presenting a modification of the original GSSM that addresses an overconfidence issue. We then present the interaction between RL and the modified GSSM, followed by the setup for FES control learning. We test our RL-GSSM system on a planar reaching task in simulation using a detailed neuromechanical model. The results show that the GSSM helps improve the RL controller's performance to a level comparable to the ideal case in which the fatigue is observable.

Index Terms—Functional Electrical Stimulation, FES, Gaussian State-Space Model, Reinforcement Learning, Arm Motions

I. INTRODUCTION

Every year, strokes and spinal cord injuries leave individuals around the world with paralysis. Upper-body paralysis, one of the most common consequences of such incidents, causes the dysfunction of arm movements and severely affects an individual's ability to perform daily tasks. Functional Electrical Stimulation (FES), a technique that uses electrical signals to induce muscle contraction, offers a solution for restoring the movements.
Yet, controlling FES to induce desired movements is challenging. One challenge is that each individual requires customised stimulation to induce a certain movement. This causes difficulties in designing a control method that works across different individuals without intensive, manual configuration. Another challenge is that the muscles' responses to FES change over time because of muscular fatigue. Since the fatigue level cannot be directly measured, it is difficult for a controller to maintain its performance over extended periods.

Several methods that can automatically find customised stimulation have been investigated. One of them is Reinforcement Learning (RL), a machine learning algorithm in which a learning agent (the RL agent) learns to control an environment through interaction. The successes of RL in controlling body movements have been demonstrated in several scenarios: cycling [1], walking [2], and arm movements [3]–[7]. Additionally, [7] shows that RL can deal with fatigue to a certain degree; yet, a performance drop is still inevitable in many cases.

Different approaches have been employed to deal with muscular fatigue. A widely used approach is to record the electromyogram (EMG) or mechanomyogram (MMG), from which muscle force can be estimated [8]–[11]. Although this approach can be straightforward, its successes are currently limited to a few types of movements, such as knee extension [10], [11] and cycling [8], [9]. Additionally, it requires sensors, which can be difficult to set up. Approaches that exploit the periodic nature of movements such as walking are used in [12], [13]. However, these may not be suitable for controlling arbitrary arm movements. Our previous work [6] explores an approach that does not use dedicated sensors and can be applied to arbitrary movements.

(Funding footnote) We acknowledge funding from the Royal Thai Government Scholarship to NW and a UKRI Turing AI Fellowship to AAF.
The approach uses a recurrent neural network (RNN) to encode the history of observations and provide additional information to the RL agent. This strategy can control arbitrary single-joint movements in the real world. However, its capability in multiple-joint cases is limited.

In this work, we present an AI-based system for controlling FES that can induce arbitrary desired movements and can maintain performance under progressive muscular fatigue. Our system uses the combination of an RNN-based Gaussian state-space model (GSSM), which learns Markovian state representations, and RL, which learns the control policies on the representation spaces. In simple terms, the GSSM here functions as a filter that provides insight into the system's hidden states to the RL agents, allowing the agents to select better actions. Compared to our previous work [6], this system is more powerful and capable of learning the complex dynamics of multiple-joint movements. Additionally, it produces probabilistic transition functions that can be useful, for example, for model-based RL.

arXiv:2301.04005v1 [eess.SY] 10 Jan 2023

We present the details of our RL-GSSM system and the setup for controlling arbitrary movements in the Methods section. We also provide a modification of the original GSSM [14] to address an overconfidence issue. We demonstrate our system in a planar arbitrary-reaching setting using a detailed neuromechanical model and show that our system can achieve and maintain the control performance at the same level as the ideal case in which muscle fatigue is observable.

II. METHODS

A. Gaussian State-Space Models (GSSM)

Here, the GSSM functions as a filter that converts an observable environment state vector (o_t) into a state-representation vector (x_t) that contains information about the system's hidden states. Our GSSM is based on [14], whose main components are an RNN-based filter (f_Filter) and a transition function (f_Tran).
The filter converts o_t into x_t through the following process. The process starts at the zeroth time step (t = 0) with the initialisation of the RNN's hidden states (h_0) and the state representation (x_0). x_0 is then concatenated with the initial action vector a_init and passed through W_s, a small multilayer perceptron (MLP). This step is mathematically expressed as h_{x,t=0} = W_s([x_0; a_0]^T). Meanwhile, the RNN observes the environment's state o_0 and updates its hidden state to h_{t=1}. h_{x,t=0} and h_{t=1} are then combined as h_{c,t=1} = (1/2) tanh(h_{x,t=0} + h_{t=1}). Next, h_{c,t=1} is passed through W_x, an MLP that outputs the distribution of x_t. The following time steps repeat this process but start with the sampled x_t and the actual actions a_t. The trajectory of x_t, denoted x_{0:T}, is obtained by repeating this process over the whole trajectory of observations o_{0:T}. For future notation, the RNN, W_h, and W_x are referred to collectively as f_Filter.

The GSSM is trained using trajectories of observations (o_{0:T}) as follows. The training process starts with using f_Filter to sample x_{0:T} corresponding to o_{0:T}. Next, we reconstruct the observations by passing the sampled x_{0:T} through the observation mapping function W_g, expressed as k_{0:T} = W_g(x_{0:T}). The parameters of f_Filter are optimised through gradient descent to minimise the following loss functions. The first loss function is the likelihood between k_{0:T} and o_{0:T}, expressed as
$$l_{lik} = \sum_{t=1}^{T} p(o_t \mid \mu_{k,t}, \Sigma_{k,t}),$$
where \mu_{k,t} and \Sigma_{k,t} are the mean and covariance of the reconstructed observations, respectively. The second loss function is the KL divergence between the x_{0:T} distribution sampled by f_Filter and that predicted by f_Tran, expressed as
$$l_{D_{KL}} = \sum_{t=2}^{T} D_{KL}\big[f_{Filter}(x_{t-1}, o_{0:t}) \,\|\, f_{Tran}(x_{t-1})\big].$$
Intuitively, this loss function encourages the filter-generated distribution of x_t, p_f(x_t), to have a Markovian structure, i.e., p_f(x_t | x_{t-1}, o_{0:t}) = p(x_t | x_{t-1}).
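As a concrete illustration, one filtering step and the two training terms can be sketched roughly as follows. This is a minimal NumPy sketch, not the paper's code: the toy random matrices stand in for W_s, the RNN, and W_x; all sizes are our own assumptions, and the likelihood term is written as a Gaussian negative log-likelihood (the quantity one would minimise).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumed): 4 observation dims, 8 RNN units, 3 latent dims.
O, H, X = 4, 8, 3

# Random toy weights standing in for W_s, the RNN maps, and W_x.
Ws = rng.normal(size=(H, X + 1)) * 0.1   # maps [x_{t-1}; a_{t-1}] -> h_x
Wo = rng.normal(size=(H, O)) * 0.1       # RNN input map for o_t
Wh = rng.normal(size=(H, H)) * 0.1       # RNN recurrent map
Wx = rng.normal(size=(2 * X, H)) * 0.1   # outputs mean and log-variance of x_t

def filter_step(x_prev, a_prev, h_prev, o_t):
    """One filter step: combine the action-conditioned latent with the RNN's
    observation encoding, then emit a diagonal Gaussian over x_t and sample it."""
    h_x = Ws @ np.concatenate([x_prev, [a_prev]])
    h_t = np.tanh(Wo @ o_t + Wh @ h_prev)        # simple RNN update on o_t
    h_c = 0.5 * np.tanh(h_x + h_t)               # h_c = (1/2) tanh(h_x + h)
    out = Wx @ h_c
    mu, logvar = out[:X], out[X:]
    x_t = mu + np.exp(0.5 * logvar) * rng.normal(size=X)  # reparameterised sample
    return x_t, (mu, np.exp(logvar)), h_t

def gaussian_nll(o, mu, var):
    """Reconstruction term: negative log-likelihood of o under a diagonal Gaussian."""
    return 0.5 * np.sum(np.log(2 * np.pi * var) + (o - mu) ** 2 / var)

def kl_diag_gaussians(mu_f, var_f, mu_p, var_p):
    """One term of l_DKL: KL( filter distribution || transition prediction )
    for diagonal Gaussians."""
    return 0.5 * np.sum(np.log(var_p / var_f)
                        + (var_f + (mu_f - mu_p) ** 2) / var_p - 1.0)
```

Summing `gaussian_nll` over t = 1..T and `kl_diag_gaussians` over t = 2..T (with the transition's predicted moments as the second pair of arguments) gives the two losses described above.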
Note that the observation history o_{0:t-1} is encoded in the RNN's hidden states.

In the original model [14], f_Tran is represented by a neural network that directly outputs the means and variances of x_t. This network produces overconfidence in the learned transition function. To mitigate this issue, we replace that network with an ensemble of neural networks with randomised prior functions (RP-Ensemble) [15]. The predictive means and variances are computed by fitting Gaussian distributions to the ensemble's outputs.

B. Generic RL-GSSM for controlling arbitrary movements

Reinforcement Learning (RL) learns a task through reward signals collected from interactions with an environment. The interactions occur in a discrete-time fashion, starting with the agent observing the environment's state s_t and selecting an action a_t based on its policy π. The action causes the environment to move into a new state s_{t+1}. The agent then receives an immediate reward r_t and observes the new state. This interaction experience is collected as a tuple (s_t, a_t, r_t, s_{t+1}), which is stored in a replay buffer D. These tuples are used to learn an optimal policy π* that maximises the return R, the sum of discounted immediate rewards.

The introduction of the GSSM into the system causes a few changes in the typical RL learning process. To avoid confusing notation, we hereafter use s_t to denote RL state vectors. Fig. 1 shows an overview diagram of our RL-GSSM system. The system has two phases, an interaction phase and an updating phase, described as follows. At each time step in the interaction phase, f_Filter observes o_t, updates the RNN's hidden states, and generates a state representation x_t. The agent then selects an action a_t based on s_t = [o_t; x_t; c_t]^T, where c_t is the control target at time t. The action affects the environment, the system moves into the next time step, and the process repeats.
The interactions are stored as ([o_t; c_t]^T, a_t, r_t, [o_{t+1}; c_{t+1}]^T) in a Trajectory Buffer.

The updating phase begins with drawing sampled trajectories (õ_{0:T}) from the Trajectory Buffer and using them to update the GSSM. After that, the updated f_Filter is used to generate new trajectories of s_t corresponding to õ_{0:T}. The new s_t trajectories are then converted into new RL experience tuples stored in a typical Replay Buffer, and the RL agent is updated following a typical method.

Fig. 1. (a) Diagram showing the overview of our RL-GSSM system. The dashed blue line separates RL and GSSM. The GSSM parts in yellow boxes are excluded during the interaction phase. This phase starts with the initialisation (on the left) and evolves as follows. At time step t, the previous action a_{t-1} is appended to the state representation of the previous time step, x_{t-1}. The Filter then combines the appended vector with the incoming observation o_t and samples the state representation of the current time step, x_t. The average of x_t, denoted x̄_t, is concatenated with o_t and a control target c_t to become an RL state vector s_t. The interaction data are stored in the Trajectory Buffer. (b) Diagram showing the overview of the training phase, which begins with sampling the stored trajectories and updating the GSSM. The updated Filter is then used to generate new RL experience tuples, which are used to update the RL agent.

C. RL-GSSM setup for controlling planar movements

The environment here is a neuromechanical model built in OpenSim. The model has a human arm placed on an arm support that moves with low friction on a table (Fig. 2b). The model has 6 muscles; the 4 muscles labelled in the figure are stimulated. The muscles fatigue progressively as a function of the stimulation (see [1] for more details). The observable environment states are the angles and angular velocities of the shoulder and elbow ($o_t = [\theta_{s,t}; \theta_{e,t}; \dot{\theta}_{s,t}; \dot{\theta}_{e,t}]^T$).
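Before detailing the learning setup, the two-phase procedure described in Sec. II-B can be sketched in code. This is a hypothetical minimal sketch: the buffer sizes, the dummy ten-step episode, the dimensions, and the `filter_regenerate` stand-in for the trained f_Filter are all illustrative assumptions, and the GSSM gradient step itself is omitted.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(1)

def filter_regenerate(o_traj):
    # stand-in for re-running the updated f_Filter over o_{0:T} to get x_{0:T} (assumption)
    return np.tanh(o_traj @ rng.normal(size=(4, 3)) * 0.1)

trajectory_buffer = deque(maxlen=100)     # stores whole episodes of ([o; c], a, r) steps
replay_buffer = deque(maxlen=10_000)      # standard RL replay buffer D

# The interaction phase would fill trajectory_buffer; here a dummy 10-step episode
# with obs o (4,), control target c (2,), stimulation action a (4,), reward r:
episode = [(rng.normal(size=4), rng.normal(size=2), rng.uniform(0, 1, 4), -1.0)
           for _ in range(10)]
trajectory_buffer.append(episode)

# Updating phase: draw stored trajectories, (re)train the GSSM on o_{0:T},
# then regenerate fresh x_t and assemble RL tuples with s_t = [o_t; x_t; c_t].
for episode in trajectory_buffer:
    o_traj = np.stack([o for o, c, a, r in episode])
    # ... GSSM gradient step on o_traj would go here (omitted) ...
    x_traj = filter_regenerate(o_traj)
    for t in range(len(episode) - 1):
        o, c, a, r = episode[t]
        o2, c2, _, _ = episode[t + 1]
        s = np.concatenate([o, x_traj[t], c])
        s2 = np.concatenate([o2, x_traj[t + 1], c2])
        replay_buffer.append((s, a, r, s2))   # standard (s, a, r, s') tuple for the agent

print(len(replay_buffer), replay_buffer[0][0].shape)
```

Regenerating x_t with the freshly updated filter, rather than reusing the representations sampled during interaction, keeps the replay buffer consistent with the current GSSM.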
The RL algorithm of choice is soft actor-critic [16]. Both the actor and the critic are parameterised by fully-connected neural networks with two hidden layers. The actor's output layer has a sigmoid activation function to squash the outputs within [0, 1]. The RL task here is to apply muscle stimulation to move the arm to desired poses, which are specified by target shoulder and elbow joint angles (θ_tar,t). The state vector s_t is [o_t; x_t; θ_tar,t]^T. The action vector a_t comprises the normalised stimulation intensities (each in [0, 1]) of the stimulated muscles. The immediate reward r_t is simply computed from the squared error and an action penalty as

$$r_t = -(\theta_t - \theta_{tar,t})^2 - \frac{\sum_{i=0}^{n} a_i}{n},$$

where n is the number of stimulated muscles.

The training is episodic. Each episode has 100 time steps with a 100 ms time-step size. The episodes begin at random poses, targets, and fatigue levels. A new random target is assigned at the 50th time step. Every 5 training episodes, the control performance is evaluated in RMSE measure on 50 test episodes with the same settings as the training episodes.

III. RESULTS

A. Ensemble transition function

We replace f_Tran of the original model [14], denoted f_Tr,Ori, with RP-Ensemble, denoted f_Tr,Ens, to address the overconfidence issue. We test both models on a benchmarking function, Kink [17]. Fig. 2a shows the learned transitions. Both f_Tr,Ori and f_Tr,Ens produce good predictive means. However, f_Tr,Ori is overconfident, as indicated by low predictive variances at the locations where the data, represented by x marks, are absent. In contrast, f_Tr,Ens has higher predictive variances at those locations.

B. Controlling planar arm movements

We train our RL-GSSM to control planar arm movements under progressive muscular fatigue through muscle stimulation. We explore three cases: 1) RL-ideal, where the fatigue is observable; 2) RL-vanilla, where it is unobservable; and 3) RL-GSSM.
The RL agents are trained for 100 episodes in all cases; the training is repeated 10 times.

Fig. 2c shows the performance evaluations in RMSE measure along the training. RL-vanilla's performance has the steepest improvement at the beginning but stagnates at the worst level. RL-GSSM's curve, compared to RL-ideal, has higher standard deviations in the early period because the agents have to simultaneously learn the controls and follow the not-yet-converged GSSM. RL-GSSM's performance improves slightly more slowly but reaches the same level within 100 episodes.

Fig. 3 shows the control behaviours in tracking an arbitrary trajectory. The agents produce good tracking in all cases. The grey circles highlight good comparison points. Both RL-ideal (Fig. 3a) and RL-GSSM (Fig. 3c) can bring the shoulder and elbow to the [45°, 45°] targets anytime when requested. RL-vanilla, however, tends to lose its performance in the second half as the actual angles increasingly deviate from the targets (Fig. 3b). Fig. 3d-f show the stimulation (solid lines) and the %maximum force that the muscles can produce (dashed lines). The %maximum force decreases over time as the stimulation induces muscular fatigue. Compared to RL-ideal (Fig. 3d), RL-vanilla (Fig. 3e) overstimulates and causes rapid declines in the muscle forces. The declines in the RL-GSSM and RL-ideal cases are at the same rate on average. RL-GSSM's stimulation has small noise throughout the session.

Fig. 2. (a) The learnt Kink function of (left) the original GSSM and (right) the GSSM with the RP-Ensemble transition function. (b) Neuromechanical model of planar arm movement built in OpenSim. (c) The control performances evaluated along the training. The shades show the standard deviations of 10 runs.

Fig. 3. Control behaviours in tracking an arbitrary trajectory.
(a-c) Plots showing the targets (dashed) and the actual angles (solid) achieved in the (a) RL-ideal, (b) RL-vanilla, and (c) RL-GSSM cases. (d-f) %maximum stimulation that the RL agents apply to the muscles (solid) and %maximum forces that the muscles can produce (dashed). The %maximum forces decrease in response to the muscular fatigue induced by the stimulation.

IV. CONCLUSIONS

We present an AI-based approach for controlling FES under progressive muscular fatigue. Our RL-GSSM approach uses RL to learn the control policies and a GSSM, modified to address the overconfidence issue, to provide Markovian state representations to the RL. We demonstrate our approach by controlling arbitrary planar arm movements using a detailed neuromechanical model. We show that our RL-GSSM can achieve and maintain its control performance at the same level as the ideal case where the fatigue is observable.

REFERENCES
[1] N. Wannawas, M. Subramanian, and A. A. Faisal, "Neuromechanics-based deep reinforcement learning of neurostimulation control in FES cycling," in Intl. IEEE/EMBS Conf. on Neural Engineering (NER), 2021.
[2] A. Anand et al., "A deep reinforcement learning based approach towards generating human walking behavior with a neuromuscular model," in 19th Intl. Conf. on Humanoid Robots, 2019.
[3] P. Thomas et al., "Creating a reinforcement learning controller for functional electrical stimulation of a human arm," in 14th Yale Workshop on Adaptive and Learning Systems, 2008.
[4] K. M. Jagodnik et al., "Human-like rewards to train a reinforcement learning controller for planar arm movement," IEEE Trans. on Human-Machine Systems, vol. 46, pp. 723–733, Oct. 2016.
[5] D. N. Wolf, Z. A. Hall, and E. M. Schearer, "Model learning for control of a paralyzed human arm with functional electrical stimulation," in IEEE Intl. Conf. on Robotics and Automation (ICRA), 2020, p. 10148.
[6] N. Wannawas, A. Shafti, and A. A. Faisal, "Neuromuscular reinforcement learning to actuate human limbs through FES," in IFESS22, 2022.
[7] J. Abreu et al., "Deep reinforcement learning for control of time-varying musculoskeletal systems with high fatigability: a feasibility study," IEEE Trans. Neural Sys. and Rehab. Eng., 2022.
[8] B. Woods, M. Subramanian, A. Shafti, and A. A. Faisal, "Mechanomyography based closed-loop functional electrical stimulation cycling system," in 7th IEEE Intl. Conf. on Biomed. Robotics and Biomechatronics, Aug. 2018, pp. 179–184.
[9] M. Islam et al., "Mechanomyography responses characterize altered muscle function during electrical stimulation-evoked cycling in individuals with spinal cord injury," Clinical Biomechanics, vol. 58, 2018.
[10] J. Naeem et al., "Electrical stimulator with mechanomyography-based real-time monitoring, muscle fatigue detection, and safety shut-off: A pilot study," Biomedizinische Technik, vol. 65, 2020.
[11] E. Krueger et al., "Neuromuscular fatigue detection by mechanomyography in people with complete spinal cord injury," Research on Biomedical Engineering, vol. 36, pp. 203–212, 2020.
[12] A. J. Del-Ama, Á. Gil-Agudo, J. L. Pons, and J. C. Moreno, "Hybrid FES-robot cooperative control of ambulatory gait rehabilitation exoskeleton," J. NeuroEngineering and Rehabilitation, vol. 11, 2014.
[13] K. H. Ha et al., "An approach for the cooperative control of FES with a powered exoskeleton during level walking for persons with paraplegia," IEEE Trans. on Neural Sys. and Rehab. Eng., vol. 24, 2016.
[14] R. G. Krishnan, U. Shalit, and D. Sontag, "Structured inference networks for nonlinear state space models," in AAAI, 2017.
[15] I. Osband, J. Aslanides, and A. Cassirer, "Randomized prior functions for deep reinforcement learning," in NIPS, 2018.
[16] T. Haarnoja et al., "Soft actor-critic algorithms and applications," arXiv:1812.05905v2 [cs.LG], 2019.
[17] A. D.
Ialongo et al., "Overcoming mean-field approximations in recurrent Gaussian process models," in 36th ICML, 2019.

[Figure residue retained as a placeholder. Fig. 2 panels show the learned vs. true Kink functions of the original and ensemble GSSMs; the OpenSim arm model with the Deltoid Posterior, Pectoralis Major, Brachialis, and Triceps Medial muscles on an arm support and table; and training curves for the observable- and unobservable-fatigue cases over training episodes. Fig. 3 panel titles report RMSE 7.02° (RL-ideal, observable fatigue), 8.05° (RL-vanilla, unobservable fatigue), and 6.84° (RL-GSSM), with angle and stimulation/%maximum-force traces for the Biceps, Triceps, Pect. Maj., and Deltoid Post. over 0-60 s.]
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content='wannawas18@imperial.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content='ac.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content='uk A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' Aldo Faisal Dept.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' of Bioengineering & Dept.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' of Computing, Imperial College London, London, UK Chair of Digital Health & Data Science, University of Bayreuth Bayreuth, Germany aldo.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content='faisal@imperial.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content='ac.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content='uk Abstract—Reaching disability limits an individual’s ability in performing daily tasks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' Surface Functional Electrical Stimulation (FES) offers a non-invasive solution to restore lost ability.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' However, inducing desired movements using FES is still an open engineering problem.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' This problem is accentuated by the complexities of human arms’ neuromechanics and the variations across individuals.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' Reinforcement Learning (RL) emerges as a promising approach to govern customised control rules for different settings.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' Yet, one remaining challenge of controlling FES systems for RL is unobservable muscle fatigue that progressively changes as an unknown function of the stimulation, thereby breaking the Markovian assumption of RL.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' In this work, we present a method to address the unobservable muscle fatigue issue, allowing our RL controller to achieve higher control performances.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' Our method is based on a Gaussian State- Space Model (GSSM) that utilizes recurrent neural networks to learn Markovian state-spaces from partial observations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' The GSSM is used as a filter that converts the observations into the state-space representation for RL to preserve the Markovian assumption.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' Here, we start with presenting the modification of the original GSSM to address an overconfident issue.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' We then present the interaction between RL and the modified GSSM, followed by the setup for FES control learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' We test our RL- GSSM system on a planar reaching setting in simulation using a detailed neuromechanical model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' The results show that the GSSM can help improve the RL’s control performance to the comparable level of the ideal case that the fatigue is observable.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' Index Terms—Functional Electrical Stimulation, FES, Gaus- sian State-Space Model, Reinforcement Learning, Arm Motions I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' INTRODUCTION Yearly, strokes and spinal cord injuries have left individuals around the world with paralysis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' Upper body paralysis, one of the most commonly found following incidents, causes the dysfunction of arm movements and severely affect the individ- uals’ abilities in performing daily tasks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' Functional Electrical Stimulation (FES), a technique that uses electrical signals to induce muscle contraction, offers a solution for restoring the movements.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' Yet, controlling FES to induce desired movements We acknowledge funding from the Royal Thai Government Scholarship to NW and a UKRI Turing AI Fellowship to AAF.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' is challenging.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' One challenge is that each individual requires customised stimulation to induce a certain movement.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' This causes difficulties in designing a control method that works across different individuals without intensive, manual config- urations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' Another challenge is that the muscle’s responses to the FES change over time because of muscular fatigue.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' Since the fatigue level can not be directly measured, it is difficult for a controller to maintain its performance over extended periods.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' Several methods that can automatically find customised stimulation have been investigated.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' One of those is Reinforce- ment Learning (RL), a machine learning algorithm with a learning agent (RL agent) that learns to control an environment through interaction.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' The successes of RL in controlling body movements have been presented in several scenarios: cycling [1], walking [2], arm movements [3]–[7].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' Additionally, [7] shows that RL can deal with fatigue to a certain degree;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' yet, the performance drop is still inevitable in many cases.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' Different approaches have been employed to deal with muscular fatigue.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' A widely used approach is to record elec- tromyogram (EMG) or mechanomyogram (MMG) from which muscle force can be estimated [8]–[11].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' Although this ap- proach could be straightforward, the successes are, currently, limited to a few types of movements such as knee extension [10], [11] and cycling [8], [9].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' Additionally, it requires sensors which can be difficult to set up.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' Approaches that exploit the periodic nature of the movements such as walking are used in [12], [13].' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' However, these may not be suitable to be used in controlling arbitrary arm movements.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' Our previous work [6] explores an approach that does not use dedicated sensors and can be applied to arbitrary movements.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' The approach uses a recurrent neural network (RNN) to encode the history of observations and provide additional information to the RL agent.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' This strategy can control arbitrary single- joint movements in the real world.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' However, its capability in multiple-joint cases is limited.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' In this work, we present an AI-based system for con- trolling FES that can induce arbitrary desired movements and can maintain performance under progressive muscular fatigue.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' Our system uses the combination of an RNN-based arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content='04005v1 [eess.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content='SY] 10 Jan 2023 Gaussian state-space model (GSSM) that learns Markovian state-representations and RL that learns the control policies on the representation spaces.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' In simple terms, the GSSM here functions as a filter that provides insight information of the systems’ states to the RL agents, allowing the agents to select better actions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' Compared to our previous work [6], this system is more powerful and capable of leaning the complex dynamics of multiple-joint movements.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' Additionally, it produces probabilistic transition functions that can be useful, for example, for model-based RL.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' We present the details of our RL-GSSM system and the setup for controlling arbitrary movements in the Methods section.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' We also provide the modification of the original GSSM [14] to address an overconfident issue.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' We demonstrate our system in a planar arbitrary reaching setting using a detailed neuromechanical model and show that our system can achieve and maintain the control performance at the same level as the ideal case in which muscle fatigue is observable.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' II.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' METHODS A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' Gaussian State-Space Models (GSSM) Here, GSSM functions as a filter that converts an observable environment’s state vector (ot) into a state-representation vec- tor (xt) which contains the information of the system’s hidden states.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' Our GSSM is based on [14] whose main components are an RNN-based filter (fF ilter) and a transition function (fT ran).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' The filter converts ot into xt through a process described as follows.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' The process starts at the zeroth time step (t = 0) with the initialisation of the RNN’s hidden states (h0) and state representations (x0).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' x0 is then concatenated with the initial action vector ainit and is passed through Ws, a small multilayer perceptron (MLP).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' This step is mathemati- cally expressed as hx,t=0 = Ws([x0;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' a0]T ).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' Meanwhile, the RNN observes the environment’s states o0 and updates its hidden state to ht=1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' hx,t=0 and ht=1 are then combined as hc,t=1 = 1 2 tanh(hx,t=0 + ht=1).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' Next, hc,t=1 is passed through Wx which is an MLP that outputs the distribution of xt.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' The following time steps repeat this process but start with the sampled xt and actual actions at.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' The trajectory of xt, denoted as x0:T , is obtained by repeating this process through the whole trajectory of observations o0:T .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' For future notation, RNN, Wh, and Wx are referred collectively as fF ilter.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' The GSSM is trained using the trajectory of observations (o0:T ) as follows.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' The training process starts with using fF ilter to sample x0:T corresponding to o0:T .' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' Next, we reconstruct the observations by passing the sampled x0:T through the obser- vation mapping function Wg, expressed as k0:T = Wg(x0:T ).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' The parameters of fF ilter are optimised through gradient descent to minimise the following loss functions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' The first loss function is the likelihood between k0:T and o0:T , expressed as llik = �T t=1 p(ot|µk,t, Σk,t), where µk,t and Σk,t are the mean and covariance of the reconstructed observations, respectively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' The second loss function is the KL divergence between the x0:T distribution sampled by fF ilter and those predicted by fT ran, expressed as lDKL = T � t=2 DKL[fF ilter(xt−1, o0:t)||fT ran(xt−1)].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' Intuitively, this loss function encourages the filter-generated distribution of xt, pf(xt), to have a Markovian structure, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content='e, pf(xt|xt−1, o0:t) = p(xt|xt−1).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' Note that the observation history o0:t−1 is encoded in the RNN’s hidden states.' 
In the original model [14], fTran is represented by a neural network that directly outputs the means and variances of xt. This network produces overconfidence in the learned transition function. To mitigate this issue, we replace that network with an ensemble of neural networks with randomised prior functions (RP-Ensemble) [15]. The predictive means and variances are computed by fitting Gaussian distributions to the ensemble's outputs.

B. Generic RL-GSSM for controlling arbitrary movements

Reinforcement Learning (RL) learns a task through reward signals collected from interactions with an environment. The interactions occur in a discrete-time fashion, starting with the agent observing the environment's state st and selecting an action at based on its policy π. The action causes the environment to be in a new state st+1.
The agent then receives an immediate reward rt and observes the new state. This interaction experience is collected as a tuple (st, at, rt, st+1), which is stored in a replay buffer D. This tuple is used to learn an optimal policy π∗ that maximises the return R, the sum of discounted immediate rewards. The introduction of the GSSM into the system causes few changes in the typical RL learning process. To avoid confusing notation, we hereafter use st to denote RL state vectors. Fig. 1 shows the overview diagram of our RL-GSSM system. The system has two phases, an interaction phase and an updating phase, described as follows. At each time step in the interaction phase, fFilter observes ot, updates the RNN's hidden states, and generates the state-representation xt.
The agent then selects an action at based on st = [ot; xt; ct]T, where ct is the control target at time t. The action affects the environment, the system moves into the next time step, and the process repeats. The interactions are stored as ([ot; ct]T, at, rt, [ot+1; ct+1]T) in a Trajectory Buffer. The updating phase begins with drawing sampled trajectories (õ0:T) from the Trajectory Buffer and using them to update the GSSM. After that, the updated fFilter is used to generate new trajectories of st corresponding to õ0:T.
The new st trajectories are then converted into new RL experience tuples stored in a typical Replay Buffer, and the RL agent is updated following a typical method.

C. RL-GSSM setup for controlling planar movements

The environment here is a neuromechanical model built in OpenSim. The model has a human arm placed on an arm support that moves with low friction on a table (Fig. 2b). The model has 6 muscles; the 4 muscles labelled in the figure are stimulated. The muscles are fatigued progressively as a function of the stimulation (see [1] for more details). The observable environment states are the angles and angular velocities of the shoulder and elbow (ot = [θs,t; θe,t; θ̇s,t; θ̇e,t]T). The RL algorithm of choice is soft actor-critic [16]. Both the actor and the critic are parameterised by fully-connected neural networks with two hidden layers. The actor's output layer has a sigmoid activation function to squash the outputs within [0, 1].

Fig. 1. (a) Diagram showing the overview of our RL-GSSM system. The dashed blue line splits RL and GSSM. The GSSM's parts in yellow boxes are excluded during the interaction phase. This phase starts with the initialisation (on the left) and evolves as follows. At time step t, the previous action at−1 is appended to the state-representation of the previous time step, xt−1. The Filter then combines the appended vector with the incoming observation ot and samples the state-representation of the current time step, xt. The average of xt, denoted as x̄t, is concatenated with ot and a control target ct to become the RL state vector st. The interaction data are stored in the Trajectory Buffer. (b) Diagram showing the overview of the training phase, which begins with sampling the stored trajectories and updating the GSSM. The updated Filter is then used to generate new RL experience tuples, which are used to update the RL agent.
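The actor described above can be sketched as a plain two-hidden-layer network with a sigmoid output. Note that SAC's actor is stochastic (it outputs a distribution over actions); this deterministic sketch, with assumed layer widths and tanh hidden activations, only illustrates the architecture and the [0, 1] squashing.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class Actor:
    """Fully-connected actor with two hidden layers and a sigmoid output layer,
    matching the setup in the text. Layer widths and tanh hidden activations
    are illustrative assumptions; the paper does not report them."""

    def __init__(self, state_dim, action_dim, hidden=64, seed=0):
        rng = np.random.default_rng(seed)
        dims = [state_dim, hidden, hidden, action_dim]
        self.weights = [rng.normal(scale=0.1, size=(dims[i + 1], dims[i]))
                        for i in range(3)]

    def __call__(self, s):
        h = s
        for W in self.weights[:-1]:
            h = np.tanh(W @ h)                 # two hidden layers
        return sigmoid(self.weights[-1] @ h)   # stimulation intensities in [0, 1]

actor = Actor(state_dim=8, action_dim=4)
# With a zero state vector every pre-activation is zero, so each output is
# sigmoid(0) = 0.5.
a = actor(np.zeros(8))
```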
The RL task here is to apply muscle stimulation to move the arm to desired poses, which are specified by target shoulder and elbow joint angles (θtar,t). The state vector st is [ot; xt; θtar,t]T. The action vector at comprises the normalised stimulation intensities (i ∈ [0, 1]) of the stimulated muscles. The immediate reward rt is simply computed using the squared error and an action penalty as rt = −(θt − θtar,t)^2 − (Σ_i ai)/n, where n is the number of stimulated muscles. The training is episodic. Each episode has 100 time steps with a 100 ms time step size. The episodes begin at random poses, targets, and fatigue levels.
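The reward above can be computed directly; in this sketch the squared angle error is summed over the shoulder and elbow joints, an assumption, since the text writes the error for a single θt.

```python
import numpy as np

def reward(theta, theta_target, actions):
    """Immediate reward from the text: negative squared angle error minus the
    mean stimulation intensity (action penalty)."""
    n = len(actions)
    return -float(np.sum((theta - theta_target) ** 2)) - float(np.sum(actions)) / n

# Perfect tracking with zero stimulation yields zero reward: r == 0.0.
theta = np.array([45.0, 45.0])
r = reward(theta, theta, np.zeros(4))
```

Any tracking error or stimulation makes the reward strictly negative, so the agent trades tracking accuracy against stimulation effort (and hence fatigue).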
A new random target is assigned at the 50th time step. Every 5 training episodes, the control performance is evaluated as an RMSE measure on 50 test episodes with the same settings as the training episodes.

III. RESULTS

A. Ensemble transition function

We replace fTran of the original model [14], denoted as fTr,Ori, with the RP-Ensemble, denoted as fTr,Ens, to address the overconfidence issue. We test both models on a benchmarking function, Kink [17]. Fig. 2a shows the learned transitions. Both fTr,Ori and fTr,Ens produce good predictive means.
However, fTr,Ori is overconfident, as shown by low predictive variances at locations where the data, represented by x marks, are absent. In contrast, fTr,Ens has higher predictive variances at those locations.

B. Controlling planar arm movements

We train our RL-GSSM to control planar arm movements under progressive muscular fatigue through muscle stimulation. We explore 3 cases: 1) the RL-ideal and 2) the RL-vanilla cases, where the fatigue is observable and unobservable, respectively; and 3) the RL-GSSM case. The RL agents are trained for 100 episodes in all cases; the training is repeated 10 times.
Fig. 2c shows the performance evaluations in the RMSE measure along the training. RL-vanilla's performance has the steepest improvement at the beginning but stagnates at the worst level. RL-GSSM's curve, compared to RL-ideal's, has higher standard deviations in the early period because the agents have to simultaneously learn the controls and follow the not-yet-converged GSSM. RL-GSSM's performance improves slightly more slowly but reaches the same level within 100 episodes. Fig. 3 shows the control behaviours in tracking an arbitrary trajectory. The agents produce good tracking in all cases. The grey circles highlight good comparison points. Both RL-ideal (Fig. 3a) and RL-GSSM (Fig. 3c) can bring the shoulder and elbow to the [45°, 45°] targets whenever requested. RL-vanilla, however, tends to lose its performance in the second half as the actual angles increasingly deviate from the targets (Fig. 3b). Figs. 3d-f show the stimulation (solid lines) and the % maximum force that the muscles can produce (dashed lines). The % maximum force decreases over time as the stimulation induces muscular fatigue. Compared to RL-ideal (Fig. 3d), RL-vanilla (Fig. 3e) over-stimulates and causes rapid declines in the muscle forces. The declines in the RL-GSSM and RL-ideal cases are at the same rate on average. RL-GSSM's stimulation has small noises along the session.

Fig. 2. (a) The learnt kink function of the (left) original GSSM and (right) the GSSM with the RP-Ensemble transition function. (b) Neuromechanical model of planar arm movement built in OpenSim. (c) The control performances evaluated along the training. The shades show the standard deviations of 10 runs.

Fig. 3. Control behaviours in tracking an arbitrary target trajectory. (a-c) Plots showing the targets (dashed) and the actual angles (solid) achieved in the (a) RL-ideal, (b) RL-vanilla, and (c) RL-GSSM cases. (d-f) The % maximum stimulation that the RL agents apply to the muscles (solid) and the % maximum forces that the muscles can produce (dashed). The % maximum forces decrease in response to the muscular fatigue induced by the stimulation.

IV. CONCLUSIONS

We present an AI-based approach for controlling FES under progressive muscular fatigue. Our RL-GSSM approach uses RL to learn the control policies and a GSSM, modified to address the overconfidence issue, to provide Markovian state-representations to the RL. We demonstrate our approach by controlling arbitrary planar arm movements using a detailed neuromechanical model.
We show that our RL-GSSM can achieve and maintain its control performance at the same level as the ideal case where the fatigue is observable.

REFERENCES

[1] N. Wannawas, M. Subramanian, and A. A. Faisal, "Neuromechanics-based deep reinforcement learning of neurostimulation control in fes cycling," in Intl. IEEE/EMBS Conf. on Neural Engineering (NER), 2021.
[2] A. Anand et al., "A deep reinforcement learning based approach towards generating human walking behavior with a neuromuscular model," in 19th Intl. Conf. on Humanoid Robots, 2019.
[3] P. Thomas et al., "Creating a reinforcement learning controller for functional electrical stimulation of a human arm," in 14th Yale Workshop on Adaptive and Learning Systems, 2008.
[4] K. M. Jagodnik et al., "Human-like rewards to train a reinforcement learning controller for planar arm movement," IEEE Trans. on Human-Machine Systems, vol. 46, pp. 723–733, 10 2016.
[5] D. N. Wolf, Z. A. Hall, and E. M. Schearer, "Model learning for control of a paralyzed human arm with functional electrical stimulation," in IEEE Intl. Conf. on Robotics and Automation (ICRA), 2020, p. 10148.
[6] N. Wannawas, A. Shafti, and A. A. Faisal, "Neuromuscular reinforcement learning to actuate human limbs through fes," in IFESS22, 2022.
[7] J. Abreu et al., "Deep reinforcement learning for control of time-varying musculoskeletal systems with high fatigability: a feasibility study," in IEEE Trans. Neural Sys. and Rehab. Eng., 2022.
[8] B. Woods, M. Subramanian, A. Shafti, and A. A. Faisal, "Mechanomyography based closed-loop functional electrical stimulation cycling system," in 7th IEEE Intl. Conf. on Biomed. Robotics and Biomechatronics, vol. 2018-August. IEEE, 8 2018, pp. 179–184.
[9] M. Islam et al., "Mechanomyography responses characterize altered muscle function during electrical stimulation-evoked cycling in individuals with spinal cord injury," Clinical Biomechanics, vol. 58, 2018.
[10] J. Naeem et al., "Electrical stimulator with mechanomyography-based real-time monitoring, muscle fatigue detection, and safety shut-off: A pilot study," Biomedizinische Technik, vol. 65, 2020.
[11] E. Krueger et al., "Neuromuscular fatigue detection by mechanomyography in people with complete spinal cord injury," Research on Biomedical Engineering, vol. 36, pp. 203–212, 2020.
[12] A. J. Del-Ama, Ángel Gil-Agudo, J. L. Pons, and J. C. Moreno, "Hybrid fes-robot cooperative control of ambulatory gait rehabilitation exoskeleton," J. NeuroEngineering and Rehabilitation, vol. 11, 2014.
[13] K. H. Ha et al.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=', “An approach for the cooperative control of fes with a powered exoskeleton during level walking for persons with paraplegia,” IEEE Trans on Neural Sys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' and Rehab.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' Eng.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=', vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' 24, 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' [14] R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' Krishnan, U.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' Shalit, and D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' Sontag, “Structured inference networks for nonlinear state space models,” in AAAI, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' [15] I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' Osband, J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' Aslanides, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' Cassirer, “Randomized prior functions for deep reinforcement learning,” in NIPS, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' [16] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' Haarnoja et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=', “Soft actor-critic algorithms and applications,” arXiv:1812.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content='05905v2 [cs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content='LG], 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' [17] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' Ialongo et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=', “Overcoming mean-field approximations in recur- rent gaussian process models,” in 36th ICML, 2019.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' 30 - Original Ensemble Obs-Fatigue Not-Obs-Fatigue 25 GSSM Deltoid 1 Posterior Pectoralis major C E 20 + ×-2 Brachialis 3 Table 4 Triceps 10 True function True function Medial 5 Arm Learned function Learned function Support 6 4 2 0 4 2 0 5 6 6 20 30 40 50 60 70 80 90 100 Xt 1 Xt-1 a Training Episode cRL-ideal (observablefatigue) RMSE: 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content='02 ° RL-vanilla (unobservable fatigue) RMSE: 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content='05 RL-GSSM RMSE: 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content='84 100 b c 80 60 Angle [ 40 20 Shoulder Elbow Shoulder Elbow Shoulder Elbow 0 Biceps Triceps Pect.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' Maj.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' Deltoid Post.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' = Biceps Triceps Pect.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' Maj.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' Deltoid Post.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE2T4oBgHgl3EQfnAjp/content/2301.04005v1.pdf'} +page_content=' Biceps Triceps Pect.' 
diff --git a/BNE4T4oBgHgl3EQfFAx2/content/tmp_files/2301.04882v1.pdf.txt b/BNE4T4oBgHgl3EQfFAx2/content/tmp_files/2301.04882v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..a2b41c3a8221e2124b96f83dfeb32c7031a586d0
--- /dev/null
+++ b/BNE4T4oBgHgl3EQfFAx2/content/tmp_files/2301.04882v1.pdf.txt
@@ -0,0 +1,2112 @@
+ZScribbleSeg: Zen and the Art of Scribble Supervised Medical Image Segmentation
+Ke Zhang and Xiahai Zhuang ⋆
+School of Data Science, Fudan University, Shanghai
+zxh@fudan.edu.cn
+Abstract. Curating a large-scale, fully annotated dataset can be both labour-intensive and expertise-demanding, especially for medical images. To alleviate this problem, we propose to use solely scribble annotations for weakly supervised segmentation. Existing solutions mainly leverage selective losses, computed only on annotated areas, and generate pseudo gold-standard segmentations by propagating labels to adjacent areas. However, these methods can suffer from inaccurate and sometimes unrealistic pseudo segmentations due to insufficient supervision and incomplete shape features.
+Different from previous efforts, we first investigate the principles of "good scribble annotations", which lead to efficient scribble forms via supervision maximization and randomness simulation. Furthermore, we introduce regularization terms to encode the spatial relationship and shape prior, where a new formulation is developed to estimate the mixture ratios of the label classes. These ratios are critical for identifying the unlabeled pixels of each class and correcting erroneous predictions; their accurate estimation thus lays the foundation for incorporating the spatial prior. Finally, we integrate the efficient scribble supervision with the priors into a unified framework, denoted as ZScribbleSeg, and apply the method to multiple scenarios. Leveraging only scribble annotations, ZScribbleSeg sets new state of the art on four segmentation tasks using the ACDC, MSCMRseg, MyoPS and PPSS datasets.
+Keywords: Medical Image Segmentation · Scribble Supervision · Mixture Model · Medical Image Analysis
+In recent years, deep neural networks have demonstrated their potential on various visual tasks [25]. However, the success of these methods relies on massive annotations, which require assiduous manual effort. For medical imaging, dense manual labeling can take experienced doctors several hours for just one image, which is both expensive and expertise-demanding [60]. Many efforts have contributed to the area of training segmentation networks with weaker annotations [39], including scribbles [27], bounding boxes [34], points [2], and image-level labels [35].
+⋆ Xiahai Zhuang is corresponding author. This work was funded by the National Natural Science Foundation of China (Grant No. 61971142 and 62111530195).
+arXiv:2301.04882v1 [cs.CV] 12 Jan 2023
+2
+K Zhang & X Zhuang
+Numerous studies have been reported utilizing only image-level labels [15,46,50,45].
+These methods mainly rely on large-scale training datasets, and tend to underperform on small medical image datasets. On the contrary, scribbles are suitable for labeling nested structures and are easy to obtain in practice. Several works have demonstrated their potential on both semantic and medical image segmentation [17,21,27]. Therefore, we propose to investigate this specific form of weakly supervised segmentation, which uses only scribble annotations for model training.
+Conventionally, scribble annotations mainly focus on delineating the structures of interest [42]. This can be effective in segmenting regular structures, i.e., targets with fixed shape patterns; hence, this task is also referred to as regular structure segmentation. However, such methods can be challenged when applied to portraying irregular targets with heterogeneous distributions, such as pathologies. This is referred to as irregular (object) segmentation, which is particularly challenging for medical tasks with small training datasets. Existing scribble learning approaches mainly aim to reconstruct complete labels from the scribbles, and use the generated pseudo labels for model training. These works include 1) label expansion strategies, which assume that pixels with similar features are likely to belong to the same category [16,27], and 2) ensemble methods, which generate labels by fusing several independent predictions [29]. These methods can be susceptible to the label noise introduced by imprecise segmentation proposals. To overcome this issue, Obukhov et al. proposed a regularization loss [32], which exploits the similarity between labeled and unlabeled areas. Adversarial learning has also been applied to scribble supervised segmentation [42], leveraging the shape prior provided by additional full annotations.
+Scribble supervised segmentation generally suffers from inadequate supervision and imbalanced label classes.
+This leads to poor results, typically under-segmentation of the target structures, meaning that the volumes of the segmented structures tend to be shrunk, as we shall describe in Section 2.3. To address the problem of inadequate supervision, we first investigate the principles of generating "good scribbles", as guidance both for designing methodologies to augment supervision and for generating manual annotations. The aim is to model efficient scribbles by maximizing supervision without increasing annotation effort. Our studies demonstrate that model training benefits from the randomness of widely distributed scribbles and from a larger proportion of annotated areas. Inspired by this, we propose to simulate such scribble-annotated images as a means of supervision augmentation. This can be achieved via mixup and occlusion operations on existing training images, and the supervision augmentation is coupled with regularization terms penalizing any inconsistency in the segmentation results.
+ZScribbleSeg
+3
+Despite the lack of supervision, scribble annotations typically have imbalanced annotated label proportions and thus biased shape information. This means the model cannot accurately capture the global shape of the target structures. We therefore further propose to correct the problematic predictions using prior-based regularization, particularly from the spatial prior. This requires the preceding yet critical step of estimating the mixture proportion (ratio) of each label class (referred to as the π prior). We hence propose a new algorithm to compute this π prior, based on which we develop a spatial loss built on the marginal probabilities of pixels belonging to certain label classes and on spatial energy. This spatial loss is a regularization term aiming to correct the shape of the segmentation results.
+The supervision augmentation and the prior-based regularization work in a complementary way, and both contribute to stable and robust training on a variety of segmentation tasks.
+The proposed scribble-supervision-based segmentation method, referred to as ZScribbleSeg, extends and generalizes the algorithms in our two preliminary works [52,53], and has further scientific significance in the following aspects. Firstly, we investigate principles of efficient scribble forms to guide the supervision augmentation, which, to the best of our knowledge, have never been reported. Secondly, we leverage the spatial prior to adjust the predicted probabilities with computed spatial energy. Thirdly, we implement a series of extensive experiments on various scenarios, including irregular structure segmentation of medical pathology and visual object segmentation. The contributions of this paper are summarized as follows.
+– We propose a unified framework for scribble-supervised segmentation by modeling efficient scribbles and correcting the network predictions with prior regularization, which significantly alleviates the problems of inadequate supervision and imbalanced label classes.
+– To the best of our knowledge, this is the first work investigating the principles of scribble forms. Motivated by the conclusion that the network benefits from larger and randomly distributed annotations, we model efficient scribbles by maximizing supervision and simulating randomness.
+– We propose a novel mechanism to correct the shape of the model prediction based on prior regularization, including the π prior, the spatial prior, and the shape prior. A new algorithm is introduced to estimate the π prior, based on which we further encode spatial relationships with the spatial prior loss.
+– Our approach achieves state-of-the-art performance for weakly supervised segmentation on regular structures from cardiac anatomical imaging, regular structures from pathology-enhanced imaging, irregular objects of medical pathology, and human pose from natural scenes.
+The rest of this paper is organized as follows: Section 1 briefly reviews relevant research. In Section 2, we describe the modeling of efficient scribbles and the computation of priors. Section 3 presents the results of the efficiency, ablation, and validation studies. Finally, we conclude this work in Section 4.
+1 Related work
+This section provides a brief review of weakly supervised segmentation methods. Besides, we describe the data augmentation strategies and regularization loss functions that are closely related to our work.
+Fig. 1. Roadmap of the proposed ZScribbleSeg framework.
+1.1 Weakly supervised segmentation
+Recently, a variety of weakly supervised segmentation strategies have been developed to reduce manual annotation effort [27,2,34,35]. Among them, scribbles are of particular interest for medical image annotation, given their advantage in annotating nested structures compared to bounding boxes. Current weakly supervised learning methods with image-level annotations mainly generate label seeds with a Class Activation Map (CAM) [56] first, and then train the network with refined pseudo labels. However, the training of a CAM requires large-scale training data labeled with rich visual classes, which is not practical in clinical applications. Therefore, we propose to investigate scribble supervised segmentation, due to its efficiency and effectiveness in both medical and visual scenarios.
+Scribble is a form of sparse annotation that provides labels for a small subset of pixels in an image [39]. Previous approaches mainly calculate losses for annotated pixels.
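+A selective loss of this kind — cross-entropy accumulated only over the scribble-annotated pixels — can be sketched as follows. This is a minimal NumPy illustration for orientation, not the implementation of any cited work; the convention of marking unlabeled pixels with `ignore_index = -1` is our assumption:

```python
import numpy as np

def partial_cross_entropy(probs, scribble, ignore_index=-1, eps=1e-12):
    """Cross-entropy averaged only over scribble-annotated pixels.

    probs:    (H, W, C) softmax probabilities from the network.
    scribble: (H, W) integer labels; unlabeled pixels carry ignore_index.
    """
    labeled = scribble != ignore_index
    if not labeled.any():
        return 0.0                          # no supervision in this image
    y = scribble[labeled]                   # (N,) labels of annotated pixels
    p = probs[labeled]                      # (N, C) predictions at those pixels
    p_true = p[np.arange(y.size), y]        # predicted prob. of the true class
    return float(-np.mean(np.log(p_true + eps)))

# toy example: 2x2 image, 2 classes, only the top-left pixel is annotated
probs = np.array([[[0.9, 0.1], [0.5, 0.5]],
                  [[0.2, 0.8], [0.7, 0.3]]])
scribble = np.array([[0, -1],
                     [-1, -1]])
loss = partial_cross_entropy(probs, scribble)   # = -log(0.9)
```

+Gradients then flow only through the annotated positions, which is exactly why a small annotated proportion yields weak supervision.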
+One group of works is designed to expand the annotations and reconstruct full labels for network training. However, the expansion of labels must be achieved through iterative computation, which is particularly time-consuming. To alleviate this, several works removed the relabeling process and instead adopted conditional random fields to refine the segmentation results [9,7,55,40]. However, the common issue is unstable model training caused by noisy pseudo labels.
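+As a concrete toy illustration of such label expansion, the sketch below propagates scribble labels to 4-neighbours with similar intensity. This is our own simplified example; the cited methods use learned features and iterative optimization rather than raw intensity thresholds:

```python
import numpy as np

def expand_scribbles(image, scribble, tol=0.1, ignore_index=-1, n_iter=1):
    """Propagate scribble labels to 4-neighbours with similar intensity.

    A pixel inherits a neighbour's label when it is unlabeled and the
    intensity difference is below `tol` (toy stand-in for feature similarity).
    """
    out = scribble.copy()
    h, w = image.shape
    for _ in range(n_iter):
        prev = out.copy()
        for i in range(h):
            for j in range(w):
                if out[i, j] != ignore_index:
                    continue
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if (0 <= ni < h and 0 <= nj < w
                            and prev[ni, nj] != ignore_index
                            and abs(image[i, j] - image[ni, nj]) < tol):
                        out[i, j] = prev[ni, nj]
                        break
        if np.array_equal(prev, out):       # converged: nothing expanded
            break
    return out

image = np.array([[0.0, 0.02, 0.9],
                  [0.01, 0.03, 0.95]])
scribble = np.array([[1, -1, -1],
                     [-1, -1, 0]])          # two scribbled seeds
expanded = expand_scribbles(image, scribble, tol=0.1, n_iter=2)
```

+Even this toy version shows the failure mode discussed above: any expansion error becomes a noisy pseudo label that the network is then trained on.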
Input Mixup [51] was introduced to perform +linear interpolation between two images and their labels. Manifold Mixup [43] +applied the Mixup operation to feature space. Cutout [10] randomly occluded +a square region of image, and CutMix [49] transplanted the occluded area to +another image. Kim et al. [18] proposed Puzzle Mix to leverage the saliency +and local statistics to facilitate image combination. Comixup [19] extended this +concept from two images to multiple images. +For medical image analysis, Mixup methods have been adopted for image +segmentation [8] and object detection tasks [44]. Although mixup operation may +generate unrealistic samples, mixed soft labels can provide rich information and +improve the model performance on semi-supervised segmentation [8]. +1.3 +Regularization losses +Neural networks are used to perform pixel-wise image segmentation, typically +trained with cross entropy or Dice loss, which computes loss for each pixel in- +dependently. To predict segmentation coherent in the global sense [22], several +methods are proposed to regularize the model training. Here, we focus on the +consistency regularization and π prior regularization that most relevant to our +work. +The consistency regularization leverages the fact that the perturbed versions +of the same image patch should have the consistent segmentation. A series of +researches have been conducted on consistency regularization [57,23,41,33]. For +semi-supervised learning, regularization is applied to the augmented versions of +the input image by requiring consistency to obtain stable predictions for unla- +beled images [23,41,33]. + +6 +K Zhang & X Zhuang +Fig. 2. Overview of the training losses for the proposed ZScribbleNet, which consists +of modeling of efficient scribbles and computation of priors. The scribble modeling +includes mixup augmentation, regularized with global consistency (Lglobal). 
The priors +have three, i.e., class mixture ratios (π), spatial prior and shape prior, which contribute +to spatial prior loss (Lspatial) and shape prior loss (Lshape). Note that spatial prior loss +is complementary with the partial cross entropy loss (Lpce) which is solely calculated +for labeled pixels. +The proposed regularization of π prior is inspired from the binary mixture +proportion estimation [3,14,37], which was originally designed for binary (two- +class) positive unlabeled learning [11,12,20]. For multi-class segmentation, the +mixture ratios of classes are both imbalanced and inter-dependent, which cannot +be solved by existing binary estimation methods. +2 +Method +2.1 +Overview +Problem Setup: This work investigates the scenario of scribble supervised seg- +mentation, where the training images are solely annotated with a small number +of pixels, via scribbles, for each label class. +Strategy: Instead of solely focusing on techniques of weak supervision, we first +investigate different forms of scribbles to derive principles of efficient scribbles, +i.e., maximal supervision without increasing scribble efforts. These principles +enable effective and robust model training with minimal annotation cost. Then, +we focus on tackling the major problem of under segmentation, to correct model +prediction with prior. +Solution: We develop ZScribbleSeg consisting of (1) modeling efficient scrib- +bles via supervision maximization and randomness simulation. (2) modeling + +Priors +spatial (Eq.29) +shape +L +(Eq.30) +π (3.3.2) +Spatial +Corrected +Image 1 +Seg 1 +Scribble 1 +energy 1 +shape 1 +Ranking +A +Seg of +Mixed +Mixed +Mixed +mixed +Network +Seg +scribble +Image +image +π (3.3.2) +Spatial +Corrected +Image 2 +Seg 2 +Scribble 2 +energy 2 +shape 2 +Lshape +Ranking +(Eq.30) +4- +Priors +Spatial energy Correction Scribble +Mix +Image +SegZScribbleSeg +7 +and computation of prior, including label class proportion prior, spatial prior +and shape prior. 
(3) integration to develop deep neural network (referred to as +ZScribbleNet) having losses of partial cross entropy (Lpce), global consistency +(Lglobal), spatial prior loss (Lspatial), shape regularization (Lshape) and training +strategy of supervision augmentation and prior regularization. Figure 1 presents +the roadmap of the proposed framework. +2.2 +Principle and modeling of efficient scribbles +We investigate the principles of efficient scribbles and derive the objective of +maximizing supervision with minimal annotation efforts. This leads to the pro- +posal of supervision augmentation. In addition, we propose a global consistency +loss to penalize the non-equivalence in the augmentation. +Principles of efficient scribbles We shall verify the two principles of achiev- +ing efficient scribble annotation in terms of maximal supervision later through +the experiments in Section 3.2: +(1) The large proportion of pixels annotated by scribbles compared with the +whole set. +(2) The randomness of distribution of scribbles. This is represented by the ran- +dom and wide-range annotations. +Firstly, we are motivated by the knowledge that model training benefits from +the finer gradient flow through larger proportion of annotated pixels [39]. There- +fore, we try to increase the annotation proportion with the same effort. One +natural idea is to simply expand the width of scribbles. However, this way only +increases the label amount in local area, and lacks the ability to enlarge anno- +tation range across the entire image. +Secondly, we are inspired by the fact that the imaging data are easier to be +restored from random samples of pixels than from down-sampled low-resolution +images with regular patterns [13]. This was due to the fact that the randomly +and sparsely distributed samples maintain the global structure of the imaging +data, which therefore can be restored with existing low-rank or self-similarity +regularization terms. 
By contrast, the regularly down-sampled low-resolution +images have evidently reduced tensor ranks, compared with the original high- +resolution data, thus lose the global structure information. Motivated by this, we +assume the features of full segmentation (similarly to the global structure infor- +mation) can be portrayed (restored) with sparse scribble annotations randomly +and widely distributed within the entire dataset. With such scribble annotation, +the segmentation network can easily learn the global shape prior. +Based on the observations described above, we propose to model efficient +scribbles by supervision augmentation simulating large annotation proportion +and randomness of scribble distribution. +Modeling via supervision augmentation We aim to generate training im- +ages with efficient scribbles by maximizing the supervision via mixup operations + +8 +K Zhang & X Zhuang +and achieving the randomness via occlusion operations. This resembles data +augmentation, which increases the data diversity and enables robust training. +Search optimal annotation with mixup: Motivated by the principles of ef- +ficient scribble, we first seek the optimal scribble with large annotated ratio, +high supervision, and the unchanged local features. To achieve that, instead of +maximizing the annotations directly, we aim to maximize the saliency of mixed +images, which measures the sensitivity of model to inputs. Given that the an- +notated area tends to be accompanied with high saliency, maximizing saliency +also increases the scribble annotations. +For two image-scribble pairs (X1, Y1), (X2, Y2) of dimension n, we denote +the resulted mixed image-label pair as (X′ +12, Y ′ +12). 
The transportation process is +defined by: +X′ +12 = T(X1, X2) and Y ′ +12 = T(Y1, Y2), +(1) +T(X1, X2) = (1 − β) ⊙ � +1 X1 + β ⊙ � +2 X2, +(2) +where T(X1, X2) represents the transportation process between image X1 and +X2; � +i denotes the transportation matrix of size n×n for image Xi; β means the +mask with value [0, 1] of dimension n; ⊙ is the element-wise multiplication. Then, +we aim to maximize the saliency of transportation result over the parameters +{� +1, � +2, β}: +{� +1, � +2, β} = arg max +� +1,� +2,β +[(1 − β) ⊙ � +1M(X1) + β ⊙ � +2M(X2)], +(3) +where M(X) denotes the saliency map of image X, which is obtained by com- +puting the l2 norm of gradient values. We solve this optimization problem based +on PuzzleMix [18]. To preserve the local statistic features, the optimization ob- +jective also includes the image local smoothness, and the mixing weight prior. +For details of the optimization objective, we refer readers to PuzzleMix [18] and +Appendix A of supplementary materials. +Introduce randomness via occlusion: We propose to simulate randomly +distributed scribbles via occlusion. Specifically, one square area of the mixed +image is randomly dropped and replaced with the background. Since that the +proportion of the background annotated by scribbles tends to be smaller than +that of the foreground classes, the occlusion operation alleviates the imbalance +problem of class mixture ratios within labeled pixels, and further improves the +results of mixture ratio estimation, which will be elaborated in Section 2.3. +We denote the occluded image-label pair as (X′′, Y ′′), which is obtained by: +X′′ +12 = (1 − 1b) ⊙ X′ +12 +(4) +Y ′′ +12 = (1 − 1b) ⊙ Y ′ +12 +(5) +where 1b denotes a rectangular mask of size n × n with value in [0, 1]. The +rectangular mask is randomly rotated to occlude the mixed image, and turns + +ZScribbleSeg +9 +Fig. 3. Illustration of supervision augmentation and global consistency. 
Supervision maximization is achieved with the mix augmentation to increase the annotated proportion and data variety. Global consistency requires the segmentation results of the mixed image and the unmixed images to be consistent.

the occluded area into background. Following [49], we set the size of the rectangle to be 32 × 32.

Global consistency loss: The objective of the global consistency regularization is to leverage the mix-invariant property. As Figure 3 shows, global consistency requires the same image patch to have consistent segmentations in two scenarios, i.e., the unmixed image and the mixed image. Let the segmentation result of image $X$ predicted by the network be $\hat{Y} = f(X)$. For the transported image $X'_{12} = T(X_1, X_2)$, the consistency of mixup is formulated as:

$T(f(X_1), f(X_2)) = f(T(X_1, X_2)),$   (6)

which requires the segmentation of the mixed image to be consistent with the mixed segmentation, after the same transportation process. When applying the occlusion operation, we further have:

$(1 - \mathbb{1}_b) \odot T(\hat{Y}_1, \hat{Y}_2) = f\left((1 - \mathbb{1}_b) \odot T(X_1, X_2)\right).$   (7)

Then, we propose to minimize the distance between the two sides of Eq.(7). Let $u_{12} = (1 - \mathbb{1}_b) \odot T(\hat{Y}_1, \hat{Y}_2)$ and $v_{12} = f\left((1 - \mathbb{1}_b) \odot T(X_1, X_2)\right)$. The negative cosine similarity $L_n(u_{12}, v_{12})$ is defined as:

$L_n(u_{12}, v_{12}) = -\dfrac{u_{12} \cdot v_{12}}{\|u_{12}\|_2 \cdot \|v_{12}\|_2}.$   (8)

Taking the symmetrical metric into consideration, we similarly penalize the inconsistency between $u_{21}$ and $v_{21}$. Therefore, the global consistency loss is formulated as:

$L_{global} = \frac{1}{2}\left[L_n(u_{12}, v_{12}) + L_n(u_{21}, v_{21})\right].$   (9)

Fig. 4. Illustration of spatial prior loss ($L_{spatial}$) for correction of prediction, via class mixture ratios ($\pi$) and spatial prior (with spatial energy).
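A minimal NumPy sketch of the occlusion step (Eqs. 4-5) and the global consistency loss (Eqs. 8-9). The array names are hypothetical, and a plain convex combination weighted by $\beta$ stands in for the PuzzleMix transport $T$; this is a simplifying assumption, not the paper's implementation.

```python
import numpy as np

def occlude(x, y, size=32, rng=None):
    """Drop a random size x size square from an image-label pair and turn
    it into background (Eqs. 4-5); x: (H, W), y: (K, H, W) one-hot."""
    rng = rng or np.random.default_rng()
    h, w = x.shape
    top = int(rng.integers(0, h - size + 1))
    left = int(rng.integers(0, w - size + 1))
    mask = np.ones((h, w))                      # plays the role of (1 - 1_b)
    mask[top:top + size, left:left + size] = 0.0
    return x * mask, y * mask, mask

def neg_cos(u, v, eps=1e-8):
    # Eq. (8): negative cosine similarity of flattened predictions
    u, v = u.ravel(), v.ravel()
    return -float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + eps))

def global_consistency_loss(seg1, seg2, seg_mix12, seg_mix21, beta, mask):
    """Eq. (9). seg1/seg2: f(X1), f(X2); seg_mix12: f of the occluded mix
    of (X1, X2), seg_mix21: the reverse mixing order. A beta-weighted
    convex combination approximates the transport T (an assumption)."""
    u12 = mask * ((1 - beta) * seg1 + beta * seg2)
    u21 = mask * ((1 - beta) * seg2 + beta * seg1)
    return 0.5 * (neg_cos(u12, seg_mix12) + neg_cos(u21, seg_mix21))
```

When the network is perfectly mix-equivariant, both cosine terms equal 1 and the loss attains its minimum of -1.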
Discussion: Mixup operations could change the shape of target structures, resulting in unrealistic images. To tackle this, as shown in Figure 3, we propose to combine the partial cross entropy (PCE) loss for the labeled pixels of both the mixed and unmixed images, and to leverage mix equivalence to preserve shape consistency at the global level. To further exploit the shape features, we propose to correct the network prediction guided by computed priors, which is described in Section 2.3.

2.3 Modeling and computation of prior

As shown in Figure 1, we model class mixture ratios, spatial prior, and shape prior to better capture global shape information and regularize the network training. As visualized in Figure 4, we compute the spatial energy to reflect the probability of each pixel belonging to each class. We propose a new formulation to estimate the critical prior of label class proportions, referred to as $\pi$, which guides the correction of erroneous network predictions.

Problem statement The segmentation network trained with scribbles tends to generate under-segmented results of the target structures. Considering that the annotated ratios of classes can be imbalanced, scribble supervised learning also brings challenges to the estimation of the class mixture ratios $\pi$.

Under segmentation: As shown in Figure 5, under segmentation refers to results where the size of the segmented structure is generally smaller than the ground truth, a phenomenon caused by the imbalanced annotated proportion and missing shape information. To solve this problem, we propose to estimate $\pi$ and the spatial prior, which are crucial for shape refinement. An accurate estimate of $\pi$ can correct the imbalanced label ratios and enable the model to adjust the size of the segmentation result. The computation of the spatial prior encodes the feature similarity between pixels and rectifies the shape of target structures.
ZScribbleSeg
11

Fig. 5. Two examples of under segmentation, pointed by the red arrows: (a) under-segmented foreground labels from ACDC segmentation, i.e., left ventricle and right ventricle; (b) under-segmented background from MyoPS segmentation.

We encode $\pi$ and the spatial prior with the spatial prior loss, by ranking the spatial energy and selecting the top $\pi$ ratio as the segmentation. To estimate $\pi$, we start from the imbalanced annotated ratios (referred to as $a$) and adapt them from labeled pixels to unlabeled pixels.

Note that the problem of under segmentation can be even worse without the modeling of efficient scribbles. In the case of manually annotated scribbles, the annotations may be distributed in a non-random pattern due to fixed labeling habits, resulting in a biased label distribution across the whole dataset. This problem can be alleviated by simulating randomly distributed labels through our proposed supervision augmentation.

Challenges of $\pi$ estimation: The evaluation of class mixture ratios is a critical bottleneck in semi-, weakly- and non-supervised learning, and serves as the basis of class identification [14] and variance reduction [47,38]. However, existing methods are mainly proposed for binary classification and cannot be adapted to the multi-class scenario directly. For the segmentation task, the class mixture ratios are both imbalanced and interdependent, leading to a performance decrease of previous binary estimation approaches. In addition to the class imbalance problem, scribble supervised segmentation also faces the imbalance of annotated class ratios. For example, the annotated ratio of the background tends to be much smaller than that of the foreground classes.
The imbalance of annotated ratios further increases the difficulty of $\pi$ estimation.

Estimation of class mixture ratios $\pi$ To tackle the under segmentation, we propose to estimate the class mixture ratios within the unlabeled pixels.

Objective: We aim to determine the $\pi$ that maximizes the likelihood of the observed unlabeled pixels. For $n_u$ unlabeled pixels $x = [x_1, x_2, \cdots, x_{n_u}]$ sampled from $p_u(x)$, the likelihood of these unlabeled pixels is formulated as:

$L(\pi) = \prod_{i=1}^{n_u} p_u(x_i) = \prod_{i=1}^{n_u} \left[\sum_{k=1}^{m} p_u(x_i|c_k)\, p_u(c_k)\right],$   (10)

where $p_u(x_i|c_k)$ represents the within-class probability of class $c_k \in \{c_0, \cdots, c_m\}$ for unlabeled pixel $x_i$. We assume the within-class probabilities of labeled and unlabeled pixels to be unchanged. Then, we estimate $\pi = [p_u(c_1), p_u(c_2), \cdots, p_u(c_m)]$ to maximize the likelihood of the unlabeled observations in Eq.(10).

To maximize the likelihood in Eq.(10), we follow the EM algorithm in [24,30] and introduce the unknown variables $s = (s_1, s_2, \cdots, s_{n_u})$, where $s_i$ is a one-hot vector of dimension $m$ whose entry corresponding to the class of $x_i$ equals 1. Then, the likelihood $L(\pi|x, s)$ is written as:

$L(\pi|x, s) = \prod_{i=1}^{n_u} \prod_{k=1}^{m} \left[p_u(x_i|c_k)\, p_u(c_k)\right]^{s_{ik}}.$   (11)

The log likelihood $l(\pi|x, s)$ is derived as:

$l(\pi|x, s) = \sum_{i=1}^{n_u} \sum_{k=1}^{m} s_{ik} \log(p_u(x_i|c_k)) + \sum_{i=1}^{n_u} \sum_{k=1}^{m} s_{ik} \log(p_u(c_k)).$   (12)

E-step: The E-step of the EM algorithm computes the expected value of $l(\pi|x, s)$ given the observations $x$ and the current estimate $\pi^{[t]}$:

$Q(\pi|x, \pi^{[t]}) = E\left[l(\pi|s, x)\,|\,x, \pi^{[t]}\right] = \sum_{i=1}^{n_u} \sum_{k=1}^{m} E(s_{ik}|x_i, \pi^{[t]}_k) \log(p_u(x_i|c_k)) + \sum_{i=1}^{n_u} \sum_{k=1}^{m} E(s_{ik}|x_i, \pi^{[t]}_k) \log(p_u(c_k)),$   (13)

where $E(s_{ik}|x_i, \pi^{[t]}_k)$ is given by:

$E(s_{ik}|x_i, \pi^{[t]}_k) = p(s_{ik} = 1|x_i, \pi^{[t]}_k) = p^{[t]}_u(c_k|x_i).$   (14)

Estimation of $p^{[t]}_u(c_k|x_i)$: To obtain the current estimate of $p^{[t]}_u(c_k|x_i)$, we aim to adapt the posterior probability from labeled pixels to unlabeled pixels.
For labeled pixels, the posterior probability $p_l(c_k|x_i)$ is estimated by the model prediction. Based on our assumption that the within-class probabilities of labeled and unlabeled pixels are the same, for class $c_k$ and pixel $x_i$ we have:

$p_u(x_i|c_k) = p_l(x_i|c_k).$   (15)

Based on Bayes' theorem, the within-class probabilities of a labeled pixel, $p_l(x_i|c_k)$, and an unlabeled pixel, $p_u(x_i|c_k)$, are written as:

$\hat{p}_l(x_i|c_k) = \dfrac{\hat{p}_l(c_k|x_i)\,\hat{p}_l(x_i)}{\hat{p}_l(c_k)},$   (16)

$\hat{p}_u(x_i|c_k) = \dfrac{\hat{p}_u(c_k|x_i)\,\hat{p}_u(x_i)}{\hat{p}_u(c_k)}.$   (17)

By substituting $\hat{p}_u(x_i|c_k)$ in Eq.(17) and $\hat{p}_l(x_i|c_k)$ in Eq.(16) into Eq.(15), we adapt the within-class probabilities from labeled pixels to unlabeled pixels as follows:

$\hat{p}_u(c_k|x_i) = \dfrac{\hat{p}_l(x_i)}{\hat{p}_u(x_i)} \cdot \dfrac{\hat{p}_u(c_k)}{\hat{p}_l(c_k)}\, \hat{p}_l(c_k|x_i).$   (18)

For binary estimation, the mixture ratio is estimated independently for each class, which does not leverage the inter-relationship between classes. For multi-class segmentation, we naturally utilize the condition that the probabilities of all classes sum to 1, i.e.,

$\sum_{k=0}^{m} \hat{p}_u(c_k|x_i) = 1.$   (19)

By combining Eq.(18) and Eq.(19), one obtains:

$1 = \dfrac{\hat{p}_l(x_i)}{\hat{p}_u(x_i)} \sum_{k=0}^{m} \dfrac{\hat{p}_u(c_k)}{\hat{p}_l(c_k)}\, \hat{p}_l(c_k|x_i).$   (20)

Then, $\hat{p}_l(x_i)/\hat{p}_u(x_i)$ is represented as:

$\dfrac{\hat{p}_l(x_i)}{\hat{p}_u(x_i)} = \left[\sum_{k=0}^{m} \hat{p}_u(c_k)\,\hat{p}_l(c_k|x_i)/\hat{p}_l(c_k)\right]^{-1}.$   (21)

By substituting $\hat{p}_l(x_i)/\hat{p}_u(x_i)$ into Eq.(18), we obtain the formulation of $\hat{p}_u(c_k|x_i)$ as follows:

$\hat{p}_u(c_k|x_i) = \dfrac{\hat{p}_u(c_k)\,\hat{p}_l(c_k|x_i)/\hat{p}_l(c_k)}{\sum_{k'=0}^{m} \hat{p}_u(c_{k'})\,\hat{p}_l(c_{k'}|x_i)/\hat{p}_l(c_{k'})}.$   (22)

Therefore, the current estimate of the posterior probability $\hat{p}_u(c_k|x_i)$ is written as:

$\hat{p}^{[t]}_u(c_k|x_i) = \dfrac{\pi^{[t]}_k\, \hat{p}_l(c_k|x_i)/\hat{p}_l(c_k)}{\sum_{k'=0}^{m} \pi^{[t]}_{k'}\, \hat{p}_l(c_{k'}|x_i)/\hat{p}_l(c_{k'})},$   (23)

where $\hat{p}_l(c_k)$ is empirically evaluated by the class frequency within labeled pixels, i.e., $\hat{p}_l(c_k) = n^k_l/n_l$.
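The posterior adaptation of Eq.(23), iterated with the standard mixture-weight M-step of the EM algorithm (averaging the posteriors over the unlabeled pixels, cf. Eq.(25)), can be sketched as follows; the array names are hypothetical:

```python
import numpy as np

def estimate_pi(p_l, a, iters=100):
    """EM estimation of the class mixture ratios pi within unlabeled pixels.
    p_l: (n_u, m+1) network posteriors p_l(c_k|x_i) at the unlabeled pixels;
    a:   (m+1,)   class frequencies within labeled pixels, used both as the
                  estimate of p_l(c_k) and to initialise pi (a_k = n_l^k/n_l)."""
    pi = a.astype(float).copy()
    for _ in range(iters):
        w = pi * p_l / a                          # pi_k * p_l(c_k|x_i) / p_l(c_k)
        p_u = w / w.sum(axis=1, keepdims=True)    # E-step posterior, Eq. (23)
        pi = p_u.mean(axis=0)                     # M-step: average the posteriors
    return pi
```

When the network posteriors at unlabeled pixels match the labeled class frequencies, $\pi = a$ is a fixed point of the iteration, as expected.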
M-step: The M-step maximizes $Q(\pi|x, \pi^{[t]})$ in Eq.(13), i.e.,

$\pi^{[t+1]} := \arg\max_{\pi} Q(\pi|x, \pi^{[t]}).$   (24)

We solve for $\pi^{[t+1]}_k$ as:

$\pi^{[t+1]}_k = \frac{1}{n_u} \sum_{i=1}^{n_u} p^{[t]}_u(c_k|x_i).$   (25)

$\pi^{[t]}_k$ is initialized with the class frequency within labeled pixels, $a$, with $a_k = n^k_l/n_l$. Then, the E-step of Eq.(13) and the M-step of Eq.(25) are repeated until the estimate of $\pi$ converges. The posterior probability $\hat{p}_u(c_k|x_i)$ and the prior probability $\hat{p}_u(c_k)$ are re-estimated in each iteration.

Discussion: The proposed algorithm relies on two conditions. First, we assume the within-class probabilities of labeled and unlabeled pixels to be the same, which means the labeled pixels should be randomly sampled with respect to classes. Second, $\pi$ is initialized with the class frequency of labeled pixels, $a$. Since the annotated ratio of the background is smaller than that of the foreground classes, the prior probabilities of the foreground classes within unlabeled pixels tend to be over-estimated. The first problem can be tackled by modeling efficient scribbles, which achieves a random distribution of annotations. For the second problem, by randomly occluding the image and replacing the occluded area with background, we are able to increase the ratio of the background and alleviate this problem to some extent. Furthermore, we propose to address it with marginal probability maximization, which is explained later in this section.

Computation of spatial energy Given the estimated class mixture ratios, we aim to identify the unlabeled pixels by determining the probability of each pixel belonging to each class. Instead of using the model predictions directly, we further encode the spatial relationship to compensate for the inaccurate results generated by the segmentation network. Inspired by [31], we estimate the spatial energy of unlabeled pixels with an energy term in a dense setting.
Firstly, we use Gaussian kernels $G_{ij}$ to measure the distance between the pixels at positions $i$ and $j$:

$G_{ij} = \exp\left(-\dfrac{(p_i - p_j)^2}{2\sigma^2_p} - \dfrac{(o_i - o_j)^2}{2\sigma^2_o}\right),$   (26)

where $p_i$ represents the position of pixel $x_i$; $o_i$ denotes its color feature; $\sigma_p$ and $\sigma_o$ are the bandwidth parameters for the position and color information, respectively. Shallow features like color and position are specific to the pixel and do not rely on the network prediction. Then, the energy term $\phi_{ij}$ leveraging the prediction $\hat{y}$ is formulated as:

$\phi_{ij}(\hat{y}) = G_{ij}\, \hat{y}_i \hat{y}_j,$   (27)

which denotes the pairwise relationship between two pixels. This energy term connects every pixel with each other within one image. Based on $\phi_{ij}$, we define the elements of the spatial energy $\Phi$ in a dense setting, i.e.,

$\Phi_i(\hat{y}) = \sum_{j \in \Omega_i} \phi_{ij}(\hat{y}),$   (28)

where $\Omega_i = \{j : \|Pos(i) - Pos(j)\| \le r\}$ denotes the neighborhood window of radius $r$. Instead of taking the total energy as the regularization loss as in [31], we consider $\Phi$ as the spatial energy that reflects the relative probability of each pixel belonging to each class.

Spatial prior and shape prior losses The spatial prior loss is computed by ranking the spatial energy and selecting the top $\pi$ proportion of pixels as the segmentation. Considering that adjusting multiple structures directly can be challenging, we instead separate each foreground class from the others, and then tackle the individual structure. Given that the mixture ratios of the foreground classes tend to be over-estimated, we instead leverage the accurate negative pixels filtered by the estimated mixture ratios, and maximize the marginal probability of these pixels belonging to other classes.

Firstly, by ranking the spatial energy and applying the mixture ratio of each class, we are able to distinguish negative pixels from unlabeled pixels. For foreground class $c_k$, we rank the unlabeled pixels according to the spatial energy $\Phi^k$ of class $c_k$ in Eq.(28).
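A brute-force sketch of the dense spatial energy of Eqs. (26)-(28) for a single class. The window $\Omega_i$ is approximated here by a $(2r+1)^2$ square and `np.roll` wraps at the image borders; both are simplifying assumptions of this sketch, not the paper's implementation.

```python
import numpy as np

def spatial_energy(img, y_hat, r=5, sigma_p=6.0, sigma_o=0.1):
    """Dense spatial energy Phi (Eqs. 26-28) for one class.
    img: (H, W) color/intensity feature, y_hat: (H, W) class probability."""
    phi = np.zeros_like(img, dtype=float)
    for di in range(-r, r + 1):
        for dj in range(-r, r + 1):
            if di == 0 and dj == 0:
                continue
            o_j = np.roll(np.roll(img, di, axis=0), dj, axis=1)
            y_j = np.roll(np.roll(y_hat, di, axis=0), dj, axis=1)
            g = np.exp(-(di**2 + dj**2) / (2 * sigma_p**2)
                       - (img - o_j)**2 / (2 * sigma_o**2))    # Eq. (26)
            phi += g * y_hat * y_j                              # Eqs. (27)-(28)
    return phi
```

The default bandwidths match the paper's settings ($\sigma_o = 0.1$, $\sigma_p = 6$, $r = 5$); a practical implementation would vectorize or use a permutohedral filter instead of the explicit double loop.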
Given the estimated mixture ratio $\pi_k$, we set the pixels in the top $\pi_k$ proportion to be positive samples $\Omega_k$. Correspondingly, the remaining pixels are taken as negative pixels, denoted as $\bar{\Omega}_k$. Taking the over-estimated $\pi_k$ into account, we consider the set of negative pixels $\bar{\Omega}_k$ to be more accurate than $\Omega_k$.

Secondly, we design the spatial prior loss ($L_{spatial}$) based on the maximal marginal probability of the negative samples $\bar{\Omega}_k$ belonging to other classes. For each class $c_k$, we take it as foreground and fuse all other classes except $c_k$ into the background. The fused class is denoted as $\bar{c}_k$. For a pixel $x_i$ in $\bar{\Omega}_k$, its marginal probability of belonging to $\bar{c}_k$ equals the sum of the probabilities of the fused classes, i.e., $\hat{p}(\bar{c}_k|x_i, x_i \in \bar{\Omega}_k) = \sum_{k'=1}^{m} \mathbb{1}_{[k' \neq k]}\, \hat{p}(c_{k'}|x_i)$. To maximize the marginal probability of negative pixel $x_i$ belonging to $\bar{c}_k$, we formulate the spatial prior loss as:

$L_{spatial} = -\sum_{k=1}^{m} \sum_{x_i \in \bar{\Omega}_k} \log(\hat{p}(\bar{c}_k|x_i)).$   (29)

The shape prior loss is developed to regularize inter-connected structures in the segmentation results. This loss is adopted to further reduce noise and smooth boundaries. It requires the model prediction to be consistent with its maximum connected area, and minimizes their cross entropy loss, i.e.,

$L_{shape} = -\sum_{k \in \Psi} F(\hat{Y}_k) \log(\hat{Y}_k),$   (30)

where $\Psi$ is the set of label classes with inter-connected structures; $F(\cdot)$ denotes the morphological function that outputs the largest inter-connected area of the input label.

2.4 ZScribbleNet

ZScribbleSeg is achieved via a deep neural network referred to as ZScribbleNet. ZScribbleNet does not depend on any particular network architecture, and can

Table 1. Efficiency analysis of scribble forms for regular structure segmentation of cardiac ventricles (ACDC dataset) and irregular segmentation of myocardial pathology (MyoPS dataset).
Here, $N_{scribble}$ and $N_{pix}$ respectively denote the number of manual draws needed to generate the scribble annotations and the number of annotated pixels, which indicate annotation efforts; $k$ is the number of manual draws (scribbles) and $n$ is the given threshold of annotation efforts, where $k \ll n$. Segmentation results are evaluated on the test set and reported in Dice scores.

Methods       | Nscribble | Npix | Structural segmentation (LV / MYO / RV / Avg)      | Irregular segmentation (Scar / Edema / Avg)
Points        | n         | n    | .876±.134  .801±.089  .858±.081  .845±.107         | .551±.246  .638±.115  .595±.194
Skeleton      | k         | n    | .805±.145  .737±.095  .769±.128  .770±.126         | .504±.213  .057±.022  .281±.271
Random walk   | k         | n    | .798±.173  .698±.153  .753±.157  .744±.165         | .516±.284  .529±.123  .522±.184
DirRandomWalk | k         | n    | .844±.143  .755±.102  .798±.173  .799±.146         | .539±.217  .637±.108  .588±.176

be directly applied to any CNN backbone. For all experiments, we adopt a variant of UNet [1] as the backbone of the segmentation network. As Figure 2 shows, two images are mixed together to perform the supervision augmentation. Then, our ZScribbleNet takes the mixed and unmixed images as input, and outputs their segmentation results.

For model training, images and their scribble annotations are sampled to estimate the training objective $L$, which is formulated as:

$L = L_{pce} + \underbrace{\lambda_1 L_{global} + \lambda_2 L_{spatial} + \lambda_3 L_{shape}}_{unsup},$   (31)

where $L_{pce}$ is the partial cross entropy loss calculated for the annotated pixels in the unmixed and mixed images; the global consistency loss $L_{global}$ in Eq.(9) requires mix equivalence for the supervision augmentation; the spatial prior loss $L_{spatial}$ in Eq.(29) encodes the $\pi$ prior and the spatial prior; the shape regularization loss $L_{shape}$ in Eq.(30) leverages the shape prior; $\lambda_1, \lambda_2, \lambda_3$ are hyper-parameters that balance the relative importance of the loss components.
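The morphological function $F(\cdot)$ of Eq.(30) and the combined objective of Eq.(31) can be sketched as follows. The pure-Python BFS is a stand-in for a morphological library call, and the loss terms are assumed to be precomputed scalars; the default weights follow the paper's empirical settings ($\lambda_1 = 0.05$, $\lambda_2 = \lambda_3 = 1$).

```python
import numpy as np
from collections import deque

def largest_component(mask):
    """F(.) in Eq. (30): keep only the largest 4-connected area of a
    binary mask."""
    h, w = mask.shape
    seen = np.zeros((h, w), dtype=bool)
    best = np.zeros_like(mask)
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                comp, q = [], deque([(i, j)])
                seen[i, j] = True
                while q:                      # breadth-first flood fill
                    ci, cj = q.popleft()
                    comp.append((ci, cj))
                    for ni, nj in ((ci-1, cj), (ci+1, cj), (ci, cj-1), (ci, cj+1)):
                        if 0 <= ni < h and 0 <= nj < w and mask[ni, nj] and not seen[ni, nj]:
                            seen[ni, nj] = True
                            q.append((ni, nj))
                if len(comp) > best.sum():    # keep the biggest component so far
                    best = np.zeros_like(mask)
                    best[tuple(zip(*comp))] = 1
    return best

def total_loss(l_pce, l_global, l_spatial, l_shape, lam=(0.05, 1.0, 1.0)):
    # Eq. (31), combining the supervised PCE term with the unsupervised terms
    return l_pce + lam[0] * l_global + lam[1] * l_spatial + lam[2] * l_shape
```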
In the training phase, we warm-started the networks with the partial cross entropy loss $L_{pce}$, the global consistency loss $L_{global}$, and the shape regularization loss $L_{shape}$ for 100 epochs, and then invoked the spatial prior loss $L_{spatial}$. In the testing phase, the trained network predicts the segmentation of the input image directly.

3 Experiments and Results

We first investigated a variety of scribble forms and analyzed the principles of efficient scribbles in Section 3.2. Then, we performed an ablation study of the proposed ZScribbleSeg in Section 3.3. Finally, we demonstrated the performance of ZScribbleSeg with comparisons to other state-of-the-art methods on various segmentation tasks using four open datasets in Section 3.4.

Fig. 6. Performance of segmentation networks trained with the Points scribble form for different numbers of annotated pixels $N_{pix}$, compared with fully supervised models (FullSupUNet): (a) and (c) visualize Dice scores with respect to $N_{pix}$ on ACDC and MyoPS, respectively. Performance of models trained with the Random walk form for increasing step length $l$, compared with models trained with DirRandomWalk: (b) and (d) show the Dice scores of segmentation on ACDC and MyoPS, respectively, given $N_{pix} = n$.

3.1 Materials

Tasks and datasets Our validation included four segmentation tasks: (1) regular structure segmentation of cardiac ventricles from anatomical imaging using the ACDC dataset, (2) regular structure segmentation from pathology enhanced imaging with a smaller training size using the MSCMRseg dataset, (3) irregular object segmentation of myocardial pathology from multi-modality imaging using the MyoPS dataset, and (4) human pose segmentation from natural scene images using the PPSS dataset.

ACDC dataset was from the MICCAI'17 Automatic Cardiac Diagnosis Challenge [4].
This dataset consists of short-axis cardiac images of the anatomical MRI sequence (BSSFP) from 100 patients, with gold standard segmentations of the cardiac ventricular structures, including the left ventricle blood cavity (LV), left ventricle myocardium (MYO), and right ventricle blood cavity (RV). For the experiments, we randomly divided the 100 subjects into a training set of 70 subjects, a validation set of 15 subjects (used particularly for the ablation study), and a test set of 15 subjects.

MSCMRseg was from the MICCAI'19 Multi-sequence Cardiac MR Segmentation Challenge [59,58], consisting of images from 45 patients with cardiomyopathy and the gold standard segmentations of LV, MYO and RV. We employed the 45 late gadolinium enhanced (LGE) MRI images to evaluate the segmentation of ventricular structures. Following [48], we divided the 45 images into three sets of 25 (training), 5 (validation), and 15 (test) images for all experiments. Note that this structure segmentation is more challenging than that on ACDC due to its smaller training set and pathology enhanced images.

MyoPS dataset was from the MICCAI'20 Myocardial Pathology Segmentation Challenge [26], consisting of paired BSSFP, LGE and T2 cardiac MRI images from 45 patients.
The task was to segment the myocardial pathologies, including scar and edema, which have no regular shape or structure; their segmentation thus represents a different task from the regular structure segmentation. Following the benchmark study [26], we split the data into a training set of 20 pairs, a validation set of 5 pairs, and a test set of 20 pairs.

PPSS refers to the Pedestrian Parsing on Surveillance Scenes dataset [28]. We employed the task of human pose segmentation to validate the generalizability of the models on natural scene images. PPSS is a large scale human parsing dataset including 3673 annotated samples from 171 surveillance videos. The ground truth segmentations of eight classes, including hair, face, upper clothes, arms, lower clothes, legs, shoes, and background, are provided. We used the first 100 surveillance scenes for training and the remaining 71 videos for test.

Evaluation metrics For the experiments on the ACDC, MSCMRseg and MyoPS datasets, we reported the Dice score and Hausdorff Distance (HD) on each foreground class separately, following the practice of medical image segmentation. On the PPSS dataset, we measured the multi-class Dice score following [42], where $\text{Dice} = \frac{2|\hat{y}y|}{|\hat{y}| + |y|}$, and $\hat{y}$ and $y$ denote the multi-channel prediction and ground truth label, respectively.

Pre-processing and implementation The two-dimensional slices from the ACDC and MSCMRseg datasets were of different resolutions. Hence, we first re-sampled all images to a fixed resolution of 1.37 × 1.37 mm and then extracted the central patch of size 212 × 212 for the experiments. For MyoPS, we took the paired slices of BSSFP, LGE, and T2 CMR and cropped their central patches of size 192 × 192. We normalized the intensities of these medical images to zero mean and unit variance. For the PPSS dataset, we first re-sampled all images to the same resolution, and then padded the images to the size of 160 × 160.
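The multi-class Dice score defined above can be sketched as follows, assuming one-hot (K, H, W) arrays for the prediction and the ground truth:

```python
import numpy as np

def dice_score(pred, gt, eps=1e-8):
    """Multi-class Dice = 2|y_hat . y| / (|y_hat| + |y|), computed over all
    channels of one-hot arrays, as used for the PPSS evaluation."""
    inter = float((pred * gt).sum())                 # |y_hat . y|
    return 2.0 * inter / (float(pred.sum() + gt.sum()) + eps)
```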
The intensities of the images were normalized to the range between 0 and 1.

For random occlusion, a square area of 32 × 32 was randomly occluded in each image. For the estimation of the spatial energy, we adopted Gaussian kernels with color bandwidth $\sigma_o = 0.1$, position bandwidth $\sigma_p = 6$, and kernel radius $r = 5$. The hyper-parameters $\lambda_1, \lambda_2, \lambda_3$ in Eq.(31) were empirically set to 0.05, 1, and 1, respectively.

All models were trained with a batch size of 4, a learning rate of 1e-4, and augmentation by flipping and random rotation. We implemented our models with PyTorch. All models were trained on one NVIDIA 3090Ti 24GB GPU for 1000 epochs.

Table 2. Results in Dice scores and Hausdorff Distance (HD) of the ablation study using the ACDC dataset, where the models were evaluated on the validation set. Note that model #6 is ZScribbleSeg. Bold denotes the best result, and underline indicates the second best in each category.

Results in Dice    | Lpce | Efficiency | Lshape | Lglobal | Lspatial | LV        | MYO       | RV        | Avg
model #1           | ✓    | ×          | ×      | ×       | ×        | .863±.089 | .804±.063 | .774±.150 | .813±.112
model #2           | ✓    | ✓          | ×      | ×       | ×        | .870±.100 | .833±.063 | .843±.076 | .848±.082
model #3           | ✓    | ×          | ✓      | ×       | ×        | .915±.068 | .871±.056 | .871±.058 | .886±.064
model #4           | ✓    | ✓          | ×      | ✓       | ×        | .920±.064 | .868±.051 | .886±.051 | .891±.059
model #5           | ✓    | ×          | ×      | ×       | ✓        | .923±.078 | .869±.051 | .889±.056 | .894±.066
model #6           | ✓    | ✓          | ✓      | ✓       | ✓        | .929±.057 | .876±.051 | .892±.049 | .899±.056

Results in HD (mm) | Lpce | Efficiency | Lshape | Lglobal | Lspatial | LV           | MYO         | RV          | Avg
model #1           | ✓    | ×          | ×      | ×       | ×        | 81.86±40.40  | 65.97±33.62 | 60.91±44.62 | 69.58±40.37
model #2           | ✓    | ✓          | ×      | ×       | ×        | 119.78±19.14 | 23.90±17.32 | 52.38±23.40 | 65.35±45.06
model #3           | ✓    | ×          | ✓      | ×       | ×        | 4.45±5.39    | 15.24±23.90 | 25.78±22.44 | 15.16±20.89
model #4           | ✓    | ✓          | ×      | ✓       | ×        | 12.12±18.26  | 29.41±24.56 | 16.97±15.62 | 19.50±20.94
model #5           | ✓    | ×          | ×      | ×       | ✓        | 28.95±36.57  | 44.77±34.69 | 7.51±5.34   | 27.08±32.76
model #6           | ✓    | ✓          | ✓      | ✓       | ✓        | 6.09±8.53    | 11.14±14.53 | 8.86±5.88   | 8.70±10.40

3.2 Efficiency of scribble forms
In this study, we first compared four scribble forms to illustrate the efficacy of randomly annotated scribbles for supervision. Denoting the number of pixels annotated by a manual, skeleton-wise scribble form as $n$, we generated the other scribble forms with the same annotated ratios for a fair comparison. Then, we studied the segmentation performance with respect to the number of pixels annotated by a random and widely ranged scribble form, by setting the number of annotated pixels to different multiples of $n$. Finally, we further explored variants of random walk annotations to show the importance of a wide range in the random distribution of scribbles.

We employed two segmentation tasks, i.e., regular structure segmentation of the cardiac ventricles on the ACDC dataset and irregular segmentation of myocardial pathologies on the MyoPS dataset. To compare the supervision of the scribble forms directly, we trained all models with the partial cross entropy (PCE) loss calculated for the pixels annotated by scribbles. All experimental results are reported on the test set.

Scribble forms One can measure the effort of scribble annotation from two perspectives, i.e., the number of manual draws needed to generate the scribble annotations

Table 3. Results and comparisons of regular structure segmentation on the ACDC dataset. These models were evaluated on the test set.
Methods       | Dice (LV / MYO / RV / Avg)                          | HD in mm (LV / MYO / RV / Avg)
PCE           | .805±.145  .737±.095  .769±.128  .770±.126          | 62.55±36.04  68.30±27.77  59.62±42.62  63.40±35.76
WSL4 [29]     | .835±.164  .825±.032  .787±.191  .792±.166          | 16.48±16.01  24.48±22.74  18.21±11.30  19.72±17.67
GatedCRF [31] | .846±.157  .744±.108  .822±.111  .804±.135          | 37.38±46.37  22.30±15.72  20.88±11.85  26.85±30.03
MAAG [42]     | .879       .817       .752       .816               | 25.23        26.83        22.73        24.93
CVIR [14]     | .866±.127  .797±.102  .737±.130  .800±.130          | 47.51±50.82  10.70±8.39   14.39±9.00   24.20±34.17
nnPU [20]     | .862±.134  .792±.124  .829±.102  .828±.123          | 67.28±48.60  18.60±17.93  14.64±8.39   33.51±38.43
CycleMix [52] | .876±.096  .794±.083  .829±.099  .833±.098          | 16.60±19.90  18.04±17.78  19.09±21.44  17.91±19.57
ShapePU [53]  | .885±.103  .806±.096  .851±.089  .848±.100          | 20.17±22.40  41.81±33.40  20.06±26.43  27.35±29.33
ZScribbleSeg  | .900±.065  .825±.069  .862±.102  .862±.086          | 7.69±6.94    8.93±6.40    12.74±12.48  9.79±9.19
FullSupUNet   | .882±.123  .824±.099  .856±.112  .854±.113          | 11.94±13.58  12.65±12.52  14.82±9.69   13.14±11.97

($N_{scribble}$) and the number of annotated pixels ($N_{pix}$). Given a certain amount of effort, we designed four forms following different generation procedures, i.e., (1) Skeleton, (2) Random walk, (3) Directed random walk (DirRandomWalk), and (4) Points, and compared the segmentation performance of models trained using such scribble annotations for supervision. The details of the scribble forms are described below.

Skeleton indicates the widely adopted scribble form drawn by a rater, who approximately outlines the shape of each label class within the segmentation mask. For a segmentation task with $k$ label classes, including the background, one needs $k$ manual draws (scribbles) per training image. For the ACDC dataset, we adopted the manually annotated skeleton scribbles released by [42]; for the pathologies in the MyoPS dataset, we generated the skeleton scribbles automatically using the skeletonization algorithm [36].
We refer the reader to Appendix B of the supplementary material for generation details.

Random walk starts from a random point within the segmentation mask. The annotation then moves along a random direction of the image lattice within the segmentation mask, with a given step length ($l$, set to 1 by default). We repeated such moves until the ratio or number of annotated pixels reached a threshold ($n$).

Directed random walk, DirRandomWalk for short, is a random walk with momentum. The scribble generated by Random walk tends to cluster within a local area of radius $\sqrt{r}$ given $r$-step walks. To achieve a wide range distribution without manually setting the step length ($l$), we therefore adopted this directed random walk, which prefers moving along the same direction as the previous step. If the next point does not lie in the segmentation mask, we changed the walking direction to the one with the smallest angle to the previous direction.

Points refers to an ideal scribble form, which randomly samples annotated pixels within the segmentation mask. However, it is difficult to generate such scribble annotations in practice, due to the huge number of manual draws, which equals the number of annotated pixels, i.e., $N_{scribble} = N_{pix}$. Therefore, we considered this form as the upper bound of scribble supervision under the same ratio of annotated pixels.

ZScribbleSeg
21

Fig. 7. Visualization of cardiac segmentation on the ACDC dataset. The two slices were from the median and the worst cases by the average Dice scores of all compared methods.
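A sketch of the Random walk and DirRandomWalk generation procedures described above; the restart-on-stuck rule and the momentum approximation (try the previous direction first) are our assumptions, not the paper's exact algorithm.

```python
import numpy as np

def random_walk_scribble(mask, n_pix, step=1, momentum=False, rng=None):
    """Grow a scribble of n_pix annotated pixels inside a binary mask.
    momentum=True approximates DirRandomWalk by trying the previous
    direction first; when no move stays inside the mask, we restart from
    a random in-mask point."""
    rng = rng or np.random.default_rng(0)
    h, w = mask.shape
    ys, xs = np.nonzero(mask)
    n_pix = min(n_pix, len(ys))
    dirs = np.array([[0, 1], [0, -1], [1, 0], [-1, 0]])

    def random_point():
        i = rng.integers(len(ys))
        return np.array([ys[i], xs[i]])

    pos, prev = random_point(), dirs[rng.integers(4)]
    scribble = np.zeros_like(mask)
    scribble[pos[0], pos[1]] = 1
    for _ in range(10000):                       # safety bound
        if scribble.sum() >= n_pix:
            break
        cands = dirs[rng.permutation(4)]
        if momentum:
            cands = np.concatenate((prev[None], cands))
        for d in cands:
            nxt = pos + step * d
            if 0 <= nxt[0] < h and 0 <= nxt[1] < w and mask[nxt[0], nxt[1]]:
                pos, prev = nxt, d
                break
        else:                                    # stuck: restart inside the mask
            pos = random_point()
        scribble[pos[0], pos[1]] = 1
    return scribble
```

Each annotated pixel stays inside the mask by construction; larger step lengths spread the walk, mimicking the wider label distribution studied in Figure 6 (b) and (d).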
Results Given the same number of annotated pixels, we show the effect of the different scribble forms on regular structures (ACDC) and irregular objects (MyoPS). As Table 1 illustrates, when the four scribble forms had the same number of annotated pixels $N_{pix}$, Points achieved the best Dice scores on both the structural and the irregular segmentation tasks, thanks to the randomness and wide range distribution of its scribbles. However, when we limited the number of manual draws to be the same, DirRandomWalk became more favorable, as the Points scribble form can be impractical. Furthermore, Skeleton was shown to be the least efficient form; in particular, the segmentation network trained on such annotations performed poorly on the irregular object segmentation task. This was probably because, when the target is difficult to outline, the Skeleton form can fail to portray the entire segmentation, leading to poor performance or even a failure in training the segmentation networks. In contrast, randomly distributed scribble forms, such as Random walk and DirRandomWalk, demonstrated their superiority, particularly on irregular object segmentation, with remarkable improvements in average Dice over Skeleton of 24.1% and 30.7%, respectively.

Number of annotated points: By varying the number of annotated pixels ($N_{pix}$), we validated the influence of the annotated proportion on scribble supervised segmentation. As shown in Figure 6 (a) and (c), model performance tended to improve as $N_{pix}$ increased, indicating that model training benefited from a larger proportion of annotated pixels. One can observe from Figure 6 (a) that the segmentation performance started to converge when $N_{pix}$ reached $2n$. By contrast, for the more difficult segmentation task on irregular objects, as Figure 6 (c) illustrates, the model performance converged only after $N_{pix} \ge 4n$.
Wide-ranged distribution: We further investigated the influence of the wide range distribution of scribbles, by training networks with varying step length l in Random walk. As the step length increased, the label distribution range of Random walk gradually expanded. From Figure 6 (b) and (d), one can see that the segmentation performance in average Dice improved as the step length increased, and gradually converged to that of DirRandomWalk. This confirmed that widely distributed scribbles provided finer supervision under the same number of draws and annotated pixels.
22 K Zhang & X Zhuang
Table 4. Results and comparisons of regular structure segmentation on pathology enhanced images (LGE CMR) using MSCMRseg dataset.

Methods        | Dice: LV / MYO / RV / Avg                      | HD (mm): LV / MYO / RV / Avg
PCE            | .514±.078 / .582±.067 / .058±.023 / .385±.243  | 259.4±14.19 / 228.1±21.36 / 257.4±12.43 / 248.3±21.63
WSL4 [29]      | .902±.040 / .815±.033 / .828±.101 / .848±.076  | 55.95±4.88 / 42.07±13.48 / 32.08±6.57 / 43.37±31.04
GatedCRF [31]  | .917±.044 / .825±.032 / .848±.073 / .863±.066  | 25.72±4.37 / 37.92±5.10 / 32.83±5.59 / 32.16±7.11
CVIR [14]      | .331±.076 / .371±.088 / .404±.110 / .368±.095  | 259.2±14.23 / 243.0±13.76 / 180.9±55.44 / 227.7±47.63
nnPU [20]      | .341±.067 / .538±.081 / .432±.100 / .437±.115  | 259.4±14.19 / 201.6±66.98 / 199.7±57.50 / 220.2±57.70
CycleMix [52]  | .748±.064 / .730±.047 / .835±.041 / .771±.069  | 224.59±35.27 / 28.26±20.77 / 73.36±51.39 / 108.74±92.65
ShapePU [53]   | .880±.046 / .785±.080 / .833±.087 / .833±.082  | 178.02±50.93 / 178.05±25.39 / 189.35±55.78 / 181.81±45.27
ZScribbleSeg   | .922±.039 / .834±.039 / .854±.055 / .870±.058  | 12.10±14.70 / 16.52±19.14 / 51.03±39.27 / 26.55±31.39
FullSupUNet    | .909±.049 / .821±.054 / .826±.087 / .852±.076  | 10.02±12.36 / 11.89±11.34 / 56.91±41.99 / 26.27±33.63

3.3 Ablation study
We studied the effectiveness of the proposed strategies in modeling efficient scribbles and prior regularization for ZScribbleNet.
We used the ACDC dataset and the expert-made scribble annotations released by [42], and evaluated the model performance on the validation set. We compared six ablated models, trained with or without the modeling of efficient scribbles (denoted as Efficiency), and with different combinations of the four loss functions, i.e., the partial cross entropy (Lpce), the global consistency loss (Lglobal) in Eq.(9), the spatial prior loss (Lspatial) in Eq.(29), and the shape prior loss (Lshape) in Eq.(30).
Table 2 presents the results. When model #2 adopted the proposed supervision augmentation to model efficient scribbles (indicated by the column of Efficiency), its performance improved compared to model #1, as one can see from their average Dice scores (0.848 vs. 0.813) and average HDs (65.35 mm vs. 69.58 mm). This demonstrated the benefit of training the model with the augmented supervision. When combining the supervision augmentation with the global consistency loss (Lglobal), leading to model #4, the performance was further boosted with remarkable improvements, namely a 4.3% gain in Dice (0.891 vs. 0.848) and over 45 mm error reduction in HD (19.50 mm vs. 65.35 mm). Alternatively, when leveraging inter-connectivity via the shape regularization loss (Lshape), model #3 obtained an overwhelming improvement in HD, which was reduced from 69.58 mm to only 15.16 mm compared to model #1, indicating results with far less noisy and outlier segmentation. We then further investigated the advantage of the spatial prior (Lspatial) in training ZScribbleNet. One can see from the result of model #5 that it achieved the most evident gain in Dice, with an improvement of 8.1% (0.894 vs. 0.813) by solely including this one extra loss. Finally, our ZScribbleSeg (model #6) achieved the best performance, with average Dice of 0.899 and HD of 8.70 mm.
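Of the four losses in the ablation, the partial cross entropy Lpce is the simplest: an ordinary cross entropy restricted to the scribble-annotated pixels, so unlabeled pixels contribute nothing. A minimal NumPy sketch follows; the function name and the convention of marking unannotated pixels with -1 are our assumptions, not the paper's notation:

```python
import numpy as np

def partial_cross_entropy(probs, labels, ignore_index=-1):
    """Cross entropy averaged over scribble-annotated pixels only.

    probs:  (C, H, W) softmax probabilities from the network.
    labels: (H, W) integer scribble labels; unannotated pixels carry
            `ignore_index` and contribute nothing to the loss.
    """
    annotated = labels != ignore_index
    if not annotated.any():
        return 0.0
    y = labels[annotated]                 # (N,) class ids of labeled pixels
    p = probs[:, annotated]               # (C, N) probabilities at those pixels
    p_true = p[y, np.arange(y.size)]      # probability of the true class
    return float(-np.log(p_true + 1e-12).mean())
```

The other three losses (Lglobal, Lspatial, Lshape) depend on equations not reproduced in this excerpt, so we do not sketch them here.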
This indicated that the combination of efficient scribbles and priors endowed the algorithm with substantial supervision and prior knowledge for scribble-supervised segmentation.
[Figure 8: example slices (Image, Scribble, Ground Truth) and segmentations by PCE, CVIR, nnPU, WSL4, GatedCRF, CycleMix, ShapePU, ZScribbleSeg and FullSupUNet, annotated with per-case average Dice scores; LV, MYO and RV are color-coded.]
Fig. 8. Visualization of cardiac segmentation on LGE CMR using MSCMRseg dataset. The two slices were from the median and the worst cases by the average Dice scores of all compared methods.
3.4 Performance and Comparisons
We conducted experiments over the four segmentation tasks stated in Section 3.1. (1) For the structural segmentation of cardiac ventricles from the ACDC dataset, we used the expert-made scribbles released by [42]. (2) For the cardiac structural segmentation from the pathology enhanced imaging (MSCMRseg) dataset, we used the manually annotated scribbles released by [52]. (3) For the irregular myocardial pathology segmentation from the MyoPS dataset, we first adopted the standard skeletonization algorithm for the simulated scribble annotation of pathologies [36]. Then, we manually annotated skeleton scribbles for the structures of LV, Myo, RV and background. (4) For the human pose segmentation from the PPSS dataset, we adopted the scribble annotations generated by the standard skeletonization algorithm [36].
We compared ZScribbleSeg with eight to nine methods, depending on the task. We first implemented the PCE loss (Lpce) as a baseline method (referred to as PCE). Then, we implemented four state-of-the-art (SOTA) scribble supervised segmentation methods, i.e., WSL4 [29], GatedCRF [31], CycleMix [52], and ShapePU [53], to run the same experiments. We cited the ACDC and PPSS results reported in [42] for the MAAG method, which is also a SOTA method for this task.
Furthermore, we adopted two semi-supervised SOTA methods based on positive unlabeled learning, i.e., CVIR [14] and nnPU [20], and re-implemented them to adapt them to the scribble-supervised segmentation tasks. For more details of the adaptation, the reader is referred to Appendix C of the supplementary material. Finally, we trained UNet with full annotations as a baseline of the fully-supervised approach (referred to as FullSupUNet). Note that the post-processing steps of all experiments were removed for a fair comparison.
Structure segmentation from anatomical images Table 3 presents the Dice and HD results of the 10 approaches for regular structure segmentation of cardiac ventricles from the ACDC dataset. One can observe that ZScribbleSeg achieved an average Dice of 0.862 and HD of 9.79 mm, evidently outperforming the other scribble-supervised methods. The quantitative results of ZScribbleSeg were comparable to (or slightly better than) those of the fully supervised method (FullSupUNet), whose average Dice and HD were 0.854 and 13.14 mm, respectively.
Particularly, the HD results of ZScribbleSeg (9.79 mm) and FullSupUNet (13.14 mm) were evidently much better than those of the other methods. Note that HD is highly sensitive to noisy and outlier segmentation results, which are commonly seen when the supervision of global shape information is not sufficient.
Table 5. Results and comparisons of irregular segmentation of myocardial pathologies on MyoPS dataset.
Methods        | Dice: Scar / Edema / Avg                   | HD (mm): Scar / Edema / Avg
PCE            | 0.504±0.213 / 0.057±0.022 / 0.281±0.271    | 82.68±33.95 / 147.61±20.59 / 115.15±43.00
WSL4 [29]      | 0.031±0.029 / 0.106±0.033 / 0.069±0.049    | 172.37±45.13 / 170.05±20.44 / 171.20±34.60
GatedCRF [31]  | 0.020±0.013 / 0.042±0.020 / 0.031±0.019    | 173.60±44.98 / 170.10±20.44 / 171.8±34.53
CVIR [14]      | 0.505±0.214 / 0.080±0.031 / 0.293±0.263    | 61.59±32.09 / 125.27±20.83 / 93.43±41.86
nnPU [20]      | 0.530±0.241 / 0.085±0.035 / 0.308±0.282    | 48.88±23.55 / 125.27±20.83 / 87.07±44.47
CycleMix [52]  | 0.550±0.237 / 0.626±0.124 / 0.588±0.191    | 65.64±42.81 / 81.97±40.87 / 73.81±42.13
ShapePU [53]   | 0.558±0.237 / 0.615±0.144 / 0.587±0.205    | 57.33±31.58 / 53.00±31.42 / 55.16±31.17
ZScribbleSeg   | 0.596±0.237 / 0.676±0.113 / 0.636±0.188    | 46.73±20.04 / 47.05±24.30 / 46.89±21.98
FullSupUNet    | 0.607±0.253 / 0.659±0.135 / 0.633±0.202    | 55.35±35.73 / 63.53±33.15 / 59.44±34.27

[Figure 9: example slices (Image, Scribble, Ground Truth) and segmentations by PCE, CVIR, nnPU, WSL4, GatedCRF, CycleMix, ShapePU, ZScribbleNet and FullSupUNet, annotated with per-case Dice scores for scar and edema; scar and edema are color-coded.]
Fig. 9. Visualization of irregular segmentation of myocardial pathologies on MyoPS dataset. The two slices were from the median cases by average Dice scores of edema or scar segmentation of all compared methods.
The results indicate the proposed efficient scribble modeling and prior regularization were able to alleviate the problem of inadequate supervision and incomplete shape information from training images with scribble annotations. Finally, Figure 7 visualizes two typical cases (median and worst) for illustration.
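For reference, the Dice coefficient used throughout these comparisons is the standard per-class overlap measure, Dice = 2|P∩G| / (|P|+|G|). A minimal implementation (naming is ours):

```python
import numpy as np

def dice_score(pred, gt, cls):
    """Dice coefficient for one class: 2|P∩G| / (|P| + |G|).

    pred, gt: integer label maps of the same shape.
    cls:      the class id to evaluate.
    """
    p = (pred == cls)
    g = (gt == cls)
    denom = p.sum() + g.sum()
    if denom == 0:          # class absent from both maps: perfect agreement
        return 1.0
    return 2.0 * np.logical_and(p, g).sum() / denom
```

The Hausdorff distance (HD) also reported in the tables requires boundary extraction and a distance transform, so it is not sketched here.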
Structure segmentation from pathology enhanced images The anatomical segmentation from pathology enhanced images, i.e., LGE CMR of the MSCMRseg dataset, was a more challenging task compared to that of the ACDC dataset. This is because MSCMRseg was a smaller dataset (25 vs. 70 training subjects), and the image quality and appearance pattern of LGE CMR could be worse and more complex.
Table 4 provides the quantitative results, and Figure 8 visualizes two representative examples (median and worst) for demonstration. ZScribbleSeg achieved promising performance and better Dice and HD results than the other SOTA methods for scribble supervised segmentation. Notice that for this particularly challenging task, the two general semi-supervised segmentation methods, i.e., CVIR and nnPU, could not work properly, which was confirmed by the two failed segmentation examples visualized in Figure 8.
Finally, similar to the results in the previous study (Section 3.4), ZScribbleSeg and FullSupUNet achieved less noisy segmentation, affirmed by the remarkably better HD results in Table 4. Hence, we reaffirm the conclusion that the proposed ZScribbleNet received greatly augmented supervision and global shape information via the proposed efficient scribble modeling and prior regularization.
Irregular segmentation For segmentation of objects with heterogeneous shape features, it becomes particularly challenging to learn accurate shape information for inference. We evaluated ZScribbleSeg on such a challenging task of irregular segmentation using myocardial pathology segmentation (MyoPS), where we removed the shape regularization term Lshape, as pathologies lack such a shape property.
Table 5 shows the performance in detail, and Figure 9 visualizes two typical cases, i.e., the median cases by average Dice scores of edema and scar segmentation, respectively.
One can find that the advantages of the proposed methodologies were demonstrated evidently in this challenging task, as the performance gains, in terms of both Dice and HD, were significant from CycleMix and ShapePU and finally to ZScribbleSeg, compared to PCE, WSL4, GatedCRF, CVIR and nnPU (p < 0.001). In fact, the five compared methods failed in the scribble-supervised segmentation of edema, and WSL4 and GatedCRF failed in the segmentation of scar as well. This is illustrated in the visualized examples in Figure 9. Although WSL4 and GatedCRF worked well with scribble supervision in the above two regular structure segmentation tasks, here they suffered severely from noisy labels due to their dependence on training with pseudo labels, which led to the failure of model training. Furthermore, due to the similar texture between edema and surrounding tissues in all imaging modalities, it could be extremely difficult to segment such pathology relying solely on training images without robust estimation and regularization of class mixture ratios. One can see from the results that this failed all five compared methods in edema segmentation. By contrast, ShapePU and ZScribbleSeg succeeded in this task thanks to their own methods of estimating the class prior π and applying spatial regularization, which is affirmed by the fact that they both achieved HDs comparable to those of FullSupUNet for scar and edema segmentation. Notice that CycleMix did not show such good performance in terms of HD, but it achieved comparably good Dice scores thanks to its adoption of supervision augmentation.
Segmentation from natural scenes We further validated the broad utility of ZScribbleSeg on the human pose segmentation task of natural scene images. We applied all the methods to the PPSS dataset, which consists of pedestrian images with occlusions, captured by different cameras with different resolutions.
Table 6 presents the details, together with the summarized results from the previous three studies, i.e., ACDC, MSCMRseg and MyoPS. Similar to the three medical image segmentation tasks, the model of ZScribbleSeg generalized well to this 3-channel colored natural image segmentation task, with performance comparable to FullSupUNet and Dice accuracy setting a new state of the art for scribble supervised segmentation.
Figure 10 visualizes three special cases, i.e., the best, median and worst cases according to the average Dice of all compared methods. One can see from the figures that ZScribbleNet performed robustly and generated realistic segmentation with less noisy results, particularly compared with the other scribble supervised methods and the fully supervised one (FullSupUNet).
[Figure 10: example images (Image, Scribble, Ground Truth) and segmentations by PCE, CVIR, nnPU, ShapePU, CycleMix, WSL4, ZScribbleSeg and FullSupUNet for the best, median and worst cases, annotated with per-case average Dice scores.]
Fig. 10. Visualization of results on PPSS dataset. The selected subjects were the best, median and worst cases by the average Dice scores of all compared methods.
Table 6. Dice results of the 10 methods on the four datasets. Note that sizes of training sets are given in the brackets.
Methods        | ACDC (70)  | MSCMRseg (25) | MyoPS (20) | PPSS (2828)
PCE            | .770±.126  | .385±.243     | .281±.271  | .805±.063
WSL4 [29]      | .792±.166  | .848±.076     | -          | .762±.045
GatedCRF [31]  | .804±.135  | .825±.032     | -          | -
MAAG [42]      | .816       | -             | -          | .746
CVIR [14]      | .800±.130  | .368±.095     | .293±.263  | .809±.054
nnPU [20]      | .828±.123  | .437±.115     | .308±.282  | .794±.055
CycleMix [52]  | .833±.098  | .771±.069     | .588±.191  | .835±.050
ShapePU [53]   | .848±.100  | .833±.082     | .587±.205  | .823±.055
ZScribbleSeg   | .862±.086  | .870±.058     | .636±.188  | .838±.050
FullSupUNet    | .854±.113  | .852±.076     | .633±.202  | .843±.071

4 Conclusion
In this work, we have presented a new framework for scribble-supervised segmentation, i.e., ZScribbleSeg, which integrates efficient scribbles and prior regularization with the implementation of a deep neural network (ZScribbleNet). ZScribbleSeg exploits the principles of "good scribble annotations", and effectively augments the scribble supervision of ZScribbleNet via mixup-occlusion operations and global consistency regularization. Then, we explored capturing global information by incorporating prior information, particularly through the proposed spatial prior loss and shape prior loss. The spatial prior loss was based on the estimated spatial energy and the label class mixture proportions π. The former provides a new means to identify the probability of unlabeled pixels belonging to each class without directly using model predictions; the latter was developed based on a novel estimation method and was aimed at correcting problematic predictions via the regularization of the spatial prior loss.
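The paper's exact formulation of the spatial prior is not reproduced in this excerpt, but the role of the mixture proportions π can be illustrated schematically: given an estimated proportion per class, one can select, for each class, the corresponding fraction of unlabeled pixels with the highest predicted probability as likely members of that class. The following is only a hedged sketch of this idea, with names of our own choosing, not the authors' method:

```python
import numpy as np

def select_by_class_ratio(probs, ratios):
    """Pick, for each class c, the top round(ratios[c] * N) pixels by
    predicted probability -- a schematic use of class mixture
    proportions to identify likely members of each class.

    probs:  (C, N) softmax probabilities over N unlabeled pixels.
    ratios: length-C estimated mixture proportions (summing to <= 1).
    Returns a boolean (C, N) selection mask.
    """
    C, N = probs.shape
    sel = np.zeros((C, N), dtype=bool)
    for c in range(C):
        k = int(round(ratios[c] * N))
        if k > 0:
            top = np.argsort(probs[c])[-k:]  # indices of the k highest probs
            sel[c, top] = True
    return sel
```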
To examine the performance of ZScribbleSeg, we investigated a variety of segmentation tasks, including regular structural segmentation of cardiac ventricles from anatomical imaging data (using the ACDC dataset), regular structural segmentation of pathology enhanced imaging data (MSCMRseg), irregular object segmentation from multi-modality imaging (MyoPS), and human pose segmentation from natural scenes (PPSS). Compared to other approaches, ZScribbleSeg has shown great competence and achieved performance comparable to the fully supervised UNet method. Particularly, thanks to the augmented supervision and prior regularization, ZScribbleSeg performed well and demonstrated reliability and generalizability in the scenarios with a small training set (the MSCMRseg task) and irregular structure segmentation (the MyoPS task), on both of which several other state-of-the-art approaches failed.
References
1. Baumgartner, C.F., Koch, L.M., Pollefeys, M., Konukoglu, E.: An exploration of 2d and 3d deep learning techniques for cardiac mr image segmentation. In: International Workshop on Statistical Atlases and Computational Models of the Heart. pp. 111–119. Springer (2017)
2. Bearman, A., Russakovsky, O., Ferrari, V., Fei-Fei, L.: What's the point: Semantic segmentation with point supervision. In: European conference on computer vision. pp. 549–565. Springer (2016)
3. Bekker, J., Davis, J.: Estimating the class prior in positive and unlabeled data through decision tree induction. In: Proceedings of the AAAI Conference on Artificial Intelligence. vol. 32 (2018)
4.
Bernard, O., Lalande, A., Zotti, C., Cervenansky, F., Yang, X., Heng, P.A., Cetin, I., Lekadir, K., Camara, O., Gonzalez Ballester, M.A., Sanroma, G., Napel, S., Petersen, S., Tziritas, G., Grinias, E., Khened, M., Kollerathu, V.A., Krishnamurthi, G., Rohé, M.M., Pennec, X., Sermesant, M., Isensee, F., Jäger, P., Maier-Hein, K.H., Full, P.M., Wolf, I., Engelhardt, S., Baumgartner, C.F., Koch, L.M., Wolterink, J.M., Išgum, I., Jang, Y., Hong, Y., Patravali, J., Jain, S., Humbert, O., Jodoin, P.M.: Deep learning techniques for automatic mri cardiac multi-structures segmentation and diagnosis: Is the problem solved? IEEE Transactions on Medical Imaging 37(11), 2514–2525 (2018). https://doi.org/10.1109/TMI.2018.2837502
5. Bishop, C.M.: Training with noise is equivalent to tikhonov regularization. Neural computation 7(1), 108–116 (1995)
6. Bishop, C.M., Nasrabadi, N.M.: Pattern recognition and machine learning, vol. 4. Springer (2006)
7. Can, Y.B., Chaitanya, K., Mustafa, B., Koch, L.M., Konukoglu, E., Baumgartner, C.F.: Learning to segment medical images with scribble-supervision alone. In: DLMIA/ML-CDS@MICCAI (2018)
8. Chaitanya, K., Karani, N., Baumgartner, C.F., Becker, A., Donati, O., Konukoglu, E.: Semi-supervised and task-driven data augmentation. In: International conference on information processing in medical imaging. pp. 29–41. Springer (2019)
9. Chen, L.C., Papandreou, G., Kokkinos, I., Murphy, K., Yuille, A.L.: Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE transactions on pattern analysis and machine intelligence 40(4), 834–848 (2017)
10. DeVries, T., Taylor, G.W.: Improved regularization of convolutional neural networks with cutout. arXiv preprint arXiv:1708.04552 (2017)
11. Du Plessis, M., Niu, G., Sugiyama, M.: Convex formulation for learning from positive and unlabeled data.
In: International conference on machine learning. pp. 1386–1394. PMLR (2015)
12. Du Plessis, M.C., Niu, G., Sugiyama, M.: Analysis of learning from positive and unlabeled data. Advances in neural information processing systems 27, 703–711 (2014)
13. Gao, S., Zhuang, X.: Robust approximations of low-rank minimization for tensor completion. Neurocomputing 379, 319–333 (2020)
14. Garg, S., Wu, Y., Smola, A.J., Balakrishnan, S., Lipton, Z.: Mixture proportion estimation and pu learning: A modern approach. Advances in Neural Information Processing Systems 34 (2021)
15. Huang, Z., Wang, X., Wang, J., Liu, W., Wang, J.: Weakly-supervised semantic segmentation network with deep seeded region growing. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 7014–7023 (2018)
16. Ji, Z., Shen, Y., Ma, C., Gao, M.: Scribble-based hierarchical weakly supervised learning for brain tumor segmentation. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. pp. 175–183. Springer (2019)
17. Khoreva, A., Benenson, R., Hosang, J., Hein, M., Schiele, B.: Simple does it: Weakly supervised instance and semantic segmentation. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 876–885 (2017)
18. Kim, J.H., Choo, W., Song, H.O.: Puzzle mix: Exploiting saliency and local statistics for optimal mixup. In: International Conference on Machine Learning (ICML) (2020)
19. Kim, J., Choo, W., Jeong, H., Song, H.O.: Co-mixup: Saliency guided joint mixup with supermodular diversity. In: International Conference on Learning Representations (2021)
20. Kiryo, R., Niu, G., du Plessis, M.C., Sugiyama, M.: Positive-unlabeled learning with non-negative risk estimator. In: Advances in Neural Information Processing Systems. vol. 30 (2017)
21.
Koch, L.M., Rajchl, M., Bai, W., Baumgartner, C.F., Tong, T., Passerat-Palmbach, J., Aljabar, P., Rueckert, D.: Multi-atlas segmentation using partially annotated data: methods and annotation strategies. IEEE transactions on pattern analysis and machine intelligence 40(7), 1683–1696 (2017)
22. Kohl, S., Romera-Paredes, B., Meyer, C., De Fauw, J., Ledsam, J.R., Maier-Hein, K., Eslami, S., Jimenez Rezende, D., Ronneberger, O.: A probabilistic u-net for segmentation of ambiguous images. Advances in neural information processing systems 31 (2018)
23. Laine, S., Aila, T.: Temporal ensembling for semi-supervised learning. arXiv preprint arXiv:1610.02242 (2016)
24. Latinne, P., Saerens, M., Decaestecker, C.: Adjusting the outputs of a classifier to new a priori probabilities may significantly improve classification accuracy: evidence from a multi-class problem in remote sensing. In: ICML. vol. 1, pp. 298–305 (2001)
25. LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. nature 521(7553), 436–444 (2015)
26. Li, L., Wu, F., Wang, S., Luo, X., Martin-Isla, C., Zhai, S., Zhang, J., Liu, Y., Zhang, Z., Ankenbrand, M.J., et al.: Myops: A benchmark of myocardial pathology segmentation combining three-sequence cardiac magnetic resonance images. arXiv preprint arXiv:2201.03186 (2022)
27. Lin, D., Dai, J., Jia, J., He, K., Sun, J.: Scribblesup: Scribble-supervised convolutional networks for semantic segmentation. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 3159–3167 (2016)
28. Luo, P., Wang, X., Tang, X.: Pedestrian parsing via deep decompositional network. In: Proceedings of the IEEE international conference on computer vision. pp. 2648–2655 (2013)
29. Luo, X., Hu, M., Liao, W., Zhai, S., Song, T., Wang, G., Zhang, S.: Scribble-supervised medical image segmentation via dual-branch network and dynamically mixed pseudo labels supervision.
In: Medical Image Computing and Computer Assisted Intervention (2022)
30. McLachlan, G.J., Krishnan, T.: The EM algorithm and extensions. John Wiley & Sons (2007)
31. Obukhov, A., Georgoulis, S., Dai, D., Gool, L.V.: Gated crf loss for weakly supervised semantic image segmentation. ArXiv abs/1906.04651 (2019)
32. Obukhov, A., Georgoulis, S., Dai, D., Van Gool, L.: Gated crf loss for weakly supervised semantic image segmentation. arXiv preprint arXiv:1906.04651 (2019)
33. Ouali, Y., Hudelot, C., Tami, M.: Semi-supervised semantic segmentation with cross-consistency training. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 12674–12684 (2020)
34. Papandreou, G., Chen, L.C., Murphy, K.P., Yuille, A.L.: Weakly-and semi-supervised learning of a deep convolutional network for semantic image segmentation. In: Proceedings of the IEEE international conference on computer vision. pp. 1742–1750 (2015)
35. Pathak, D., Shelhamer, E., Long, J., Darrell, T.: Fully convolutional multi-class multiple instance learning. arXiv preprint arXiv:1412.7144 (2014)
36. Rajchl, M., Koch, L.M., Ledig, C., Passerat-Palmbach, J., Misawa, K., Mori, K., Rueckert, D.: Employing weak annotations for medical image analysis problems. arXiv preprint arXiv:1708.06297 (2017)
37. Ramaswamy, H., Scott, C., Tewari, A.: Mixture proportion estimation via kernel embeddings of distributions. In: International conference on machine learning. pp. 2052–2060. PMLR (2016)
38. Sakai, T., Plessis, M.C., Niu, G., Sugiyama, M.: Semi-supervised classification based on classification from positive and unlabeled data. In: International conference on machine learning. pp. 2998–3006. PMLR (2017)
39. Tajbakhsh, N., Jeyaseelan, L., Li, Q., Chiang, J.N., Wu, Z., Ding, X.: Embracing imperfect datasets: A review of deep learning solutions for medical image segmentation. Medical Image Analysis 63, 101693 (2020)
40.
Tang, M., Perazzi, F., Djelouah, A., Ayed, I.B., Schroers, C., Boykov, Y.: On regularized losses for weakly-supervised cnn segmentation. In: ECCV (2018)
41. Tarvainen, A., Valpola, H.: Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In: Guyon, I., Luxburg, U.V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R. (eds.) Advances in Neural Information Processing Systems. vol. 30. Curran Associates, Inc. (2017)
42. Valvano, G., Leo, A., Tsaftaris, S.A.: Learning to segment from scribbles using multi-scale adversarial attention gates. IEEE Transactions on Medical Imaging pp. 1–1 (2021). https://doi.org/10.1109/TMI.2021.3069634
43. Verma, V., Lamb, A., Beckham, C., Najafi, A., Mitliagkas, I., Lopez-Paz, D., Bengio, Y.: Manifold mixup: Better representations by interpolating hidden states. In: International Conference on Machine Learning. pp. 6438–6447. PMLR (2019)
44. Wang, D., Zhang, Y., Zhang, K., Wang, L.: Focalmix: Semi-supervised learning for 3d medical image detection. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 3951–3960 (2020)
45. Wang, W., Sun, G., Van Gool, L.: Looking beyond single images for weakly supervised semantic segmentation learning. IEEE Transactions on Pattern Analysis and Machine Intelligence (2022)
46. Wang, Y., Zhang, J., Kan, M., Shan, S., Chen, X.: Self-supervised equivariant attention mechanism for weakly supervised semantic segmentation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 12275–12284 (2020)
47. Wu, F., Zhuang, X.: Minimizing estimated risks on unlabeled data: A new formulation for semi-supervised medical image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence (2022)
48.
Yue, Q., Luo, X., Ye, Q., Xu, L., Zhuang, X.: Cardiac segmentation from lge mri using deep neural network incorporating shape and spatial priors. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. pp. 559–567. Springer (2019)
49. Yun, S., Han, D., Oh, S.J., Chun, S., Choe, J., Yoo, Y.: Cutmix: Regularization strategy to train strong classifiers with localizable features. In: International Conference on Computer Vision (ICCV) (2019)
50. Zhang, B., Xiao, J., Jiao, J., Wei, Y., Zhao, Y.: Affinity attention graph neural network for weakly supervised semantic segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence (2021)
51. Zhang, H., Cisse, M., Dauphin, Y.N., Lopez-Paz, D.: mixup: Beyond empirical risk minimization. International Conference on Learning Representations (2018), https://openreview.net/forum?id=r1Ddp1-Rb
52. Zhang, K., Zhuang, X.: Cyclemix: A holistic strategy for medical image segmentation from scribble supervision. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 11656–11665 (2022)
53. Zhang, K., Zhuang, X.: Shapepu: A new pu learning framework regularized by global consistency for scribble supervised cardiac segmentation. In: Medical Image Computing and Computer Assisted Intervention (2022)
54. Zhang, P., Zhong, Y., Li, X.: Accl: Adversarial constrained-cnn loss for weakly supervised medical image segmentation (2020)
55. Zheng, S., Jayasumana, S., Romera-Paredes, B., Vineet, V., Su, Z., Du, D., Huang, C., Torr, P.H.: Conditional random fields as recurrent neural networks. In: Proceedings of the IEEE international conference on computer vision. pp. 1529–1537 (2015)
56. Zhou, B., Khosla, A., Lapedriza, A., Oliva, A., Torralba, A.: Learning deep features for discriminative localization. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 2921–2929 (2016)
57.
Zhu, J.Y., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation using cycle-consistent adversarial networks. In: Proceedings of the IEEE international conference on computer vision. pp. 2223–2232 (2017)
58. Zhuang, X.: Multivariate mixture model for cardiac segmentation from multi-sequence mri. In: MICCAI (2016)
59. Zhuang, X.: Multivariate mixture model for myocardial segmentation combining multi-source images. IEEE Transactions on Pattern Analysis and Machine Intelligence 41(12), 2933–2946 (2019). https://doi.org/10.1109/TPAMI.2018.2869576
60. Zhuang, X., Shen, J.: Multi-scale patch and multi-modality atlases for whole heart segmentation of mri. Medical image analysis 31, 77–87 (2016)

ZScribbleSeg: Zen and the Art of Scribble Supervised Medical Image Segmentation
Ke Zhang and Xiahai Zhuang ⋆
School of Data Science, Fudan University, Shanghai
zxh@fudan.edu.cn

Abstract. Curating a large scale fully-annotated dataset can be both labour-intensive and expertise-demanding, especially for medical images.
To alleviate this problem, we propose to utilize solely scribble annotations for weakly supervised segmentation. Existing solutions mainly leverage selective losses computed solely on annotated areas and generate pseudo gold standard segmentation by propagating labels to adjacent areas. However, these methods could suffer from the inaccurate and sometimes unrealistic pseudo segmentation due to the insufficient supervision and incomplete shape features. Different from previous efforts, we first investigate the principle of "good scribble annotations", which leads to efficient scribble forms via supervision maximization and randomness simulation. Furthermore, we introduce regularization terms to encode the spatial relationship and shape prior, where a new formulation is developed to estimate the mixture ratios of label classes. These ratios are critical in identifying the unlabeled pixels for each class and correcting erroneous predictions, thus the accurate estimation lays the foundation for the incorporation of the spatial prior.
Finally, we integrate the efficient scribble supervision with the priors into a unified framework, denoted as ZScribbleSeg, and apply the method to multiple scenarios. Leveraging only scribble annotations, ZScribbleSeg sets new states of the art on four segmentation tasks using the ACDC, MSCMRseg, MyoPS and PPSS datasets.

Keywords: Medical Image Segmentation · Scribble Supervision · Mixture Model · Medical Image Analysis

In recent years, deep neural networks have demonstrated their potential on various visual tasks [25]. However, the success of these methods relies on massive annotations, which require assiduous manual efforts. For medical imaging, dense manual labeling can take several hours to annotate just one image for experienced doctors, which is both expensive and expertise-demanding [60]. Numerous efforts have contributed to the area of training segmentation networks with weaker annotations [39], including scribbles [27], bounding boxes [34], points [2], and image-level labels [35].
Numerous studies have been reported utilizing only image-level labels [15,46,50,45]. These methods mainly rely on large-scale training datasets, and tend to underperform on small medical image datasets. On the contrary, scribbles are suitable for labeling nested structures and easy to obtain in practice. Several works have demonstrated their potential on both semantic and medical image segmentation [17,21,27].

⋆ Xiahai Zhuang is the corresponding author. This work was funded by the National Natural Science Foundation of China (Grant No. 61971142 and 62111530195).

arXiv:2301.04882v1 [cs.CV] 12 Jan 2023
Therefore, we propose to investigate this specific form of weakly supervised segmentation, which only uses scribble annotations for model training.

Conventionally, scribble annotations are mainly focused on delineating the structures of interest [42]. This can be effective in segmenting regular structures, i.e., targets with fixed shape patterns. Hence, this task is also referred to as regular structure segmentation. However, such methods could be challenged when applied to portray irregular targets with heterogeneous distributions, such as pathologies. This is also referred to as irregular (object) segmentation, which is particularly challenging for medical tasks with small training datasets.
Existing scribble learning approaches mainly aim to reconstruct complete labels from scribbles, and use the generated pseudo labels for model training. These works include 1) label expansion strategies that assume pixels with similar features are likely to be in the same category [16,27], and 2) ensemble methods that generate labels by fusing several independent predictions [29]. These methods could be susceptible to the label noise introduced by imprecise segmentation proposals. To overcome this issue, Obukhov et al. proposed a regularization loss [32], which exploits the similarity between labeled and unlabeled areas. Adversarial learning has also been applied to scribble supervised segmentation [42], leveraging the shape prior provided by additional full annotations.

Scribble supervised segmentation generally suffers from inadequate supervision and imbalanced label classes.
This leads to poor results, typically under-segmentation of target structures, meaning the volumes of segmented structures tend to be shrunk, as we shall describe in Section 2.3.

To address the problem of inadequate supervision, we first investigate the principles of generating "good scribbles", as guidance for designing methodologies to augment supervision, as well as for generating manual annotations. The aim is to model efficient scribbles by maximizing the supervision without increasing annotation efforts. Our studies demonstrate that model training benefits from the randomness of widely distributed scribbles and a larger proportion of annotated areas. Inspired by this, we propose to simulate such types of scribble-annotated images as a means of supervision augmentation. This can be achieved via mixup and occlusion operations on existing training images, and the supervision augmentation is coupled with regularization terms penalizing any inconsistency in the segmentation results.
Besides the lack of supervision, scribble annotations typically have imbalanced annotated label proportions and thus biased shape information. This means the model cannot accurately capture the global shape of target structures. We therefore further propose to correct the problematic prediction using prior-based regularization, particularly from the spatial prior. This requires the preceding yet critical step of estimating the mixture proportion (ratio) of each label class (referred to as the π prior). We hence propose a new algorithm to compute this π prior, based on which we develop a spatial loss on the basis of the marginal probability of pixels belonging to certain label classes and the spatial energy. This spatial loss is a regularization term aimed at correcting the shape of segmentation results. The supervision augmentation and prior-based regularization work in a complementary way, and both contribute to stable and robust training on a variety of segmentation tasks.
The proposed scribble supervision-based segmentation method, referred to as ZScribbleSeg, extends and generalizes the algorithms in our two preliminary works [52,53], and has more scientific significance in the following aspects. Firstly, we investigate principles of efficient scribble forms to guide the supervision augmentation, which have never been reported to the best of our knowledge. Secondly, we leverage the spatial prior to adjust the predicted probability with computed spatial energy. Thirdly, we implement a series of extensive experiments on various scenarios, including irregular structure segmentation of medical pathology and visual object segmentation.

The contributions of this paper are summarized as follows.
– We propose a unified framework for scribble-supervised segmentation by modeling efficient scribbles, and correcting the network prediction with prior regularization, which significantly alleviates the problems of inadequate supervision and imbalanced label classes.
– To the best of our knowledge, this is the first work investigating the principles of scribble forms.
Motivated by the conclusion that the network benefits from larger and randomly distributed annotations, we model efficient scribbles by maximizing supervision and simulating randomness.
– We propose a novel mechanism to correct the shape of the model prediction based on prior regularization, including the π prior, spatial prior, and shape prior. A new algorithm is introduced to estimate the π prior, based on which we further encode the spatial relationship with a spatial prior loss.
– Our approach achieves state-of-the-art performance for weakly-supervised segmentation on regular structures from cardiac anatomical imaging, regular structures from pathology enhanced imaging, irregular objects of medical pathology, and human pose from natural scenes.

The rest of this paper is organized as follows: Section 2 briefly introduces the relevant research. In Section 3, we describe the modeling of efficient scribbles and the computation of priors. Section 4 presents the results of the efficiency, ablation, and validation studies.
Finally, we conclude this work in Section 5.

2 Related work

This section provides a brief review of weakly supervised segmentation methods. Besides, we describe the data augmentation strategies and regularization loss functions that are closely related to our work.

Fig. 1. Roadmap of the proposed ZScribbleSeg framework.

2.1 Weakly supervised segmentation

Recently, a variety of weakly supervised segmentation strategies have been developed to reduce manual annotation efforts [27,2,34,35]. Among them, scribbles are of particular interest for the application to medical image annotation, given their advantage in annotating nested structures compared to bounding boxes.
Current weakly supervised learning methods with image-level annotations mainly generate label seeds with the Class Activation Map (CAM) [56] first, and then train the network with refined pseudo labels. However, the training of CAM requires a large scale of training data labeled with rich visual classes, which is not practical in clinical applications. Therefore, we propose to investigate scribble supervised segmentation, due to its efficiency and effectiveness in both medical and visual scenarios.

A scribble is a form of sparse annotation that provides labels for a small subset of pixels in an image [39]. Previous approaches mainly calculate losses for annotated pixels. One group of works is designed to expand the annotations and reconstruct the full label for network training. However, the expansion of labels needs to be achieved through iterative computation, which is particularly time-consuming.
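A loss calculated only for annotated pixels, such as the partial cross entropy (Lpce) that appears later in this paper, can be sketched as follows. This is an illustrative NumPy version, not the paper's implementation; the IGNORE marker and array layout are assumptions, and a real implementation would operate on framework tensors with autograd.

```python
import numpy as np

IGNORE = -1  # marker for unannotated pixels (assumed convention; 255 is also common)

def partial_cross_entropy(probs, scribble, eps=1e-8):
    """Cross entropy averaged only over scribble-annotated pixels.

    probs    -- (C, H, W) array of per-class softmax probabilities
    scribble -- (H, W) integer label map, IGNORE where no scribble passes
    """
    ys, xs = np.nonzero(scribble != IGNORE)   # coordinates of annotated pixels
    if ys.size == 0:
        return 0.0                            # no supervision on this image
    labels = scribble[ys, xs]
    picked = probs[labels, ys, xs]            # predicted prob. of the annotated class
    return float(-np.mean(np.log(picked + eps)))

# Toy check: a 2-class, 2x2 map where only the top row carries scribble labels.
probs = np.array([[[0.9, 0.2], [0.5, 0.5]],
                  [[0.1, 0.8], [0.5, 0.5]]])      # (C=2, H=2, W=2)
scribble = np.array([[0, 1], [IGNORE, IGNORE]])
loss = partial_cross_entropy(probs, scribble)
```

Unlabeled pixels contribute nothing to the gradient, which is exactly why such losses give sparse, local supervision.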
To alleviate this, several works removed the relabeling process and instead adopted conditional random fields to refine the segmentation results [9,7,55,40]. However, the common issue is unstable model training caused by noisy pseudo labels. To obtain high-quality pseudo labels and update them throughout the training process, Luo et al.
[29] proposed to mix the predictions from a dual-branch network as an auxiliary pseudo label. This approach has achieved promising results on cardiac segmentation, but is still susceptible to inaccurate supervision, especially on more challenging tasks with irregular objects. Obukhov et al. [31] introduced the Gated CRF loss for unlabeled pixels, which regularizes model training by exploiting the structural similarity between labeled and unlabeled data. Other works [42,54] included a new module to evaluate the quality of segmentation masks, which encourages the predictions to be realistic, but requires extra full annotations.

2.2 Data augmentation

Augmentation methods are investigated to improve the model generalization ability, by synthesizing virtual training examples in the vicinity of the training dataset [6]. Common strategies include random cropping, rotation, flipping and adding noise [5].
Recently, a line of research has been proposed on Mixup augmentation [51,10,49,18,19], which blends two image-label pairs to generate new samples for classification tasks. Input Mixup [51] was introduced to perform linear interpolation between two images and their labels. Manifold Mixup [43] applied the Mixup operation to the feature space. Cutout [10] randomly occludes a square region of the image, and CutMix [49] transplants the occluded area to another image. Kim et al. [18] proposed Puzzle Mix, which leverages saliency and local statistics to facilitate image combination. Comixup [19] extended this concept from two images to multiple images. For medical image analysis, Mixup methods have been adopted for image segmentation [8] and object detection tasks [44].
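The linear interpolation performed by Input Mixup is simple enough to state directly. A minimal sketch (the Beta-distributed sampling of the coefficient with hyperparameter alpha follows the common convention and is an assumption here):

```python
import numpy as np

def input_mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Input Mixup: blend two image-label pairs by a Beta-sampled coefficient."""
    rng = rng if rng is not None else np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)        # mixing coefficient in (0, 1)
    x = lam * x1 + (1.0 - lam) * x2     # interpolated image
    y = lam * y1 + (1.0 - lam) * y2     # interpolated (soft) label
    return x, y, lam

# Mixing a black and a white patch with one-hot labels yields a grey patch
# and a soft label that still sums to one.
x, y, lam = input_mixup(np.zeros((4, 4)), np.array([1.0, 0.0]),
                        np.ones((4, 4)), np.array([0.0, 1.0]))
```

The same blend applied to pixel-wise label maps is what makes Mixup usable for segmentation as well as classification.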
Although the mixup operation may generate unrealistic samples, mixed soft labels can provide rich information and improve model performance on semi-supervised segmentation [8].

2.3 Regularization losses

Neural networks are used to perform pixel-wise image segmentation, typically trained with a cross entropy or Dice loss, which computes the loss for each pixel independently. To predict segmentations that are coherent in the global sense [22], several methods have been proposed to regularize the model training. Here, we focus on the consistency regularization and π prior regularization that are most relevant to our work.

The consistency regularization leverages the fact that perturbed versions of the same image patch should have consistent segmentations. A series of studies have been conducted on consistency regularization [57,23,41,33].
For semi-supervised learning, regularization is applied to augmented versions of the input image by requiring consistency, to obtain stable predictions for unlabeled images [23,41,33].

Fig. 2. Overview of the training losses for the proposed ZScribbleNet, which consists of the modeling of efficient scribbles and the computation of priors. The scribble modeling includes mixup augmentation, regularized with global consistency (Lglobal). There are three priors, i.e., class mixture ratios (π), spatial prior and shape prior, which contribute to the spatial prior loss (Lspatial) and shape prior loss (Lshape). Note that the spatial prior loss is complementary to the partial cross entropy loss (Lpce), which is solely calculated for labeled pixels.
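As a concrete instance of the consistency idea, one can require the prediction to be equivariant under a simple perturbation such as a horizontal flip, and penalize the disagreement as an unsupervised loss on unlabeled images. The flip choice and squared penalty below are illustrative assumptions, not the formulation of the cited methods:

```python
import numpy as np

def flip_consistency(predict, image):
    """Mean squared disagreement between the prediction on an image and the
    un-flipped prediction on its horizontally flipped version."""
    p = predict(image)
    p_flip = predict(image[:, ::-1])          # prediction on the flipped input
    return float(np.mean((p - p_flip[:, ::-1]) ** 2))

# A perfectly flip-equivariant predictor (here, the identity map) incurs zero
# penalty; a predictor that ignores its input does not.
img = np.random.default_rng(1).random((4, 4))
zero_pen = flip_consistency(lambda im: np.asarray(im), img)
pos_pen = flip_consistency(lambda im: np.tile(np.linspace(0, 1, 4), (4, 1)), img)
```

No labels enter the computation, which is what lets such terms exploit unlabeled pixels.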
The proposed regularization of the π prior is inspired by binary mixture proportion estimation [3,14,37], which was originally designed for binary (two-class) positive unlabeled learning [11,12,20]. For multi-class segmentation, the mixture ratios of the classes are both imbalanced and inter-dependent, which cannot be solved by existing binary estimation methods.

3 Method

3.1 Overview

Problem Setup: This work investigates the scenario of scribble supervised segmentation, where the training images are solely annotated with a small number of pixels, via scribbles, for each label class.

Strategy: Instead of solely focusing on techniques of weak supervision, we first investigate different forms of scribbles to derive principles of efficient scribbles, i.e., maximal supervision without increasing scribble efforts. These principles enable effective and robust model training with minimal annotation cost.
Then, we focus on tackling the major problem of under-segmentation, to correct the model prediction with priors.

Solution: We develop ZScribbleSeg consisting of (1) modeling efficient scribbles via supervision maximization and randomness simulation; (2) modeling and computation of priors, including the label class proportion prior, spatial prior and shape prior; and (3) integration to develop a deep neural network (referred to as ZScribbleNet) with losses of partial cross entropy (Lpce), global consistency (Lglobal), spatial prior (Lspatial) and shape regularization (Lshape), and a training strategy of supervision augmentation and prior regularization. Figure 1 presents the roadmap of the proposed framework.

3.2 Principle and modeling of efficient scribbles

We investigate the principles of efficient scribbles and derive the objective of maximizing supervision with minimal annotation efforts. This leads to the proposal of supervision augmentation. In addition, we propose a global consistency loss to penalize the non-equivalence in the augmentation.
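One way to read the global consistency idea, following the overview of Fig. 2 (mixed image vs. mixed segmentations): the segmentation of a mixed image should agree with the same mixture of the individual segmentations. The sketch below penalizes that non-equivalence; it is an illustrative interpretation of Lglobal, not the paper's exact formulation.

```python
import numpy as np

def global_consistency(predict, x1, x2, lam):
    """Penalty for non-equivalence under mixup:
    mean |f(mix(x1, x2)) - mix(f(x1), f(x2))|."""
    seg_of_mix = predict(lam * x1 + (1.0 - lam) * x2)           # segment the mixed image
    mix_of_segs = lam * predict(x1) + (1.0 - lam) * predict(x2) # mix the segmentations
    return float(np.mean(np.abs(seg_of_mix - mix_of_segs)))

# A linear predictor commutes with mixing (near-zero penalty); a nonlinear
# one does not, so the term carries training signal.
rng = np.random.default_rng(0)
a, b = rng.random((3, 3)), rng.random((3, 3))
lin = global_consistency(lambda x: 2.0 * x, a, b, 0.3)
nonlin = global_consistency(lambda x: x ** 2, a, b, 0.3)
```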
Principles of efficient scribbles. We shall verify the following two principles for achieving efficient scribble annotation, in terms of maximal supervision, through the experiments in Section 3.2: (1) a large proportion of pixels annotated by scribbles compared with the whole set; (2) randomness in the distribution of scribbles, represented by random and wide-range annotations.

Firstly, we are motivated by the knowledge that model training benefits from the finer gradient flow provided by a larger proportion of annotated pixels [39]. Therefore, we try to increase the annotation proportion with the same effort. One natural idea is to simply expand the width of the scribbles. However, this only increases the label amount in a local area and lacks the ability to enlarge the annotation range across the entire image.
Secondly, we are inspired by the fact that imaging data are easier to restore from random samples of pixels than from down-sampled low-resolution images with regular patterns [13]. This is because randomly and sparsely distributed samples maintain the global structure of the imaging data, which can therefore be restored with existing low-rank or self-similarity regularization terms. By contrast, regularly down-sampled low-resolution images have evidently reduced tensor ranks compared with the original high-resolution data, and thus lose the global structure information. Motivated by this, we assume that the features of the full segmentation (analogous to the global structure information) can be portrayed (restored) from sparse scribble annotations randomly and widely distributed within the entire dataset. With such scribble annotation, the segmentation network can easily learn the global shape prior. Based on the observations above, we propose to model efficient scribbles by supervision augmentation, simulating a large annotation proportion and randomness of the scribble distribution.
Modeling via supervision augmentation. We aim to generate training images with efficient scribbles by maximizing the supervision via mixup operations and achieving the randomness via occlusion operations. This resembles data augmentation, which increases data diversity and enables robust training.

Search optimal annotation with mixup: Motivated by the principles of efficient scribbles, we first seek the optimal scribble with a large annotated ratio, high supervision, and unchanged local features. To achieve this, instead of maximizing the annotations directly, we aim to maximize the saliency of the mixed images, which measures the sensitivity of the model to its inputs. Given that annotated areas tend to be accompanied by high saliency, maximizing saliency also increases the scribble annotations. For two image-scribble pairs $(X_1, Y_1)$, $(X_2, Y_2)$ of dimension $n$, we denote the resulting mixed image-label pair as $(X'_{12}, Y'_{12})$.
The transportation process is defined by:
$$X'_{12} = T(X_1, X_2) \quad \text{and} \quad Y'_{12} = T(Y_1, Y_2), \qquad (1)$$
$$T(X_1, X_2) = (1 - \beta) \odot \Pi_1 X_1 + \beta \odot \Pi_2 X_2, \qquad (2)$$
where $T(X_1, X_2)$ represents the transportation process between images $X_1$ and $X_2$; $\Pi_i$ denotes the transportation matrix of size $n \times n$ for image $X_i$; $\beta$ is the mask with values in $[0, 1]$ of dimension $n$; and $\odot$ is the element-wise multiplication. Then, we aim to maximize the saliency of the transportation result over the parameters $\{\Pi_1, \Pi_2, \beta\}$:
$$\{\Pi_1, \Pi_2, \beta\} = \arg\max_{\Pi_1, \Pi_2, \beta} \left[(1 - \beta) \odot \Pi_1 M(X_1) + \beta \odot \Pi_2 M(X_2)\right], \qquad (3)$$
where $M(X)$ denotes the saliency map of image $X$, obtained by computing the $\ell_2$ norm of the gradient values. We solve this optimization problem based on PuzzleMix [18]. To preserve the local statistical features, the optimization objective also includes the image local smoothness and the mixing weight prior.
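The masked mix of Eq. (2) and the saliency map $M(X)$ can be illustrated in a few lines of NumPy. This is a minimal sketch assuming flattened length-$n$ images and identity transport plans by default; the names `mix` and `saliency_map` are ours, and the joint optimization of Eq. (3) is delegated to PuzzleMix in the actual method.

```python
import numpy as np

def saliency_map(grad):
    # M(X): per-pixel l2 norm of the gradient values (the saliency used in Eq. (3))
    # grad has shape (channels, n); the norm is taken over channels.
    return np.linalg.norm(grad, axis=0)

def mix(x1, x2, beta, pi1=None, pi2=None):
    # T(X1, X2) = (1 - beta) * (Pi1 @ X1) + beta * (Pi2 @ X2)   (Eq. (2))
    # pi1/pi2 are n x n transport matrices; None means the identity (no reshuffling).
    t1 = x1 if pi1 is None else pi1 @ x1
    t2 = x2 if pi2 is None else pi2 @ x2
    return (1.0 - beta) * t1 + beta * t2
```

With `beta` all zeros the mix returns `x1` unchanged, and all ones returns `x2`; intermediate masks blend the two transported images element-wise.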
For details of the optimization objective, we refer readers to PuzzleMix [18] and Appendix A of the supplementary materials.

Introduce randomness via occlusion: We propose to simulate randomly distributed scribbles via occlusion. Specifically, one square area of the mixed image is randomly dropped and replaced with the background. Since the proportion of the background annotated by scribbles tends to be smaller than that of the foreground classes, the occlusion operation alleviates the imbalance of class mixture ratios within the labeled pixels and further improves the mixture ratio estimation, which will be elaborated in Section 2.3. We denote the occluded image-label pair as $(X''_{12}, Y''_{12})$, obtained by:
$$X''_{12} = (1 - 1_b) \odot X'_{12}, \qquad (4)$$
$$Y''_{12} = (1 - 1_b) \odot Y'_{12}, \qquad (5)$$
where $1_b$ denotes a rectangular mask of size $n \times n$ with values in $[0, 1]$. The rectangular mask is randomly rotated to occlude the mixed image and turns the occluded area into background. Following [49], we set the size of the rectangle to $32 \times 32$.

Fig. 3. Illustration of supervision augmentation and global consistency. Supervision maximization is achieved with the mix augmentation to increase the annotated proportion and data variety. Global consistency requires the segmentation results of the mixed image and the unmixed image to be consistent.

Global consistency loss: The objective of the global consistency regularization is to leverage the mix-invariant property. As Figure 3 shows, global consistency requires the same image patch to have consistent segmentation in two scenarios, i.e., the unmixed image and the mixed image. Let the segmentation result of image $X$ predicted by the network be $\hat{Y} = f(X)$. For the transported image $X'_{12} = T(X_1, X_2)$, the consistency of mixup is formulated as:
$$T(f(X_1), f(X_2)) = f(T(X_1, X_2)), \qquad (6)$$
which requires the segmentation of the mixed image to be consistent with the mixed segmentation after the same transportation process. When applying the occlusion operation, we further have:
$$(1 - 1_b) \odot T(\hat{Y}_1, \hat{Y}_2) = f\left((1 - 1_b) \odot T(X_1, X_2)\right). \qquad (7)$$
Then, we propose to minimize the distance between the two sides of Eq. (7). Let $u_{12} = (1 - 1_b) \odot T(\hat{Y}_1, \hat{Y}_2)$ and $v_{12} = f((1 - 1_b) \odot T(X_1, X_2))$. The negative cosine similarity $L_n(u_{12}, v_{12})$ is defined as:
$$L_n(u_{12}, v_{12}) = -\frac{u_{12} \cdot v_{12}}{\|u_{12}\|_2 \cdot \|v_{12}\|_2}. \qquad (8)$$
Taking the symmetric metric into consideration, we similarly penalize the inconsistency between $u_{21}$ and $v_{21}$. Therefore, the global consistency loss is formulated as:
$$L_{global} = \frac{1}{2}\left[L_n(u_{12}, v_{12}) + L_n(u_{21}, v_{21})\right]. \qquad (9)$$

Fig. 4. Illustration of the spatial prior loss ($L_{spatial}$) for correction of the prediction, via class mixture ratios ($\pi$) and the spatial prior (with spatial energy).

Discussion: Mixup operations could change the shape of the target structures, resulting in unrealistic images. To tackle this, as shown in Figure 3, we propose to combine the partial cross entropy (PCE) loss on the labeled pixels of both the mixed and unmixed images, and to leverage the mix equivalence to preserve shape consistency at the global level.
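The occlusion of Eqs. (4)–(5) and the consistency loss of Eqs. (8)–(9) can be sketched as follows. This is a minimal NumPy sketch with names of our own choosing, treating the segmentation outputs $u$ and $v$ as plain arrays; a training implementation would operate on framework tensors so that gradients flow through the loss.

```python
import numpy as np

def occlude(x, box_mask):
    # X'' = (1 - 1_b) ⊙ X'  (Eqs. (4)-(5)): the occluded rectangle becomes background (zero)
    return (1.0 - box_mask) * x

def neg_cosine(u, v, eps=1e-8):
    # L_n(u, v) = -(u . v) / (||u||_2 ||v||_2)   (Eq. (8))
    u, v = u.ravel(), v.ravel()
    return -float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v) + eps)

def global_consistency(u12, v12, u21, v21):
    # L_global = (1/2) [L_n(u12, v12) + L_n(u21, v21)]   (Eq. (9))
    return 0.5 * (neg_cosine(u12, v12) + neg_cosine(u21, v21))
```

When the mixed-then-segmented and segmented-then-mixed maps agree, each cosine term reaches its minimum of $-1$, so perfectly consistent predictions give $L_{global} = -1$.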
To further exploit the shape features, we propose to correct the network prediction guided by the computed priors, as described in Section 2.3.

2.3 Modeling and computation of prior

As shown in Figure 1, we model the class mixture ratios, the spatial prior, and the shape prior to better capture global shape information and regularize the network training. As visualized in Figure 4, we compute the spatial energy to reflect the probabilities of pixels belonging to each class. We propose a new formulation to estimate the critical prior of label class proportions, referred to as $\pi$, which guides the correction of erroneous network predictions.

Problem statement. The segmentation network trained with scribbles tends to generate under-segmented results for the target structures.
Considering that the annotated ratios of the classes can be imbalanced, scribble-supervised learning also brings challenges to the estimation of the class mixture ratios $\pi$.

Under segmentation: As shown in Figure 5, under segmentation refers to results where the size of the segmented structure is generally smaller than the ground truth, a phenomenon caused by the imbalanced annotated proportion and the missing shape information. To solve this problem, we propose to evaluate $\pi$ and the spatial prior, which are crucial for shape refinement. An accurate estimation of $\pi$ can correct the imbalanced label ratios and enable the model to adjust the size of the segmentation result. The computation of the spatial prior encodes the feature similarity between pixels and rectifies the shape of the target structures.

Fig. 5. Two examples of under segmentation, pointed to by the red arrows: (a) under-segmented foreground labels from ACDC segmentation, i.e., left ventricle and right ventricle; (b) under-segmented background from MyoPS segmentation.

We encode $\pi$ and the spatial prior with the spatial prior loss, by ranking the spatial energy and selecting the top $\pi$ ratio as the segmentation. To estimate $\pi$, we start from the imbalanced annotated ratios (referred to as $a$) and adapt them from the labeled pixels to the unlabeled pixels. Note that the problem of under segmentation can be even worse without the modeling of efficient scribbles.
In the case of manually annotated scribbles, the annotations may be distributed in a non-random pattern due to fixed labeling habits, resulting in a biased label distribution across the whole dataset. This problem can be alleviated by simulating randomly distributed labels through our proposed supervision augmentation.

Challenges of $\pi$ estimation: The evaluation of class mixture ratios is a critical bottleneck in semi-/weakly-/non-supervised learning, and serves as the basis of class identification [14] and variance reduction [47,38]. However, existing methods are mainly proposed for binary classification and cannot be adapted to the multi-class scenario directly. For the segmentation task, the class mixture ratios are both imbalanced and interdependent, degrading the performance of previous binary estimation approaches. Besides the class imbalance problem, scribble-supervised segmentation also faces an imbalance of the annotated class ratios.
For example, the annotated ratio of the background tends to be much smaller than that of the foreground classes. The imbalance of the annotated ratios further increases the difficulty of $\pi$ estimation.

Estimation of class mixture ratios $\pi$. To tackle the under segmentation, we propose to estimate the class mixture ratios within the unlabeled pixels.

Objective: We aim to determine the $\pi$ that maximizes the likelihood of the observed unlabeled pixels. For $n_u$ unlabeled pixels $x = [x_1, x_2, \cdots, x_{n_u}]$ sampled from $p_u(x)$, the likelihood of these unlabeled pixels is formulated as:
$$L(\pi) = \prod_{i=1}^{n_u} p_u(x_i) = \prod_{i=1}^{n_u} \left[\sum_{k=1}^{m} p_u(x_i|c_k)\, p_u(c_k)\right], \qquad (10)$$
where $p_u(x_i|c_k)$ represents the within-class probability of class $c_k \in \{c_0, \cdots, c_m\}$ for unlabeled pixel $x_i$. We assume the within-class probabilities of the labeled and unlabeled pixels to be unchanged. Then, we estimate $\pi = [p_u(c_1), p_u(c_2), \cdots, p_u(c_m)]$ to maximize the likelihood of the unlabeled observations in Eq. (10).

To maximize the likelihood in Eq. (10), we follow the EM algorithm in [24,30] and introduce the unknown variable $s = (s_1, s_2, \cdots, s_{n_u})$, where $s_i$ is a one-hot class-indicator vector of dimension $m$. Then, the likelihood $L(\pi|x, s)$ is written as:
$$L(\pi|x, s) = \prod_{i=1}^{n_u} \prod_{k=1}^{m} \left[p_u(x_i|c_k)\, p_u(c_k)\right]^{s_{ik}}. \qquad (11)$$
The log-likelihood $l(\pi|x, s)$ is derived as:
$$l(\pi|x, s) = \sum_{i=1}^{n_u} \sum_{k=1}^{m} s_{ik} \log(p_u(x_i|c_k)) + \sum_{i=1}^{n_u} \sum_{k=1}^{m} s_{ik} \log(p_u(c_k)). \qquad (12)$$

E-step: The E-step of the EM algorithm computes the expected value of $l(\pi|x, s)$ given the observations $x$ and the current estimate $\pi^{[t]}$:
$$Q(\pi|x, \pi^{[t]}) = E\left[l(\pi|s, x)\,|\,x, \pi^{[t]}\right] = \sum_{i=1}^{n_u} \sum_{k=1}^{m} E(s_{ik}|x_i, \pi^{[t]}_k) \log(p_u(x_i|c_k)) + \sum_{i=1}^{n_u} \sum_{k=1}^{m} E(s_{ik}|x_i, \pi^{[t]}_k) \log(p_u(c_k)), \qquad (13)$$
where $E(s_{ik}|x_i, \pi^{[t]}_k)$ is given by:
$$E(s_{ik}|x_i, \pi^{[t]}_k) = p(s_{ik} = 1|x_i, \pi^{[t]}_k) = p^{[t]}_u(c_k|x_i). \qquad (14)$$

Estimation of $p^{[t]}_u(c_k|x_i)$: To obtain the current estimate of $p^{[t]}_u(c_k|x_i)$, we adapt the posterior probability from the labeled pixels to the unlabeled pixels. For the labeled pixels, the posterior probability $p_l(c_k|x_i)$ is estimated by the model prediction.
Based on our assumption that the within-class probabilities of the labeled and unlabeled pixels are the same, for class $c_k$ and pixel $x_i$ we have:
$$p_u(x_i|c_k) = p_l(x_i|c_k). \qquad (15)$$
Based on Bayes' theorem, the within-class probabilities of a labeled pixel, $\hat{p}_l(x_i|c_k)$, and an unlabeled pixel, $\hat{p}_u(x_i|c_k)$, are written as:
$$\hat{p}_l(x_i|c_k) = \frac{\hat{p}_l(c_k|x_i)\,\hat{p}_l(x_i)}{\hat{p}_l(c_k)}, \qquad (16)$$
$$\hat{p}_u(x_i|c_k) = \frac{\hat{p}_u(c_k|x_i)\,\hat{p}_u(x_i)}{\hat{p}_u(c_k)}. \qquad (17)$$
By substituting $\hat{p}_u(x_i|c_k)$ in Eq. (17) and $\hat{p}_l(x_i|c_k)$ in Eq. (16) into Eq. (15), we adapt the within-class probabilities from the labeled pixels to the unlabeled pixels as follows:
$$\hat{p}_u(c_k|x_i) = \frac{\hat{p}_l(x_i)}{\hat{p}_u(x_i)} \cdot \frac{\hat{p}_u(c_k)}{\hat{p}_l(c_k)}\, \hat{p}_l(c_k|x_i). \qquad (18)$$
For binary estimation, the mixture ratio is estimated independently for each class, which does not leverage the inter-relationship between classes. For multi-class segmentation, we naturally utilize the condition that the probabilities of all classes sum to 1, i.e.,
$$\sum_{k=0}^{m} \hat{p}_u(c_k|x_i) = 1. \qquad (19)$$
By combining Eq. (18) and Eq. (19), one can obtain:
$$1 = \frac{\hat{p}_l(x_i)}{\hat{p}_u(x_i)} \sum_{k=0}^{m} \frac{\hat{p}_u(c_k)}{\hat{p}_l(c_k)}\, \hat{p}_l(c_k|x_i). \qquad (20)$$
Then, $\hat{p}_l(x_i)/\hat{p}_u(x_i)$ is represented as:
$$\frac{\hat{p}_l(x_i)}{\hat{p}_u(x_i)} = \left[\sum_{k=0}^{m} \hat{p}_u(c_k)\,\hat{p}_l(c_k|x_i)/\hat{p}_l(c_k)\right]^{-1}. \qquad (21)$$
By substituting $\hat{p}_l(x_i)/\hat{p}_u(x_i)$ into Eq. (18), we obtain the formulation of $\hat{p}_u(c_k|x_i)$ as follows:
$$\hat{p}_u(c_k|x_i) = \frac{\hat{p}_u(c_k)\,\hat{p}_l(c_k|x_i)/\hat{p}_l(c_k)}{\sum_{k=0}^{m}\left[\hat{p}_u(c_k)\,\hat{p}_l(c_k|x_i)/\hat{p}_l(c_k)\right]}. \qquad (22)$$
Therefore, the current estimate of the posterior probability $\hat{p}_u(c_k|x_i)$ is written as:
$$\hat{p}^{[t]}_u(c_k|x_i) = \frac{\pi^{[t]}_k\, \hat{p}_l(c_k|x_i)/\hat{p}_l(c_k)}{\sum_{k=0}^{m}\left[\pi^{[t]}_k\, \hat{p}_l(c_k|x_i)/\hat{p}_l(c_k)\right]}, \qquad (23)$$
where $\hat{p}_l(c_k)$ is empirically evaluated by the class frequency within the labeled pixels, i.e., $\hat{p}_l(c_k) = n^k_l / n_l$.

M-step: The M-step maximizes $Q(\pi|x, \pi^{[t]})$ in Eq. (13), i.e.,
$$\pi^{[t+1]} := \arg\max_{\pi} Q(\pi|x, \pi^{[t]}). \qquad (24)$$
We empirically solve $\pi^{[t+1]}_k$ as:
$$\pi^{[t+1]}_k = \frac{1}{n_u} \sum_{i=1}^{n_u} p^{[t]}_u(c_k|x_i). \qquad (25)$$
$\pi^{[t]}_k$ is initialized with the class frequency within the labeled pixels $a$, with $a_k = n^k_l / n_l$. Then, the E-step of Eq. (13) and the M-step of Eq. (25) are repeated until the estimation of $\pi$ converges. The posterior probability $\hat{p}_u(c_k|x_i)$ and the prior probability $\hat{p}_u(c_k)$ are re-estimated in each iteration.

Discussion: There are two conditions for the proposed algorithm.
Firstly, we assume that the within-class probabilities of the labeled and unlabeled pixels are the same, which means the labeled pixels should be randomly sampled based on classes. Secondly, \pi is initialized with the class frequency of the labeled pixels a. Since the annotated ratio of the background is smaller than that of the foreground classes, the prior probabilities of the foreground classes within the unlabeled pixels tend to be over-estimated. The first problem can be tackled by modeling efficient scribbles, to achieve a random distribution of annotations. For the second problem, by randomly occluding the image and replacing the occluded area with background, we are able to increase the ratio of background and alleviate this problem to some extent. Furthermore, we propose to address it with the marginal probability maximization, which will be explained in Section 2.3.
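The E-/M-steps above can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's implementation: the function name and its inputs (per-pixel posteriors \hat{p}_l(c_k|x_i) for the unlabeled pixels, e.g. softmax outputs, and labeled-pixel class frequencies \hat{p}_l(c_k)) are assumptions made for the sketch.

```python
import numpy as np

def estimate_mixture_ratios(p_l_given_x, p_l_class, n_iters=100, tol=1e-6):
    """EM estimate of the class mixture ratios pi over unlabeled pixels.

    p_l_given_x: (n_u, K) posteriors p_l(c_k|x_i) for the unlabeled pixels
    p_l_class:   (K,) class frequencies p_l(c_k) from the labeled pixels
    """
    ratio = p_l_given_x / p_l_class        # p_l(c_k|x_i) / p_l(c_k)
    pi = p_l_class.copy()                  # init with labeled class frequencies a_k
    for _ in range(n_iters):
        # E-step (Eq. 23): posterior over unlabeled pixels under current pi
        w = pi * ratio
        p_u = w / w.sum(axis=1, keepdims=True)
        # M-step (Eq. 25): average posterior over unlabeled pixels
        pi_new = p_u.mean(axis=0)
        if np.abs(pi_new - pi).max() < tol:
            return pi_new
        pi = pi_new
    return pi
```

For instance, with balanced labeled frequencies and three of four unlabeled pixels confidently assigned to the first class, the estimate converges to ratios of 0.75 and 0.25.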
Computation of spatial energy: Given the estimated class mixture ratios, we aim to identify the unlabeled pixels by determining the probability of each pixel belonging to each class. Instead of using the model predictions directly, we further encode the spatial relationship to compensate for the inaccurate results generated by the segmentation network. Inspired by [31], we estimate the spatial energy of unlabeled pixels with an energy term in a dense setting. Firstly, we use Gaussian kernels G_{ij} to measure the distance between the pixels at positions i and j:
\[
G_{ij} = \exp\Big( -\frac{(p_i - p_j)^2}{2\sigma_p^2} - \frac{(o_i - o_j)^2}{2\sigma_o^2} \Big), \tag{26}
\]
where p_i represents the position of pixel x_i; o_i denotes its color feature; \sigma_p and \sigma_o are the bandwidth parameters for the position and color information, respectively. Shallow features such as color and position are specific to the pixel and do not rely on the network prediction.
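The kernel of Eq. (26) can be sketched as a pairwise affinity matrix over all pixels; the function name and the flattened (N, 2) position / (N, C) color inputs are assumptions of this sketch, not the paper's code.

```python
import numpy as np

def gaussian_affinity(pos, color, sigma_p, sigma_o):
    """Pairwise affinities G_ij of Eq. (26) for N pixels.

    pos:   (N, 2) pixel coordinates p_i
    color: (N, C) color/intensity features o_i
    """
    # squared distances (p_i - p_j)^2 and (o_i - o_j)^2 for all pairs
    dp = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(-1)
    do = ((color[:, None, :] - color[None, :, :]) ** 2).sum(-1)
    return np.exp(-dp / (2 * sigma_p ** 2) - do / (2 * sigma_o ** 2))
```

The resulting matrix is symmetric with ones on the diagonal; affinity decays with both spatial and color distance.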
Then, the energy term \varphi_{ij} leveraging the prediction \hat{y} is formulated as:
\[
\varphi_{ij}(\hat{y}) = G_{ij}\, \hat{y}_i \hat{y}_j, \tag{27}
\]
which denotes the pairwise relationship between two pixels. This energy term connects every pixel with every other pixel within one image. Based on \varphi_{ij}, we define the elements of the spatial energy \Phi in a dense setting, i.e.,
\[
\Phi_i(\hat{y}) = \sum_{j \in \Omega_i} \varphi_{ij}(\hat{y}), \tag{28}
\]
where \Omega_i = \{ j : \|Pos(i) - Pos(j)\| \le r \} is the neighborhood window of radius r. Instead of taking the total energy as the regularization loss as in [31], we consider \Phi as the spatial energy reflecting the relative probability of each pixel belonging to each class.

Spatial prior and shape prior losses: The spatial prior loss is computed by ranking the spatial energy and selecting the top \pi proportion of pixels as the segmentation.
Considering that adjusting multiple structures directly can be challenging, we instead separate each foreground class from the others, and then tackle the individual structure. Given that the mixture ratios of the foreground classes tend to be over-estimated, we instead leverage the accurate negative pixels filtered by the estimated mixture ratios, and maximize the marginal probability of these pixels belonging to the other classes. Firstly, by ranking the spatial energy and applying the mixture ratio of each class, we are able to distinguish the negative pixels from the unlabeled pixels. For a foreground class c_k, we rank the unlabeled pixels according to the spatial energy \Phi_k of class c_k in Eq. (28). Given the estimated mixture ratio \pi_k, we set the pixels in the top \pi_k proportion to be the positive samples \Omega_k. Correspondingly, the remaining pixels are taken as the negative pixels, denoted as \bar{\Omega}_k. Taking the over-estimated \pi_k into account, we believe the set of negative pixels \bar{\Omega}_k is more accurate than \Omega_k.
Secondly, we design the spatial prior loss (L_{spatial}) based on maximizing the marginal probability of the negative samples \bar{\Omega}_k belonging to the other classes. For each class c_k, we take it as the foreground and fuse all classes except c_k into the background. The fused class is denoted as \bar{c}_k. For a pixel x_i in \bar{\Omega}_k, its marginal probability of belonging to \bar{c}_k equals the sum of the probabilities of the fused classes, i.e., \hat{p}(\bar{c}_k \mid x_i, x_i \in \bar{\Omega}_k) = \sum_{k'=1}^{m} \mathbb{1}[k' \ne k]\, \hat{p}(c_{k'} \mid x_i). To maximize the marginal probability of a negative pixel x_i belonging to \bar{c}_k, we formulate the spatial prior loss as:
\[
L_{spatial} = - \sum_{k=1}^{m} \sum_{x_i \in \bar{\Omega}_k} \log\big( \hat{p}(\bar{c}_k \mid x_i) \big). \tag{29}
\]
The shape prior loss is developed to regularize inter-connected structures in the segmentation results. This loss is adopted to further reduce noise and smooth the boundary.
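The negative-pixel selection and the marginal-probability loss of Eq. (29) can be sketched as below. This is a simplified sketch under stated assumptions: class 0 is the background, the inputs are dense arrays of per-class spatial energies and predicted probabilities, and the marginal probability of the fused class \bar{c}_k is computed as 1 - \hat{p}(c_k|x_i) since the probabilities sum to one; names are hypothetical.

```python
import numpy as np

def spatial_prior_loss(energy, probs, pi, eps=1e-8):
    """Spatial prior loss of Eq. (29) over unlabeled pixels.

    energy: (n_u, K) spatial energy Phi_k per class (class 0 = background)
    probs:  (n_u, K) predicted class probabilities p(c_k|x_i), rows sum to 1
    pi:     (K,) estimated mixture ratios
    """
    n_u, K = probs.shape
    loss = 0.0
    for k in range(1, K):                      # foreground classes only
        # positives: top pi_k proportion of pixels by energy for class k;
        # the rest form the negative set \bar{Omega}_k
        n_pos = int(round(pi[k] * n_u))
        order = np.argsort(-energy[:, k])      # descending spatial energy
        neg = order[n_pos:]
        # marginal probability of belonging to any class other than k
        p_not_k = 1.0 - probs[neg, k]
        loss -= np.log(p_not_k + eps).sum()
    return loss
```

When the negative pixels are already predicted as other classes the loss is near zero; predicting them as class k drives it up.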
It requires the model prediction to be consistent with its maximum connected area, and minimizes their cross entropy loss, i.e.,
\[
L_{shape} = - \sum_{k \in \Psi} F(\hat{Y}_k) \log(\hat{Y}_k), \tag{30}
\]
where \Psi is the set of label classes with inter-connected structures; F(\cdot) denotes the morphological function, which outputs the largest inter-connected area of the input label.

2.4 ZScribbleNet
ZScribbleSeg is achieved via a deep neural network referred to as ZScribbleNet. ZScribbleNet does not depend on any particular network architecture, and can be directly applied to any CNN backbone. For all experiments, we adopt a variant of UNet [1] as the backbone of the segmentation network. As Figure 2 shows, two images are mixed together to perform the supervision augmentation. Then, ZScribbleNet takes the mixed and unmixed images as input, and outputs their segmentation results.

Table 1. Efficiency analysis of scribble forms for regular structure segmentation of cardiac ventricles (ACDC dataset) and irregular segmentation of myocardial pathology (MyoPS dataset). Here, Nscribble and Npix respectively denote the number of manual draws to generate scribble annotations and the number of annotated pixels, which indicate the annotation efforts; k is the number of manual draws (scribbles) and n is the given threshold of annotation efforts, where k << n. Segmentation results are evaluated on the test set and reported in Dice scores.

Methods        Nscribble  Npix | Structural segmentation                   | Irregular segmentation
                               | LV         MYO        RV         Avg      | Scar       Edema      Avg
Points         n          n    | .876±.134  .801±.089  .858±.081  .845±.107| .551±.246  .638±.115  .595±.194
Skeleton       k          n    | .805±.145  .737±.095  .769±.128  .770±.126| .504±.213  .057±.022  .281±.271
Random walk    k          n    | .798±.173  .698±.153  .753±.157  .744±.165| .516±.284  .529±.123  .522±.184
DirRandomWalk  k          n    | .844±.143  .755±.102  .798±.173  .799±.146| .539±.217  .637±.108  .588±.176
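A minimal sketch of the morphological function F(\cdot) and the shape prior term of Eq. (30) for a single class: here F is implemented as a BFS over 4-connected components of the thresholded map, and the 0.5 hard threshold is an illustrative assumption, not a detail given in the paper.

```python
import numpy as np
from collections import deque

def largest_connected_component(mask):
    """F(.): largest 4-connected True region of a 2D binary mask."""
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    best = np.zeros_like(mask, dtype=bool)
    for si in range(h):
        for sj in range(w):
            if mask[si, sj] and not seen[si, sj]:
                comp, q = [], deque([(si, sj)])
                seen[si, sj] = True
                while q:                       # BFS flood fill of one component
                    i, j = q.popleft()
                    comp.append((i, j))
                    for ni, nj in ((i-1, j), (i+1, j), (i, j-1), (i, j+1)):
                        if 0 <= ni < h and 0 <= nj < w and mask[ni, nj] and not seen[ni, nj]:
                            seen[ni, nj] = True
                            q.append((ni, nj))
                if len(comp) > best.sum():     # keep the largest component found
                    best = np.zeros_like(mask, dtype=bool)
                    for i, j in comp:
                        best[i, j] = True
    return best

def shape_prior_loss(prob_map, eps=1e-8):
    """Shape prior term of Eq. (30) for one class: cross entropy between the
    predicted probability map and the largest connected area of its hard mask."""
    target = largest_connected_component(prob_map > 0.5).astype(float)
    return -(target * np.log(prob_map + eps)).sum()
```

A confident single-blob prediction incurs near-zero loss, while isolated satellite blobs are excluded from the target and thus penalized.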
For model training, the images and their scribble annotations are sampled to estimate the training objective L, which is formulated as:
\[
L = L_{pce} + \underbrace{\lambda_1 L_{global} + \lambda_2 L_{spatial} + \lambda_3 L_{shape}}_{unsup}, \tag{31}
\]
where L_{pce} is the partial cross entropy loss calculated over the annotated pixels in the unmixed and mixed images; the global consistency loss L_{global} in Eq. (9) requires the mix equivalence for the supervision augmentation; the spatial prior loss L_{spatial} in Eq. (29) encodes the \pi prior and the spatial prior; the shape regularization loss L_{shape} in Eq. (30) leverages the shape prior; \lambda_1, \lambda_2, \lambda_3 are hyper-parameters balancing the relative importance of the different loss components.
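The partial cross entropy term and the weighted combination of Eq. (31) can be sketched as follows. The label convention (-1 for unlabeled pixels) and the default \lambda values are assumptions of this sketch; the paper does not fix them here.

```python
import numpy as np

def partial_cross_entropy(probs, labels, eps=1e-8):
    """L_pce: cross entropy over annotated pixels only (label -1 = unlabeled)."""
    idx = labels >= 0                                   # scribble-annotated pixels
    return -np.log(probs[idx, labels[idx]] + eps).sum()

def total_loss(l_pce, l_global, l_spatial, l_shape, lam1=1.0, lam2=1.0, lam3=1.0):
    """Training objective of Eq. (31): supervised term plus weighted unsupervised terms."""
    return l_pce + lam1 * l_global + lam2 * l_spatial + lam3 * l_shape
```

Unlabeled pixels contribute nothing to L_pce; they are supervised only through the unsupervised terms.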
In the training phase, we warm-started the networks with the partial cross entropy loss L_{pce}, the global consistency loss L_{global}, and the shape regularization loss L_{shape} for 100 epochs, and then invoked the spatial loss L_{spatial}. In the testing phase, the trained network predicted the segmentation results of the input image directly.

3 Experiments and Results
We first investigated a variety of scribble forms and analyzed the principles of efficient scribbles in Section 3.2. Then, we performed an ablation study of the proposed ZScribbleSeg in Section 3.3. Finally, we demonstrated the performance of ZScribbleSeg with comparisons to other state-of-the-art methods on various segmentation tasks using four open datasets in Section 3.4.

Fig. 6. Performance of segmentation networks trained by the Points scribble form with different numbers of pixels Npix, with comparisons to fully supervised models (FullSupUNet): (a) and (c) visualize the Dice scores with respect to different Npix on ACDC and MyoPS, respectively. The performance of models trained by the Random walk form with increasing step length l, compared with models trained by DirRandWalk: (b) and (d) show the Dice scores of segmentation on ACDC and MyoPS, respectively, given Npix = n. [Figure panels (a)-(d) omitted: plots of Dice score versus the number of annotated pixels and versus the step size of the random walk.]

3.1 Materials
Tasks and datasets: Our validation included four segmentation tasks: (1) regular structure segmentation of cardiac ventricles from anatomical imaging using the ACDC dataset, (2) regular structure segmentation from pathology enhanced imaging with a smaller training size using the MSCMRseg dataset, (3) irregular object segmentation of myocardial pathology from multi-modality imaging using the MyoPS dataset, and (4) human pose segmentation from natural scene images using the PPSS dataset.

ACDC dataset was from the MICCAI'17 Automatic Cardiac Diagnosis Challenge [4]. This dataset consists of short-axis cardiac images using an anatomical MRI sequence (BSSFP) from 100 patients, with gold standard segmentations of the cardiac ventricular structures, including the left ventricle blood cavity (LV), left ventricle myocardium (MYO), and right ventricle blood cavity (RV). For the experiments, we randomly divided the 100 subjects into a training set of 70 subjects, a validation set of 15 subjects (particularly for the ablation study), and a test set of 15 subjects.

MSCMRseg was from the MICCAI'19 Multi-sequence Cardiac MR Segmentation Challenge [59,58], consisting of images from 45 patients with cardiomyopathy and the gold standard segmentations of LV, MYO and RV. We employed the 45 images of late gadolinium enhanced (LGE) MRI to evaluate the segmentation of ventricular structures. Following [48], we divided the 45 images into three sets of 25 (training), 5 (validation), and 15 (test) images for all experiments. Note that this structure segmentation is more challenging than that on ACDC due to its smaller training set and pathology enhanced images.
MyoPS dataset was from the MICCAI'20 Myocardial pathology segmentation Challenge [26], consisting of paired images of BSSFP, LGE and T2 cardiac MRI from 45 patients. The task was to segment the myocardial pathologies, including scar and edema, which do not have regular shape or structure; their segmentation thus represents a different task from the regular structure segmentation. Following the benchmark study [26], we split the data into 20 pairs of training set, 5 pairs of validation set and 20 pairs of test set.

PPSS refers to the Pedestrian Parsing on Surveillance Scenes (PPSS) dataset [28]. We employed the task of human pose segmentation to validate the generalizability of models on natural scene images. PPSS is a large-scale human parsing dataset including 3673 annotated samples from 171 surveillance videos. The ground truth segmentation of eight classes, including hair, face, upper clothes, arms, lower clothes, legs, shoes, and background, was provided.
We used the first 100 surveillance scenes for training and the remaining 71 videos for test.

Evaluation metrics. For experiments on ACDC, MSCMRseg and MyoPS datasets, we reported the Dice score and Hausdorff Distance (HD) on each foreground class separately, following the practice of medical image segmentation. On PPSS dataset, we measured the multi-class Dice scores following [42], where Dice = 2|ŷ ∩ y| / (|ŷ| + |y|), and ŷ and y denote the multi-channel prediction and ground truth label, respectively.

Pre-processing and implementation. The two-dimensional slices from ACDC and MSCMR datasets were of different resolutions. Hence, we first re-sampled all images into a fixed resolution of 1.37 × 1.37 mm and then extracted the central patch of size 212 × 212 for experiments. For MyoPS, we took the paired slices of BSSFP, LGE, and T2 CMR and cropped their center patches of size 192 × 192 for experiments.
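The multi-class Dice score above can be sketched as follows. This is a minimal NumPy version for integer-valued label maps, not the authors' code; the toy prediction and label arrays are illustrative.

```python
import numpy as np

def dice_score(pred_mask, label_mask, eps=1e-7):
    # Dice = 2|pred ∩ label| / (|pred| + |label|) for one binary class mask
    inter = np.logical_and(pred_mask, label_mask).sum()
    return 2.0 * inter / (pred_mask.sum() + label_mask.sum() + eps)

def multiclass_dice(pred, label, num_classes):
    # average the per-class Dice over the foreground classes (class 0 = background)
    scores = [dice_score(pred == c, label == c) for c in range(1, num_classes)]
    return float(np.mean(scores))

# toy 3x3 label maps with two foreground classes
pred = np.array([[0, 1, 1],
                 [0, 2, 2],
                 [0, 0, 2]])
label = np.array([[0, 1, 1],
                  [0, 1, 2],
                  [0, 0, 2]])
score = multiclass_dice(pred, label, num_classes=3)  # ≈ 0.8 for this example
```

A perfect prediction yields a score of 1; the small eps only guards against empty masks.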
We normalized the intensity of these medical images to be zero mean and unit variance. For PPSS dataset, we first re-sampled all images into the same resolution, and then padded the images to the size of 160 × 160. The intensities of images were normalized to a range between 0 and 1. For random occlusion, a square area of 32 × 32 was randomly occluded for each image. For the estimation of spatial energy, we adopted Gaussian kernels with color bandwidth σo = 0.1, position bandwidth σp = 6, and kernel radius r = 5. The hyper-parameters λ1, λ2, λ3 in Eq. (31) were empirically set to be 0.05, 1, and 1, respectively.
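The central-patch extraction and zero-mean, unit-variance normalization described above can be illustrated with a short NumPy sketch. The patch size follows the ACDC setting (212 × 212); the function names and the synthetic input slice are ours, not the paper's code.

```python
import numpy as np

def center_crop(img, size):
    # extract the central patch of shape (size, size); assumes img is at least that large
    h, w = img.shape
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

def normalize(img, eps=1e-8):
    # rescale intensities to zero mean and unit variance
    return (img - img.mean()) / (img.std() + eps)

rng = np.random.default_rng(0)
slice_2d = rng.normal(loc=100.0, scale=20.0, size=(256, 256))  # stand-in for a re-sampled slice
patch = normalize(center_crop(slice_2d, 212))                  # 212 x 212, zero mean, unit variance
```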
All models were trained with a batch size of 4, learning rate of 1e-4, and augmentation of flipping and random rotation. We implemented our models with PyTorch. All models were trained on one NVIDIA 3090Ti 24GB GPU for 1000 epochs.

Table 2. Results in Dice scores and Hausdorff Distance (HD) of the ablation study using ACDC dataset, where the models were evaluated on the validation set. Note that model #6 is ZScribbleSeg. Bold denotes the best result, and underline indicates the best but one in each category.

Results in Dice
           Lpce  Efficiency  Lshape  Lglobal  Lspatial  LV         MYO        RV         Avg
model #1    ✓        ×         ×       ×        ×      .863±.089  .804±.063  .774±.150  .813±.112
model #2    ✓        ✓         ×       ×        ×      .870±.100  .833±.063  .843±.076  .848±.082
model #3    ✓        ×         ✓       ×        ×      .915±.068  .871±.056  .871±.058  .886±.064
model #4    ✓        ✓         ×       ✓        ×      .920±.064  .868±.051  .886±.051  .891±.059
model #5    ✓        ×         ×       ×        ✓      .923±.078  .869±.051  .889±.056  .894±.066
model #6    ✓        ✓         ✓       ✓        ✓      .929±.057  .876±.051  .892±.049  .899±.056

Results in HD (mm)
           Lpce  Efficiency  Lshape  Lglobal  Lspatial  LV            MYO          RV           Avg
model #1    ✓        ×         ×       ×        ×      81.86±40.40   65.97±33.62  60.91±44.62  69.58±40.37
model #2    ✓        ✓         ×       ×        ×      119.78±19.14  23.90±17.32  52.38±23.40  65.35±45.06
model #3    ✓        ×         ✓       ×        ×      4.45±5.39     15.24±23.90  25.78±22.44  15.16±20.89
model #4    ✓        ✓         ×       ✓        ×      12.12±18.26   29.41±24.56  16.97±15.62  19.50±20.94
model #5    ✓        ×         ×       ×        ✓      28.95±36.57   44.77±34.69  7.51±5.34    27.08±32.76
model #6    ✓        ✓         ✓       ✓        ✓      6.09±8.53     11.14±14.53  8.86±5.88    8.70±10.40

3.2 Efficiency of scribble forms

In this study, we first compared four scribble forms to illustrate the efficacy of randomly annotated scribbles for supervision.
Denoting the number of annotated pixels using a manual and skeleton-wise scribble form as n, we generated other scribble forms with the same annotated ratios for a fair comparison. Then, we studied the performance of segmentation with respect to the number of pixels annotated using a random and wide-range scribble form, by setting the number of annotated pixels to different multiples of n. Finally, we further explored variants of random walk annotations to show the importance of wide range in the random distribution of scribbles. We applied two segmentation tasks, i.e., regular structure segmentation of the cardiac ventricles on ACDC dataset and irregular segmentation of myocardial pathologies using MyoPS dataset. To compare the supervision of scribble forms directly, we trained all models with partial cross entropy (PCE) loss calculated for annotated pixels from scribbles.
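The partial cross-entropy idea, cross-entropy evaluated only on scribble-annotated pixels while all other pixels are ignored, can be sketched in NumPy as follows. The `ignore` label convention, function name, and toy arrays are our assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def partial_ce(logits, scribble, ignore=255):
    # logits: (C, H, W) raw class scores; scribble: (H, W) integer labels,
    # where `ignore` marks pixels without a scribble annotation
    z = logits - logits.max(axis=0, keepdims=True)               # stable log-softmax
    log_prob = z - np.log(np.exp(z).sum(axis=0, keepdims=True))
    mask = scribble != ignore                                    # annotated pixels only
    lp = log_prob[:, mask]                                       # (C, N)
    labels = scribble[mask]                                      # (N,)
    return float(-lp[labels, np.arange(labels.size)].mean())

logits = np.zeros((2, 2, 2))
logits[1, 0, 0] = 10.0                  # confident class-1 score at pixel (0, 0)
scribble = np.array([[1, 255],
                     [255, 0]])         # only two pixels carry scribble labels
loss = partial_ce(logits, scribble)     # unannotated pixels contribute nothing
```

Changing the logits at an ignored pixel leaves the loss unchanged, which is exactly the property that makes scribble supervision cheap.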
All experiment results were reported on the test set.

Scribble forms. One can measure the efforts of scribble annotations from two perspectives, i.e., the number of manual draws to generate scribble annotations

Table 3. Results and comparisons of regular structure segmentation on ACDC dataset. These models were evaluated on the test set.

Methods         Dice                                          HD (mm)
                LV         MYO        RV         Avg          LV           MYO          RV           Avg
PCE             .805±.145  .737±.095  .769±.128  .770±.126    62.55±36.04  68.30±27.77  59.62±42.62  63.40±35.76
WSL4 [29]       .835±.164  .825±.032  .787±.191  .792±.166    16.48±16.01  24.48±22.74  18.21±11.30  19.72±17.67
GatedCRF [31]   .846±.157  .744±.108  .822±.111  .804±.135    37.38±46.37  22.30±15.72  20.88±11.85  26.85±30.03
MAAG [42]       .879       .817       .752       .816         25.23        26.83        22.73        24.93
CVIR [14]       .866±.127  .797±.102  .737±.130  .800±.130    47.51±50.82  10.70±8.39   14.39±9.00   24.20±34.17
nnPU [20]       .862±
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='134 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='792±.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='124 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='829±.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='102 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='828±.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='123 67.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='28±48.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='60 18.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='60±17.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='93 14.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='64±8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='39 33.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='51±38.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='43 CycleMix [52] .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='876±.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='096 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='794±.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='083 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='829±.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='099 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='833±.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='098 16.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='60±19.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='90 18.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='04±17.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='78 19.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='09±21.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='44 17.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='91±19.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='57 ShapePU [53] .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='885±.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='103 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='806±.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='096 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='851±.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='089 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='848±.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='100 20.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='17±22.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='40 41.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='81±33.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='40 20.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='06±26.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='43 27.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='35±29.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='33 ZScribbleSeg .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='900±.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='065 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='825±.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='069 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='862±.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='102 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='862±.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='086 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='69±6.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='94 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='93±6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='40 12.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='74±12.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='48 9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='79±9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='19 FullSupUNet .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='882±.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='123 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='824±.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='099 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='856±.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='112 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='854±.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='113 11.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='94±13.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='58 12.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='65±12.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='52 14.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='82±9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='69 13.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='14±11.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='97 (Nscribble) and number of annotated pixels (Npix).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' Given the certain amount of efforts, we designed four forms following different generation procedures, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='e.' 
, (1) Skeleton, (2) Random walk, (3) Directed random walk (DirRandomWalk), and (4) Points, and compared the segmentation performance of models trained using such scribble annotations for supervision. The details of the scribble forms are described below.

Skeleton indicates the widely adopted scribble form drawn by a rater, who approximately outlines the shape of each label class within the segmentation mask. For a segmentation task with k label classes, including the background, one needs k manual draws (scribbles) for a training image. For the ACDC dataset, we adopted the manually annotated skeleton scribbles released by [42]; for the pathologies in the MyoPS dataset, we generated the skeleton scribbles automatically using the skeletonization algorithm [36]. We refer the reader to Appendix B of the supplementary material for generation details.

Random walk starts from a random point within the segmentation mask. The annotation then moves along a random direction of the image lattice within the segmentation mask, with a given step length (l, set to 1 by default). We repeat such moves until the ratio or number of annotated pixels reaches a threshold (n).

Directed random walk, DirRandomWalk for short, is the random walk with momentum. The scribble generated by Random walk tends to cluster within a local area of radius √r given r-step walks. To achieve a wide-range distribution without manually setting the step length (l), we therefore adopted this directed random walk, which prefers moving along the same direction as the previous step. If the next point does not lie in the segmentation mask, we change the walking direction to the one with the smallest angle to the previous direction.

Points refers to an ideal scribble form, which randomly samples annotated pixels within the segmentation mask.
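The walk-based forms described above can be sketched in a few lines of NumPy. The following is a minimal illustration under stated assumptions, not the authors' implementation: DirRandomWalk's momentum is approximated by re-using the previous direction with a fixed probability (0.8 here, an assumed value), and a move that would leave the mask is simply rejected and resampled rather than redirected to the smallest-angle direction.

```python
import numpy as np

def random_walk_scribble(mask, n_pixels, step=1, rng=None, directed=False):
    """Generate a scribble inside a binary mask by random walking.

    Starts at a random foreground pixel and repeatedly moves along one of
    the 4 lattice directions (scaled by `step`), keeping only moves that
    stay inside the mask, until `n_pixels` distinct pixels are annotated
    (or a retry budget is exhausted). With directed=True, the walk re-uses
    the previous direction with probability 0.8 (a crude momentum).
    """
    rng = np.random.default_rng(rng)
    ys, xs = np.nonzero(mask)
    i = rng.integers(len(ys))
    y, x = int(ys[i]), int(xs[i])
    dirs = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    prev = dirs[rng.integers(4)]
    scribble = np.zeros_like(mask, dtype=bool)
    scribble[y, x] = True
    tries = 0
    while scribble.sum() < n_pixels and tries < 100 * n_pixels:
        tries += 1
        if directed and rng.random() < 0.8:
            dy, dx = prev  # momentum: keep the previous direction
        else:
            dy, dx = dirs[rng.integers(4)]
        ny, nx = y + dy * step, x + dx * step
        # only accept moves that stay on the lattice and inside the mask
        if 0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1] and mask[ny, nx]:
            y, x, prev = ny, nx, (dy, dx)
            scribble[y, x] = True
    return scribble
```

For the Points form, one would instead sample `n_pixels` foreground coordinates uniformly at random, without any walk.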
However, it is difficult to generate such scribble annotations in practice, due to the huge number of manual draws, which equals the number of annotated pixels, i.e., Nscribble = Npix. Therefore, we considered this form as the upper bound of scribble supervision under the same ratio of annotated pixels.

[Fig. 7. Visualization of cardiac segmentation on the ACDC dataset, comparing PCE, CVIR, nnPU, WSL4, GatedCRF, CycleMix, ShapePU, ZScribbleSeg, and FullSupUNet against the ground truth (LV, MYO, RV), with average Dice scores per method. The two slices were from the median and the worst cases by the average Dice scores of all compared methods.]

Results: Given the same amount of annotated pixels, we show the effect of different scribble forms on regular structures (ACDC) and irregular objects (MyoPS). As Table 1 illustrates, when the four scribble forms had the same number of annotated pixels Npix, Points achieved the best Dice scores on both the structural and the irregular segmentation tasks, thanks to the randomness and wide-range distribution of its scribbles. However, when we limited the effort of manual draws to be the same, DirRandomWalk became more favorable, as the Points form could be impractical. Furthermore, Skeleton was shown to be the least efficient form; in particular, the segmentation network trained on such data performed poorly on the irregular object segmentation task.
This was probably because, when the target is difficult to outline, the Skeleton form can fail to portray the entire segmentation, leading to poor performance or even failure in training the segmentation networks. On the contrary, randomly distributed scribble forms, such as Random walk and DirRandomWalk, demonstrated their superiority, particularly on irregular object segmentation, with remarkable improvements in average Dice over Skeleton of 24.1% and 30.7%, respectively.

Number of annotated points: By varying the number of annotated pixels (Npix), we validated the influence of the annotated proportion on scribble-supervised segmentation. As shown in Figure 6 (a) and (c), model performance tended to improve as Npix increased, indicating that model training benefited from a larger proportion of annotated pixels. One can observe from Figure 6 (a) that the segmentation performance started to converge when Npix reached 2n. By contrast, for the more difficult segmentation task on irregular objects, as Figure 6 (c) illustrates, model performance converged only after Npix ≥ 4n.

Wide-ranged distribution: We further investigated the influence of the wide-range distribution of scribbles by training networks with varying step length l in Random walk. As the step length increases, the label distribution range of Random walk gradually expands. From Figure 6 (b) and (d), one can see that the average Dice scores improved as the step length increased, and the performance gradually converged to that of DirRandomWalk. This confirms that widely distributed scribbles provide finer supervision under the same number of draws and annotated pixels.

K Zhang & X Zhuang

Table 4. Results and comparisons of regular structure segmentation on pathology-enhanced images (LGE CMR) using the MSCMRseg dataset.
Methods  |  Dice: LV  MYO  RV  Avg  |  HD (mm): LV  MYO  RV  Avg
PCE  |  .514±.078  .582±.067  .058±.023  .385±.243  |  259.4±14.19  228.1±21.36  257.4±12.43  248.3±21.63
WSL4 [29]  |  .902±.040  .815±.033  .828±.101  .848±.076  |  55.95±4.88  42.07±13.48  32.08±6.57  43.37±31.04
GatedCRF [31]  |  .917±.044  .825±.032  .848±.073  .863±.066  |  25.72±4.37  37.92±5.10  32.83±5.59  32.16±7.11
CVIR [14]  |  .331±.076  .371±.088  .404±.110  …
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='368±.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='095 259.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='2±14.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='23 243.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='0±13.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='76 180.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='9±55.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='44 227.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='7±47.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='63 nnPU [20] .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='341±.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='067 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='538±.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='081 .' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='432±.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='100 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='437±.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='115 259.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='4±14.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='19 201.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='6±66.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='98 199.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='7±57.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='50 220.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='2±57.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='70 CycleMix [52] .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='748±.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='064 .' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='730±.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='047 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='835±.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='041 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='771±.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='069 224.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='59±35.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='27 28.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='26±20.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='77 73.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='36±51.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='39 108.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='74±92.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='65 ShapePU [53] .' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='880±.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='046 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='785±.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='080 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='833±.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='087 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='833±.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='082 178.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='02±50.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='93 178.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='05±25.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='39 189.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='35±55.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='78 181.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='81±45.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='27 ZScribbleSeg .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='922±.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='039 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='834±.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='039 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='854±.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='055 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='870±.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='058 12.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='10±14.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='70 16.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='52±19.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='14 51.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='03±39.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='27 26.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='55±31.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='39 FullSupUNet .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='909±.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='049 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='821±.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='054 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='826±.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='087 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='852±.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='076 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='02±12.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='36 11.' 
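The Dice scores reported here measure the overlap between a predicted and a reference mask per class. A minimal sketch of the per-class computation (pure Python; the toy label maps and class IDs are illustrative, not from the paper):

```python
def dice_score(pred, ref, cls):
    """Dice = 2|A∩B| / (|A| + |B|) for one class label."""
    a = [p == cls for p in pred]
    b = [r == cls for r in ref]
    inter = sum(x and y for x, y in zip(a, b))
    size = sum(a) + sum(b)
    return 1.0 if size == 0 else 2.0 * inter / size

# Toy flattened label maps: 0=background, 1=LV, 2=MYO, 3=RV
ref  = [0, 1, 1, 2, 2, 3, 3, 0]
pred = [0, 1, 1, 2, 3, 3, 3, 0]

print(dice_score(pred, ref, 1))  # LV:  perfect overlap -> 1.0
print(dice_score(pred, ref, 3))  # RV:  2*2 / (3+2)     -> 0.8
```

A per-method "Avg" column as in the table would simply average such scores over LV, MYO, and RV.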
3.3 Ablation study

We studied the effectiveness of the proposed strategies of modeling efficient scribbles and prior regularization in ZScribbleNet. We used the ACDC dataset with the expert-made scribble annotations released by [42], and evaluated model performance on the validation set. We compared six ablated models, trained with or without modeling efficient scribbles (denoted as Efficiency) and with different combinations of the four loss functions, i.e., the partial cross entropy (Lpce), the global consistency loss (Lglobal) in Eq. (9), the spatial prior loss (Lspatial) in Eq. (29), and the shape prior loss (Lshape) in Eq. (30).

Table 2 presents the results. When model #2 adopted the proposed supervision augmentation to model efficient scribbles (indicated by the Efficiency column), its performance improved over model #1, as one can see from their average Dice scores (0.848 vs. 0.813) and average HDs (65.35 mm vs. 69.58 mm). This demonstrates the benefit of training with the augmented supervision. Combining the supervision augmentation with the global consistency loss (Lglobal), leading to model #4, boosted performance further, with a remarkable 4.3% gain in Dice (0.891 vs. 0.848) and an HD error reduction of over 45 mm (19.50 mm vs. 65.35 mm). Alternatively, when leveraging interconnectivity via the shape regularization loss (Lshape), model #3 obtained an overwhelming improvement in HD, which dropped from 69.58 mm (model #1) to only 15.16 mm, indicating far fewer noisy and outlier segmentations. We then investigated the advantage of the spatial prior (Lspatial) in training ZScribbleNet. The result of model #5 shows the most evident gain in Dice, an improvement of 8.1% (0.894 vs. 0.813) from solely including this one extra loss.
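The partial cross entropy Lpce shared by all ablated models supervises only the scribble-annotated pixels and ignores the unlabeled ones. A minimal sketch (pure Python; the probability maps and the convention of -1 marking unlabeled pixels are illustrative assumptions, not the paper's exact formulation):

```python
import math

def partial_ce(probs, labels, ignore=-1):
    """Average cross entropy over labeled pixels only.

    probs: per-pixel class-probability lists; labels: per-pixel class id,
    with `ignore` marking pixels the scribbles do not cover.
    """
    terms = [-math.log(p[y]) for p, y in zip(probs, labels) if y != ignore]
    return sum(terms) / len(terms) if terms else 0.0

# Three pixels; only the first two carry scribble labels.
probs = [[0.7, 0.2, 0.1],
         [0.1, 0.8, 0.1],
         [0.3, 0.3, 0.4]]
labels = [0, 1, -1]
print(partial_ce(probs, labels))  # -(ln 0.7 + ln 0.8) / 2
```

The regularization losses in the ablation (Lglobal, Lspatial, Lshape) are then added on top of this sparse supervision term.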
Finally, our ZScribbleSeg (model #6) achieved the best performance, with an average Dice of 0.899 and an HD of 8.70 mm. This indicates that the combination of efficient scribbles and priors endows the algorithm with substantial supervision and prior knowledge for scribble-supervised segmentation.

3.4 Performance and Comparisons

We conducted experiments over the four segmentation tasks stated in Section 3.1. (1) For the structural segmentation of cardiac ventricles from the ACDC dataset, we used the expert-made scribbles released by [42]. (2) For the cardiac structural segmentation from pathology-enhanced imaging (the MSCMRseg dataset), we used the manually annotated scribbles released by [52]. (3) For the irregular myocardial pathology segmentation from the MyoPS dataset, we first adopted the standard skeletonization algorithm for the simulated scribble annotation of pathologies [36], and then manually annotated skeleton scribbles for the structures of LV, Myo, RV, and background.

Fig. 8. Visualization of cardiac segmentation on LGE CMR using MSCMRseg dataset. The two slices were from the median and the worst cases by the average Dice scores of all compared methods. [Figure panels omitted: image, scribble, ground truth, and the segmentations of each compared method, each labeled with its average Dice over LV, MYO, and RV.]
(4) For the human pose segmentation from the PPSS dataset, we adopted the scribble annotations generated by the standard skeletonization algorithm [36].

We compared ZScribbleSeg with eight or nine methods, depending on the task. We first implemented the PCE loss (Lpce) as a baseline method (referred to as PCE). Then, we implemented four state-of-the-art (SOTA) scribble-supervised segmentation methods, i.e., WSL4 [29], GatedCRF [31], CycleMix [52], and ShapePU [53], to run the same experiments. We cited the ACDC and PPSS results reported in [42] for the MAAG method, which is also a SOTA method for this task. Furthermore, we adopted two semi-supervised SOTA methods based on positive-unlabeled learning, i.e., CVIR [14] and nnPU [20], and re-implemented them to adapt them to the scribble-supervised segmentation tasks. For more details of the adaptation, the reader is referred to Appendix C of the supplementary material. Finally, we trained a UNet with full annotations as a fully supervised baseline (referred to as FullSupUNet). Note that the post-processing steps were removed from all experiments for a fair comparison.

Structure segmentation from anatomical images. Table 3 presents the Dice and HD results of 10 approaches for the regular structure segmentation of cardiac ventricles from the ACDC dataset. One can observe that ZScribbleSeg achieved an average Dice of 0.862 and an HD of 9.79 mm, evidently outperforming the other scribble-supervised methods.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' The quantitative results of ZScribbleSeg were comparable to (or slightly better than) that of the fully supervised method (Full- supUNet) whose average Dice and HD are 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='854 and 13.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='14 mm, respectively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' Particularly, the HD results of ZScribbleSeg (9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='79 mm) and FullSupUNet (13.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='14 mm) were evidently much better than the other methods.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' Note that HD is highly sensitive to the noisy and outlier segmentation results, which are com- monly seen when the supervision of global shape information is not sufficient.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' 24 K Zhang & X Zhuang Table 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' Results and comparisons of irregular segmentation of myocardial pathologies on MyoPS dataset.' 
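The partial cross-entropy (PCE, Lpce) baseline above supervises only the scribble-annotated pixels and ignores all unlabeled ones. A minimal NumPy sketch of that idea (the function name, tensor shapes, and the -1 ignore label are assumptions for illustration, not the authors' implementation):

```python
import numpy as np

def partial_cross_entropy(probs, scribble, ignore_index=-1):
    """Cross-entropy averaged over scribble-annotated pixels only.

    probs:    (H, W, C) per-pixel softmax probabilities.
    scribble: (H, W) integer class labels; ignore_index marks unlabeled pixels.
    """
    rows, cols = np.nonzero(scribble != ignore_index)
    if rows.size == 0:
        return 0.0
    # probability the model assigns to the annotated class at each scribble pixel
    p = probs[rows, cols, scribble[rows, cols]]
    return float(-np.mean(np.log(p + 1e-12)))

# toy 2x2 image with 2 classes; only the left column is scribble-annotated
probs = np.array([[[0.9, 0.1], [0.5, 0.5]],
                  [[0.2, 0.8], [0.6, 0.4]]])
scrib = np.array([[0, -1],
                  [1, -1]])
loss = partial_cross_entropy(probs, scrib)  # averages -log(0.9) and -log(0.8)
```

Because the unlabeled pixels contribute no gradient at all, methods that rely on PCE alone receive no shape information away from the scribbles, which is the gap the regularization terms discussed below aim to close.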
Methods        | Dice: Scar  | Edema       | Avg         | HD (mm): Scar | Edema         | Avg
PCE            | 0.504±0.213 | 0.057±0.022 | 0.281±0.271 |  82.68±33.95  | 147.61±20.59  | 115.15±43.00
WSL4 [29]      | 0.031±0.029 | 0.106±0.033 | 0.069±0.049 | 172.37±45.13  | 170.05±20.44  | 171.20±34.60
GatedCRF [31]  | 0.020±0.013 | 0.042±0.020 | 0.031±0.019 | 173.60±44.98  | 170.10±20.44  | 171.80±34.53
CVIR [14]      | 0.505±0.214 | 0.080±0.031 | 0.293±0.263 |  61.59±32.09  | 125.27±20.83  |  93.43±41.86
nnPU [20]      | 0.530±0.241 | 0.085±0.035 | 0.308±0.282 |  48.88±23.55  | 125.27±20.83  |  87.07±44.47
CycleMix [52]  | 0.550±0.237 | 0.626±0.124 | 0.588±0.191 |  65.64±42.81  |  81.97±40.87  |  73.81±42.13
ShapePU [53]   | 0.558±0.237 | 0.615±0.144 | 0.587±0.205 |  57.33±31.58  |  53.00±31.42  |  55.16±31.17
ZScribbleSeg   | 0.596±0.237 | 0.676±0.113 | 0.636±0.188 |  46.73±20.04  |  47.05±24.30  |  46.89±21.98
FullSupUNet    | 0.607±0.253 | 0.659±0.135 | 0.633±0.202 |  55.35±35.73  |  63.53±33.15  |  59.44±34.27

[Fig. 9 panels omitted: segmentations of two median cases, each panel annotated with Dice (Scar; Edema) for PCE, WSL4, GatedCRF, CVIR, nnPU, ShapePU, CycleMix, ZScribbleSeg, and FullSupUNet, alongside the image, scribble, and ground truth.]
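Table 5, like Tables 3, 4, and 6, reports Dice and Hausdorff distance (HD). For reference, both metrics can be computed from a pair of binary masks roughly as follows (a plain-NumPy sketch; evaluation details such as boundary extraction and physical voxel spacing are simplified here):

```python
import numpy as np

def dice_score(pred, gt):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom > 0 else 1.0

def hausdorff_distance(pred, gt):
    """Symmetric Hausdorff distance between the foreground pixels of two masks."""
    a = np.argwhere(pred)
    b = np.argwhere(gt)
    if len(a) == 0 or len(b) == 0:
        return np.inf
    # pairwise Euclidean distances between all foreground points
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    # max over each set of the distance to the nearest point of the other set
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# toy example: two overlapping 3x3 squares inside an 8x8 image
pred = np.zeros((8, 8), dtype=bool)
pred[2:5, 2:5] = True
gt = np.zeros((8, 8), dtype=bool)
gt[3:6, 3:6] = True
```

This also shows why HD is so sensitive to outliers, as noted above: a single stray false-positive pixel far from the ground truth immediately inflates the maximum, while it barely moves the Dice overlap ratio.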
Fig. 9. Visualization of irregular segmentation of myocardial pathologies on the MyoPS dataset. The two slices were from the median cases by average Dice scores of edema or scar segmentation over all compared methods.

The results indicate that the proposed efficient scribble modeling and prior regularization were able to alleviate the problems of inadequate supervision and incomplete shape information when training on images with scribble annotations. Finally, Figure 7 visualizes two typical cases (median and worst) for illustration.

Structure segmentation from pathology-enhanced images. The anatomical segmentation from pathology-enhanced images, i.e., LGE CMR of the MSCMRseg dataset, was a more challenging task compared to that of the ACDC dataset. This is because MSCMRseg was a smaller dataset (e.g., 25 vs. 70 training subjects), and the image quality and appearance pattern of LGE CMR could be worse and more complex. Table 4 provides the quantitative results, and Figure 8 visualizes two special examples (median and worst) for demonstration. ZScribbleSeg achieved promising performance and better Dice and HD results than the other SOTA methods for scribble-supervised segmentation. Notice that for this particularly challenging task, the two general semi-supervised segmentation methods, i.e., CVIR and nnPU, could not work properly, which was confirmed by the two failed segmentation examples visualized in Figure 8. Finally, similar to the results of the previous study (Section 3.4), ZScribbleSeg and FullSupUNet could achieve less noisy segmentation, affirmed by the remarkably better HD results in Table 4. Hence, we reaffirm the conclusion that the proposed ZScribbleNet received greatly augmented supervision and global shape information via the proposed efficient scribble modeling and prior regularization.

Irregular segmentation. For segmentation of objects with heterogeneous shape features, it becomes particularly challenging to learn accurate shape information for inference. We evaluated ZScribbleSeg on such a challenging task of irregular segmentation using myocardial pathology segmentation (MyoPS), where we removed the shape regularization term Lshape because pathologies lack such a property. Table 5 shows the performance in detail, and Figure 9 visualizes two typical cases, i.e., the median cases by average Dice scores of edema and scar segmentation, respectively.
One can find that the advantages of the proposed methodologies were demonstrated evidently in such a challenging task, as the performance gains, in terms of either Dice or HD, were significant from CycleMix, ShapePU, and finally ZScribbleSeg compared to PCE, WSL4, GatedCRF, CVIR, and nnPU (p < 0.001). In fact, the scribble-supervised segmentation of edema by the five compared methods failed, and so did the segmentation of scar for WSL4 and GatedCRF. This is illustrated in the visualized examples in Figure 9. Although WSL4 and GatedCRF worked well with scribble supervision in the above two regular structure segmentation tasks, here they suffered severely from noisy labels due to their dependence on pseudo labels for training, which led to the failure of model training. Furthermore, due to the similar texture between edema and the surrounding tissues in all imaging modalities, it could be extremely difficult to segment such pathology relying solely on training images without robust estimation and regularization of class mixture ratios. One can see from the results that this failed all five compared methods in edema segmentation. By contrast, ShapePU and ZScribbleSeg succeeded in this task thanks to their own methods of estimating the class prior π and applying spatial regularization, which is affirmed by the fact that they both achieved good HDs comparable to that of FullSupUNet for scar and edema segmentation. Notice that CycleMix did not show such good performance in terms of HD, but it achieved comparably good Dice scores thanks to its adoption of supervision augmentation.

Segmentation from natural scenes. We further validated the broad utility of ZScribbleSeg on the human pose segmentation task of natural scene images. We applied all the methods to the PPSS dataset, which consists of pedestrian images with occlusions, generated by different cameras with different resolutions. Table 6 presents the details, together with the summarized results from the previous three studies, i.e., ACDC, MSCMRseg, and MyoPS. Similar to the three medical image segmentation tasks, the model of ZScribbleSeg generalized well to this 3-channel colored natural image segmentation task, with performance comparable to FullSupUNet and Dice accuracy setting a new state of the art for scribble-supervised segmentation. Figure 10 visualizes three special cases, i.e., the best, median, and worst cases according to the average Dice of all compared methods. One can see from the figures that ZScribbleNet performed robustly and generated realistic segmentation with less noisy results, particularly compared with the other scribble-supervised methods and the fully supervised one (FullSupUNet).

[Fig. 10 panels omitted: best, median, and worst PPSS cases, showing the image, ground truth, and segmentations by PCE, CVIR, nnPU, ShapePU, CycleMix, ZScribbleSeg, FullSupUNet, and WSL4.]
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='736 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='688 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='699 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='745 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='708 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='781 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='690 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='821 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='795 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='814 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='791 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='817 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='832 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='860 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='862 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='869 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='795 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='871 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='868 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='885 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='898 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='908 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='914 Dice (Avg) Scribble Dice (Avg) Dice (Avg) Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' Visualization of results on PPSS dataset.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' The selected subjects were the best, median and worst cases by the average Dice scores of all compared methods.' 
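All comparisons in this section are reported in terms of Dice and HD. As a minimal illustration of the primary metric, the following sketch computes the average Dice over foreground classes for integer label maps; the function name and the convention of skipping the background class are our own assumptions, not taken from the paper.

```python
import numpy as np

def dice_score(pred, target, num_classes):
    """Average Dice over foreground classes for integer label maps.

    pred, target: integer arrays of the same shape, values in [0, num_classes).
    Class 0 is treated as background and excluded from the average.
    """
    scores = []
    for c in range(1, num_classes):
        p = (pred == c)
        t = (target == c)
        denom = p.sum() + t.sum()
        if denom == 0:
            continue  # class absent in both prediction and ground truth
        scores.append(2.0 * np.logical_and(p, t).sum() / denom)
    return float(np.mean(scores)) if scores else 0.0
```

For example, a prediction that over-segments class 1 by one pixel against a two-class ground truth yields per-class Dice of 2/3 and 1, averaging to 5/6.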
Table 6. Dice results of the 10 methods on the four datasets. Note that sizes of training sets are given in the brackets ("–" marks entries not reported).

Methods         ACDC (70)    MSCMRseg (25)   MyoPS (20)    PPSS (2828)
PCE             .770±.126    .385±.243       .281±.271     .805±.063
WSL4 [29]       .792±.166    .848±.076       –             .762±.045
GatedCRF [31]   .804±.135    –               –             .825±.032
MAAG [42]       .816         .746            –             –
CVIR [14]       .800±.130    .368±.095       .293±.263     .809±.054
nnPU [20]       .828±.123    .437±.115       .308±.282     .794±.055
CycleMix [52]   .833±.098    .771±.069       .588±.191     .835±.050
ShapePU [53]    .848±.100    .833±.082       .587±.205     .823±.055
ZScribbleSeg    .862±.086    .870±.058       .636±.188     .838±.050
FullSupUNet     .854±.113    .852±.076       .633±.202     .843±.071

4 Conclusion

In this work, we have presented a new framework for scribble-supervised segmentation, i.e., ZScribbleSeg, to integrate efficient scribbles and prior regularization with the implementation of a deep neural network (ZScribbleNet). ZScribbleSeg exploits the principles of "good scribble annotations", and effectively augments the scribble supervision of ZScribbleNet via mixup-occlusion operations and global consistency regularization. Then, we explored capturing global information by incorporating prior information, particularly with the proposals of a spatial prior loss and a shape prior loss. The spatial prior loss was based on the estimated spatial energy and label class mixture proportions π. The former provides a new means to identify the probability of unlabeled pixels belonging to each class without directly using model predictions; the latter was developed based on a novel estimation method and was aimed at correcting problematic predictions via the regularization of the spatial prior loss. To examine the performance of ZScribbleSeg, we investigated a variety of segmentation tasks, including regular structural segmentation of cardiac ventricles from anatomical imaging data (using the ACDC dataset), regular structural segmentation of pathology-enhanced imaging data (MSCMRseg), irregular object segmentation from multi-modality imaging (MyoPS), and human pose segmentation from natural scenes (PPSS). Compared to other approaches, ZScribbleSeg has shown great competence and achieved comparable performance to the fully supervised UNet method. Particularly, thanks to the augmented supervision and prior regularization, ZScribbleSeg performed well and demonstrated reliability and generalizability in the scenarios with a small training set (MSCMRseg task) and irregular structure segmentation (MyoPS task), both of which failed several other state-of-the-art approaches.
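The setting described above combines two ingredients: supervision only on scribble-annotated pixels (as in the PCE baseline) and a prior on the class mixture proportions π. The sketch below illustrates both terms under our own simplifying assumptions; it is not the paper's implementation. We assume softmax probabilities are given, unlabeled pixels are marked with -1, and the proportion prior is expressed as a simple KL divergence between π and the predicted average class proportions (the function name and this exact formulation are ours).

```python
import numpy as np

def scribble_losses(probs, scribbles, pi, eps=1e-8):
    """probs: (C, H, W) softmax outputs; scribbles: (H, W) integer labels
    with -1 for unlabeled pixels; pi: (C,) expected class proportions.

    Returns (partial cross-entropy, proportion-prior penalty)."""
    C = probs.shape[0]
    # Partial cross-entropy: penalize only pixels carrying a scribble label.
    ys, xs = np.nonzero(scribbles >= 0)
    labels = scribbles[ys, xs]
    pce = -np.log(probs[labels, ys, xs] + eps).mean()
    # Proportion prior: KL(pi || predicted average class proportions),
    # zero when the network's average prediction matches pi.
    pred_prop = probs.reshape(C, -1).mean(axis=1)
    prior = float(np.sum(pi * np.log((pi + eps) / (pred_prop + eps))))
    return float(pce), prior
```

The design point is that the first term touches only the sparse scribbles, while the second constrains the global label statistics of all pixels, which is what supplies supervision to the unlabeled regions.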
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' : Learning to segment medical images with scribble-supervision alone.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' In: DLMIA/ML-CDS@MICCAI (2018) 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' Chaitanya, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Karani, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Baumgartner, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Becker, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Donati, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Konukoglu, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=': Semi-supervised and task-driven data augmentation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' In: International confer- ence on information processing in medical imaging.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' pp.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' 29–41.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' Springer (2019) 9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' Chen, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Papandreou, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Kokkinos, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Murphy, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Yuille, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' : Deeplab: Se- mantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' IEEE transactions on pattern analysis and machine intelli- gence 40(4), 834–848 (2017) 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' DeVries, T.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Taylor, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' : Improved regularization of convolutional neural net- works with cutout.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' arXiv preprint arXiv:1708.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='04552 (2017) 11.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' Du Plessis, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Niu, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Sugiyama, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=': Convex formulation for learning from pos- itive and unlabeled data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' In: International conference on machine learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' 1386–1394.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' PMLR (2015) 12.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' Du Plessis, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Niu, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Sugiyama, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=': Analysis of learning from positive and unlabeled data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' Advances in neural information processing systems 27, 703–711 (2014) 13.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' Gao, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Zhuang, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=': Robust approximations of low-rank minimization for tensor completion.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' Neurocomputing 379, 319–333 (2020) 14.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' Garg, S.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Wu, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Smola, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Balakrishnan, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Lipton, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=': Mixture proportion estimation and pu learning: A modern approach.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' Advances in Neural Information Processing Systems 34 (2021) 15.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' Huang, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Wang, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Wang, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Liu, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Wang, J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=': Weakly-supervised semantic segmentation network with deep seeded region growing.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' In: Proceedings of the IEEE conference on computer vision and pattern recognition.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' 7014–7023 (2018) 16.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' Ji, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Shen, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Ma, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Gao, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=': Scribble-based hierarchical weakly supervised learning for brain tumor segmentation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' In: International Conference on Medical Im- age Computing and Computer-Assisted Intervention.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' pp.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' 175–183.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' Springer (2019) 17.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' Khoreva, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Benenson, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Hosang, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Hein, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Schiele, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=': Simple does it: Weakly supervised instance and semantic segmentation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' In: Proceedings of the IEEE conference on computer vision and pattern recognition.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' 876–885 (2017) 18.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' Kim, J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Choo, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Song, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' : Puzzle mix: Exploiting saliency and local statis- tics for optimal mixup.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' In: International Conference on Machine Learning (ICML) (2020) 19.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' Kim, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Choo, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Jeong, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Song, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' : Co-mixup: Saliency guided joint mixup with supermodular diversity.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' In: International Conference on Learning Represen- tations (2021) 20.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' Kiryo, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Niu, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', du Plessis, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Sugiyama, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=': Positive-unlabeled learning with non-negative risk estimator.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' In: Advances in Neural Information Processing Systems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' 30 (2017) 21.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' Koch, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='M.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Rajchl, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Bai, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Baumgartner, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Tong, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Passerat-Palmbach, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Aljabar, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Rueckert, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=': Multi-atlas segmentation using partially annotated data: methods and annotation strategies.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' IEEE transactions on pattern analysis and machine intelligence 40(7), 1683–1696 (2017) 22.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' Kohl, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Romera-Paredes, B.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Meyer, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', De Fauw, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Ledsam, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Maier-Hein, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Eslami, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Jimenez Rezende, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Ronneberger, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=': A probabilistic u-net for segmentation of ambiguous images.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' Advances in neural information processing sys- tems 31 (2018) 23.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' Laine, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Aila, T.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=': Temporal ensembling for semi-supervised learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' arXiv preprint arXiv:1610.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='02242 (2016) ZScribbleSeg 29 24.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' Latinne, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Saerens, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Decaestecker, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=': Adjusting the outputs of a classifier to new a priori probabilities may significantly improve classification accuracy: evi- dence from a multi-class problem in remote sensing.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' In: ICML.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' 1, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' 298–305 (2001) 25.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' LeCun, Y.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Bengio, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Hinton, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=': Deep learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' nature 521(7553), 436–444 (2015) 26.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' Li, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Wu, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Wang, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Luo, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Martin-Isla, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Zhai, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Zhang, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Liu, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Zhang, Z.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' 1529–1537 (2015) ZScribbleSeg 31 56.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' Zhou, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Khosla, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Lapedriza, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Oliva, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Torralba, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=': Learning deep features for discriminative localization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' In: Proceedings of the IEEE conference on computer vision and pattern recognition.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' 2921–2929 (2016) 57.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' Zhu, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='Y.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Park, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Isola, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Efros, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=': Unpaired image-to-image translation using cycle-consistent adversarial networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' In: Proceedings of the IEEE interna- tional conference on computer vision.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' 2223–2232 (2017) 58.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' Zhuang, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=': Multivariate mixture model for cardiac segmentation from multi- sequence mri.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' In: MICCAI (2016) 59.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' Zhuang, X.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=': Multivariate mixture model for myocardial segmentation combining multi-source images.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' IEEE Transactions on Pattern Analysis and Machine Intelli- gence 41(12), 2933–2946 (2019).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='1109/TPAMI.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content='2869576 60.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=' Zhuang, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=', Shen, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BNE4T4oBgHgl3EQfFAx2/content/2301.04882v1.pdf'} +page_content=': Multi-scale patch and multi-modality atlases for whole heart segmentation of mri.' 
diff --git a/D9FRT4oBgHgl3EQfxziA/content/tmp_files/2301.13643v1.pdf.txt b/D9FRT4oBgHgl3EQfxziA/content/tmp_files/2301.13643v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..d637d047b725b1e0c19e2437b8ddc67017fc68a5
--- /dev/null
+++ b/D9FRT4oBgHgl3EQfxziA/content/tmp_files/2301.13643v1.pdf.txt
@@ -0,0 +1,1909 @@
arXiv:2301.13643v1 [math.CA] 31 Jan 2023

Some Expansion Formulas for Brenke Polynomial Sets

Hamza Chaggara, Abdelhamid Gahami and Neila Ben Romdhane

Last Revised: February 1, 2023

Abstract. In this paper, we derive explicit expansion formulas associated with Brenke polynomials, using operational rules based on their corresponding generating functions. The obtained coefficients are expressed either as finite double sums, as finite sums, or sometimes in closed hypergeometric form. The derived results are applied to the Generalized Gould-Hopper polynomials and to the Generalized Hermite polynomials introduced by Szegő and Chihara. Some well-known duplication and convolution formulas are deduced as particular cases.

Mathematics Subject Classification (2010). 33C45, 41A10, 41A58.

Keywords. Brenke polynomials, Connection coefficients, Generalized Gould-Hopper polynomials, Generalized Hermite polynomials, Generating functions, Linearization coefficients.

Contents
1. Introduction  2
2. Operators Associated to Brenke PSs  4
2.1. Transfer Operator Associated to two Brenke Polynomials  4
2.2. XD-Expansion of the Operator θ  5
2.3. Examples  5
2.3.1. Hypergeometric Transformation  6
2.3.2. Particular Hypergeometric Transformation  7
2.3.3. Dunkl Operator on the Real Line  8
3.
Connection and Linearization Problems  9
3.1. Connection Problem  9
3.1.1. Explicit Expression of the Connection Coefficients  10
3.1.2. Connection between two Db-Appell PSs  11
3.1.3. Addition and Convolution Type Formulas  11
3.1.4. Duplication Formula  11
3.2. Linearization Problems  12
3.2.1. Appell Polynomials  13
3.2.2. Explicit Expression of the LC  13
4. Application to Generalized Gould-Hopper Polynomial Set  14
4.1. Connection Problem  14
4.2. Linearization Formula  16
4.3. Generalized Hermite Polynomials  17
References  18

1. Introduction

Let P be the vector space of polynomials with coefficients in C. A polynomial sequence in P is called a polynomial set (PS for short) if deg P_n = n for all n. The connection and linearization problems are defined as follows.

Given two PSs {P_n}_{n≥0} and {Q_n}_{n≥0}, the so-called connection problem between them asks for the coefficients C_m(n), called connection coefficients (CC), in the expansion

    Q_n(x) = \sum_{m=0}^{n} C_m(n) P_m(x).    (1.1)

The particular cases Q_n(x) = x^n and Q_n(x) = P_n(ax), a ≠ 0, in (1.1) are known, respectively, as the inversion formula for {P_n}_{n≥0} and the duplication (or multiplication) formula associated with {P_n}_{n≥0}.

Given three PSs {P_n}_{n≥0}, {R_n}_{n≥0} and {S_n}_{n≥0}, taking Q_{i+j}(x) = R_i(x) S_j(x) in (1.1) leads to the general linearization problem

    R_i(x) S_j(x) = \sum_{k=0}^{i+j} L_{ij}(k) P_k(x).    (1.2)

The coefficients L_{ij}(k) are called linearization coefficients (LC). The particular case P_n = R_n = S_n is known as the standard linearization problem, or Clebsch-Gordan-type problem.

The computation and the positivity of the aforementioned coefficients play important roles in many situations of pure and applied mathematics, ranging from combinatorics and statistical mechanics to group theory [4, 21, 23].
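To make (1.1) concrete: for small degrees, the connection coefficients can be computed by expanding both sides and matching powers of x. A minimal sketch in Python with sympy (the helper name `connection_coeffs` and the Hermite inversion example are our own illustration, not from the paper):

```python
import sympy as sp

x = sp.symbols('x')

def connection_coeffs(Q, P_basis):
    # Solve Q(x) = sum_m C_m P_m(x) for C_0..C_n, as in (1.1),
    # by matching the coefficients of each power of x.
    n = len(P_basis) - 1
    C = sp.symbols(f'C0:{n + 1}')
    residual = sp.expand(Q - sum(c * p for c, p in zip(C, P_basis)))
    sol = sp.solve(sp.Poly(residual, x).all_coeffs(), C)
    return [sol[c] for c in C]

# Inversion-formula instance: Q_n(x) = x^n expanded in the Hermite basis.
hermites = [sp.hermite(m, x) for m in range(4)]
coeffs = connection_coeffs(x**3, hermites)
# x^3 = (3/4) H_1(x) + (1/8) H_3(x)
```

Since deg P_m = m, the resulting linear system is triangular and always uniquely solvable, which is exactly why the CC in (1.1) exist for any pair of PSs.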
Therefore, different methods have been developed in the literature, and several sufficient conditions for the sign properties to hold have been derived in [3, 31], using specific properties of the involved polynomials such as orthogonality, generating functions, inversion formulas, hypergeometric expansion formulas, recurrence relations, algorithmic approaches, inverse relations, etc. (see e.g. [1, 2, 8, 13, 24, 32]). In particular, a general method based on operational rules and generating functions was developed for polynomial sets with equivalent lowering operators and with Boas-Buck generating functions [6, 12, 14].

In this paper, we discuss in depth both the connection and the linearization problems when the involved polynomials are of Brenke type. These polynomials are defined by their exponential generating functions as follows [9, 17]:

    A(t) B(xt) = \sum_{n=0}^{\infty} \frac{P_n(x)}{n!} t^n,    (1.3)

where A and B are two formal power series satisfying

    A(t) = \sum_{k=0}^{\infty} a_k t^k,    B(t) = \sum_{k=0}^{\infty} b_k t^k,    a_0 b_k ≠ 0 for all k ∈ N.    (1.4)

Brenke PSs reduce to Appell ones when B = exp, and they generate many well-known polynomial families: monomials, Hermite, Laguerre, Gould-Hopper, Generalized Hermite, Generalized Gould-Hopper, Appell-Dunkl, d-Hermite, d-Laguerre, Bernoulli, Euler, Al-Salam-Carlitz, Little q-Laguerre, q-Laguerre, discrete q-Hermite PSs, etc.

These polynomials appear in many areas of mathematics. In particular, in the framework of the standard orthogonality of polynomials, an exhaustive classification of all Brenke orthogonal polynomials was established by Chihara in [16]. Furthermore, Brenke polynomials play a central role in [25], where the authors determined all MRM-triples associated with Brenke-type generating functions.
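Definition (1.3) can be checked symbolically by extracting the coefficient of t^n in A(t)B(xt). A small sympy sketch (our own illustration; it recovers the Appell/Hermite case A(t) = e^{-t^2}, B = exp, for which P_n(x) = H_n(x/2)):

```python
import sympy as sp

x, t = sp.symbols('x t')

def brenke(A, B, n):
    # n-th Brenke polynomial from (1.3): P_n(x) = n! * [t^n] A(t) * B(x*t),
    # with A and B given as expressions in t.
    series = sp.series(A * B.subs(t, x * t), t, 0, n + 1).removeO()
    return sp.expand(sp.factorial(n) * sp.expand(series).coeff(t, n))

# Appell/Hermite instance: A(t) = exp(-t^2), B(t) = exp(t), so that
# A(t)B(xt) = exp(xt - t^2) = sum_n H_n(x/2) t^n / n!.
P3 = brenke(sp.exp(-t**2), sp.exp(t), 3)
# P3 is x**3 - 6*x, i.e. H_3(x/2)
```

Swapping in other pairs (A, B), e.g. A(t) = e^{a t^{d+1}}, reproduces the Gould-Hopper-type families discussed later in the paper.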
Further, the positive approximation process discovered by Korovkin, a powerful criterion for deciding whether a given sequence of positive linear operators on the space of continuous functions converges uniformly in this space, plays a central role and arises naturally in many problems connected with functional analysis, harmonic analysis, measure theory, partial differential equations, and probability theory. The most useful examples of such operators are the Szász operators, and many authors have obtained generalizations of these operators using Brenke polynomials (see [33, 34] and the references therein).

This paper is organized as follows. In Section 2, we define the transfer linear operator between two Brenke polynomial sets and illustrate it with three interesting examples, in particular the hypergeometric transformation and the Dunkl operator on the real line. In Section 3, we derive expansion formulas associated with Brenke polynomials using operational rules, and we give the connection, linearization, inversion, duplication, and addition formulas corresponding to these polynomials. The obtained coefficients are expressed via generating functions involving the associated transfer linear operators. Finally, in Section 4, we apply our results to both the Generalized Gould-Hopper PS (GGHPS) and the Generalized Hermite PS (or Szegő-Chihara PS), and we recover many known formulas as special cases.

2. Operators Associated to Brenke PSs

In this section, we first introduce a transfer operator between two Brenke families, then state its expression as an infinite series in the derivative operator D and the multiplication operator X, known as the XD-expansion [19]. Finally, we give some examples.

2.1. Transfer Operator Associated to two Brenke Polynomials

Any Brenke PS {P_n}_{n≥0} generated by (1.3) is D_b-Appell of transfer power series A, where A and b = (b_n) are defined in (1.4).
That is,

    D_b P_{n+1} = (n+1) P_n    and    A(D_b)(b_n x^n) = \frac{P_n(x)}{n!},    n = 0, 1, 2, ...,    (2.1)

where D_b denotes the linear operator on P defined by [6]

    D_b(1) = 0,    D_b(x^n) = \frac{b_{n-1}}{b_n} x^{n-1},    n = 1, 2, ....    (2.2)

The operator D_b is known as the lowering operator for the PS {P_n}_{n≥0}, while A is the associated transfer series (for more details, see [5]).

Let {P_n}_{n≥0} and {Q_n}_{n≥0} be two Brenke PSs generated, respectively, by

    A_1(t) B_1(xt) = \sum_{n=0}^{\infty} \frac{P_n(x)}{n!} t^n    and    A_2(t) B_2(xt) = \sum_{n=0}^{\infty} \frac{Q_n(x)}{n!} t^n,    (2.3)

where, for i = 1, 2,

    A_i(t) = \sum_{k=0}^{\infty} a_k^{(i)} t^k,    B_i(t) = \sum_{k=0}^{\infty} b_k^{(i)} t^k,    a_0^{(i)} b_k^{(i)} ≠ 0 for all k ∈ N.    (2.4)

Then the corresponding operators D_{b^{(1)}} and D_{b^{(2)}} are related by

    D_{b^{(2)}} θ = θ D_{b^{(1)}},    (2.5)

where θ is the bijective linear operator from P onto P (an isomorphism of P) acting on monomials as follows:

    θ(x^n) = \frac{b_n^{(2)}}{b_n^{(1)}} x^n    and    θ^{-1}(x^n) = \frac{b_n^{(1)}}{b_n^{(2)}} x^n.    (2.6)

The linear operator θ can be extended to a transfer operator taking any formal power series to another formal power series:

    θ(\sum_{n≥0} a_n x^n) = \sum_{n≥0} a_n θ(x^n),    (2.7)

and if φ(x) denotes a formal power series, one can easily check that

    θ(φ(x) \sum_{k=0}^{\infty} a_k x^k) = \sum_{k=0}^{\infty} a_k θ(φ(x) x^k).    (2.8)

Hence, it is clear that

    θ(B_1(x)) = B_2(x).    (2.9)

The operator θ will be called the transfer operator from B_1 to B_2, or the transfer operator from {P_n}_{n≥0} to {Q_n}_{n≥0}.

2.2. XD-Expansion of the Operator θ

Now, recall that any operator L acting on formal power series has the following formal expansion, known as the XD-expansion (see [19] and the references therein):

    L = \sum_{k=0}^{\infty} A_k(X) D^k,    (2.10)

where D denotes the ordinary differentiation operator and {A_k(x)}_{k≥0} is a polynomial sequence such that

    L e^{xt} = \sum_{k=0}^{\infty} A_k(x) t^k e^{xt}.
(2.11)

We note that the infinite sum (2.10) is always well defined on P, since when applied to any given polynomial only a finite number of terms makes a nonzero contribution.

The XD-expansion of the transfer operator θ is explicitly given by the following.

Proposition 2.1. The operator θ defined by (2.6) has the formal expansion

    θ = \sum_{k=0}^{\infty} \frac{φ_k}{k!} X^k D^k,    (2.12)

where

    φ_k = (-1)^k \sum_{m=0}^{k} \frac{(-k)_m}{m!} \frac{b_m^{(2)}}{b_m^{(1)}}.

Proof. Using (2.6) and (2.7) and then substituting L by θ in (2.11), we obtain

    θ(e^{xt}) = \sum_{k=0}^{\infty} \frac{b_k^{(2)}}{b_k^{(1)}} \frac{(xt)^k}{k!} = \sum_{k=0}^{\infty} A_k(x) t^k e^{xt}.

Therefore,

    \sum_{k=0}^{\infty} A_k(x) t^k = e^{-xt} \sum_{k=0}^{\infty} \frac{b_k^{(2)}}{b_k^{(1)}} \frac{(xt)^k}{k!} = \sum_{k=0}^{\infty} \Big( \sum_{m=0}^{k} (-1)^k \frac{(-k)_m}{m!} \frac{b_m^{(2)}}{b_m^{(1)}} \Big) \frac{(xt)^k}{k!},

which establishes the desired result. □

2.3. Examples

Here, we consider three interesting particular cases of the linear operator θ associated with two Brenke PSs, and we essentially give integral representations for this operator.
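Proposition 2.1 admits a direct numerical sanity check: applying θ = Σ_k (φ_k/k!) X^k D^k to x^n multiplies it by the scalar Σ_{k≤n} C(n,k) φ_k, which must equal b_n^{(2)}/b_n^{(1)}. A self-contained check over exact rationals (the ratio sequence below is an arbitrary test choice of ours, not from the paper):

```python
from fractions import Fraction
from math import comb, factorial

def poch(a, m):
    # Pochhammer symbol (a)_m = a (a+1) ... (a+m-1)
    out = 1
    for j in range(m):
        out *= a + j
    return out

def phi(k, ratio):
    # phi_k of Proposition 2.1, with ratio[m] = b^{(2)}_m / b^{(1)}_m
    return (-1) ** k * sum(
        Fraction(poch(-k, m), factorial(m)) * ratio[m] for m in range(k + 1)
    )

# theta(x^n) via the XD-expansion (2.12):
# sum_{k<=n} phi_k/k! * x^k * D^k x^n = (sum_k C(n,k) phi_k) x^n,
# so the scalar must equal ratio[n].
ratio = [Fraction(2 * m + 1, m + 3) for m in range(8)]  # arbitrary b2_m/b1_m
checks = [
    sum(comb(n, k) * phi(k, ratio) for k in range(n + 1)) == ratio[n]
    for n in range(8)
]
# checks == [True] * 8
```

The identity behind this is the inverse binomial transform: (−k)_m/m! = (−1)^m C(k,m), so the double sum telescopes back to the ratio b_n^{(2)}/b_n^{(1)}.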
Let θ be defined by (2.15) with 0 < ℜ(γi) < ℜ(δi), then +for r ≤ s + 1 and |x| < 1, we have +θrFs +� +(αr) +(βs) ; x +� += +p +� +i=1 +1 +β(γi, δi) +� +]0,1[p +p +� +i=1 +uγi−1 +i +(1 − ui)δi−γi−1 +× rFs +� +(αr) +(βs) ; x +p +� +i=1 +ui +� +du1 · · · dup, +(2.16) +where β designates the usual Euler’s Beta function, +β(γ, δ) = +� 1 +0 +tγ−1(1 − t)δ−1dt = Γ(γ)Γ(δ) +Γ(γ + δ) , ℜ(γ), ℜ(δ) > 0. +(2.17) +Proof. From (2.7) and (2.15), we have +θrFs +� +(αr) +(βs) ; x +� += p+rFp+s +� +(αr), (γp) +(βs), (δp) ; x +� +. +Thus, by using the Euler integral representation of generalized hypergeomet- +ric functions, we obtain (see [27, p. 85]): +p+rFp+s +� +(αr), (γp) +(βs), (δp) ; x +� += +Γ(δp) +Γ(γp)Γ(δp − γp) +� 1 +0 +uδp−1 +p +(1 − up)γp−δp−1 +× p+r−1Fp+s−1 +� +(αr), (γp−1) +(βs), (δp−1) ; xup +� +dup, +and after (p − 1) similar applications of the Euler integral representation we +get the desired result. +□ + +Expansion Formulas for Brenke Polynomials +7 +When the operator θ is given by (2.15), the coefficient φk in Proposi- +tion 2.1 is +φk = (−1)k +k +� +m=0 +(−k)m +(γ1)m(γ2)m · · · (γp)m +m!(δ1)m(δ2)m · · · (δp)m += (−1)kip+1Fp +� +−k, γ1, γ2, . . . , γp +δ1, δ2, . . . , δp +; 1 +� +. +Thus the corresponding XD expansion is +θ = +∞ +� +k=0 +(−1)k +k! +p+1Fp +� +−k, γ1, γ2, . . . , γp +δ1, δ2, . . . , δp +; 1 +� +XkDk. +(2.18) +2.3.2. Particular Hypergeometric Transformation. Here, we consider +the special case θ(xn) = (γ)n +(δ)n +xn, δ ̸= 0, −1, −2, . . .. +Proposition 2.3. For any analytic function f on ] − 1, 1[, f(x) = +∞ +� +n=0 +anxn, +we have +θ(f)(x) = +1 +β(γ, δ − γ) +� 1 +0 +tγ−1(1−t)δ−γ−1f(xt)dt, 0 < ℜ(γ) < ℜ(δ). (2.19) +Moreover, the XD-expansion of θ is the following +θ = +∞ +� +k=0 +(−1)k +k! +(δ − γ)k +(γ)k +XkDk. +(2.20) +Proof. By using (2.14) and (2.17), we obtain +(γ)n +(δ)n +xn = Γ(γ + n) +Γ(δ + n) +Γ(δ) +Γ(γ)xn = +1 +β(γ, δ − γ) +� 1 +0 +tγ−1(1 − t)δ−γ−1(xt)ndt. 
+Thus, substituting the above equation in (2.7), we obtain (2.19) since the +term-by-term integration is justified by the convergence of the series +� +n≥0 +� 1 +0 +��antγ−1(1 − t)δ−γ−1(xt)n�� dt. +For (2.20), we use (2.18) and the Chu-Vandermonde reduction formula: +2F1 +� −k, γ +δ +; 1 +� += (δ − γ)k +(δ)k +, +δ ̸= 0, −1, −2, . . .. +(2.21) +Thus the proof is completed. +□ + +8 +H. Chaggara, A. Gahami and N. Ben Romdhane +2.3.3. Dunkl Operator on the Real Line. The well-known Dunkl oper- +ator, Dµ, associated with the parameter µ on the real line provides a useful +tool in the study of special functions with root systems associated with finite +reflection groups [20] and it is closely related to certain representations of +degenerate affine Heke algebras [26]. This operator is defined by [20]: +Dµ(f)(x) = Df(x) + µ +x(f(x) − f(−x)), +µ ∈ C, +(2.22) +where f is a real variable complex-valued function and D is the differentiation +operator. +The Dunkl operator acts on monomials as follows: +Dµ(xn) = +γµ(n) +γµ(n − 1)xn−1, µ ̸= −1 +2, −3 +2, . . . , +(2.23) +where +γµ(2p + ǫ) = 22p+ǫp!(µ + 1 +2)p+ǫ, +ǫ = 0, 1. +(2.24) +Hence, Dµ is a Db-operator type with bn = +1 +γµ(n), and we have the following +result. +Proposition 2.4. Let µ1 and µ2 be two real numbers satisfying −1 +2 < µ1 < +µ2, and θ given by +θ(xn) = γµ1(n) +γµ2(n)xn. +(2.25) +Then, for any analytic function, f on ] − 1, 1[, the following integral repre- +sentation of θ holds true +θ(f)(x) = +1 +β(µ1 + 1 +2, µ2 − µ1) +� 1 +−1 +f(xt)|t|2µ1(1 − t)µ2−µ1−1(1 + t)µ2−µ1 dt. +(2.26) +Proof. By using (2.14), (2.17) and (2.24) with µ replaced by µ1 and µ2, and +for n = 2p + ǫ, ǫ = 0, 1, we obtain: +γµ1(n) +γµ2(n) = β(µ1 + 1 +2 + p + ǫ, µ2 − µ1) +β(µ1 + 1 +2, µ2 − µ1) +. 
+(2.27) +Now, with the beta integral representation (2.17), we get +β(µ1 + 1 +2 + p + ǫ, µ2 − µ1) = +� 1 +0 +tµ1+p+ǫ− 1 +2 (1 − t)µ2−µ1−1 dt, +which, after the substitution u2 = t, and the distinction of the two cases +ǫ = 0 and ǫ = 1, becomes +β(µ1 + 1 +2 + p + ǫ, µ2 − µ1) = +� 1 +−1 +un|u|2µ1(1 − µ)µ2−µ1−1(1 + u)µ2−µ1 du. + +Expansion Formulas for Brenke Polynomials +9 +Consequently, this gives +θ(xn) = +1 +β(µ1 + 1 +2, µ2 − µ1) +� 1 +−1 +(xt)n|t|2µ1(1 − t)µ2−µ1−1(1 + t)µ2−µ1 dt, +(2.28) +and a term-by-term integration achieves the proof. +□ +The following two particular cases are worthy to note. +• For f = expµ1, and according to (2.9), it is clear that +θ(expµ1) = expµ2, +where the generalized exponential function, expµ is defined by [28] +expµ(x) = +∞ +� +n=0 +xn +γµ(n), +µ ̸= −1 +2, −3 +2, −5 +2, . . . . +(2.29) +So, for −1 +2 < µ1 < µ2, and by virtue of (2.26), the following integral +representation of expµ2 holds true [28, Eq. (2.3.4)]: +expµ2(x) = +1 +β(µ1 + 1 +2, µ2 − µ1)× +� 1 +−1 +expµ1(xt)|t|2µ1(1 − t)µ2−µ1−1(1 + t)µ2−µ1 dt. +• For µ1 = 0 and µ2 = µ > 0, the transfer operator θ reduces to the well- +known Dunkl intertwining operator Vµ in the one dimensional case and +(2.26) is nothing else that its corresponding integral representation [20, +Theorem 5.1]: +Vµ(f)(x) = +1 +β( 1 +2, µ) +� 1 +−1 +f(xt)(1 − t)µ−1(1 + t)µ dt. +(2.30) +3. Connection and Linearization Problems +In this section, we investigate connection and linearization formulas for +Brenke PSs. +3.1. Connection Problem +Next, for two polynomial sequences of Brenke type, we state a generating +function for the connection coefficients using the operator θ. This result ap- +pears to be new. Some applications are given. +Theorem 3.1. Let {Pn}n≥0 and {Qn}n≥0 be two polynomial sequences gen- +erated by (2.3) and (2.4) and let θ be the corresponding transfer operator +defined in (2.6). Then the CC in (1.1), (Cm(n))n≥m≥0, are generated by: +A2(t)θ +� tm +A1(t) +� += +∞ +� +n=m +m! +n! Cm(n)tn. 
+(3.1) + +10 +H. Chaggara, A. Gahami and N. Ben Romdhane +Proof. On one hand, substituting (1.1) in (2.3) and using sum manipulations, +we get: +A2(t)B2(xt) = +∞ +� +n=0 +Qn(x)tn +n! = +∞ +� +n=0 +� +n +� +m=0 +Cm(n)Pm(x) +� +tn +n! += +∞ +� +m=0 +� ∞ +� +n=m +m! +n! Cm(n)tn +� +Pm(x) +m! +. +On the other hand, from (2.8), we have +A2(t)B2(xt) = A2(t)θtB1(xt) = A2(t)θt +� +1 +A1(t) +∞ +� +m=0 +Pm(x)tm +m! +� += +∞ +� +m=0 +A2(t)θt +� tm +A1(t) +� Pm(x) +m! +. +Thus (3.1) follows and the proof is completed. +□ +Some known results can be deduced from Theorem 3.1. Next, we quote +the four important ones of them. +3.1.1. Explicit Expression of the Connection Coefficients. +Write +1 +A1(t) = +∞ +� +n=0 +�a(1) +n tn, then +θt +� tm +A1(t) +� += +∞ +� +n=0 +b(2) +n+m +b(1) +n+m +�a(1) +n tn+m. +By virtue of (3.1), we get: +∞ +� +n=m +m! +n! Cm(n)tn = +� ∞ +� +n=0 +a(2) +n tn +� � ∞ +� +n=0 +b(2) +n+m +b(1) +n+m +�a(1) +n tn+m +� += tm +∞ +� +n=0 +� n +� +k=0 +a(2) +k +b(2) +n+m−k +b(1) +n+m−k +�a(1) +n−k +� +tn += +∞ +� +n=m +�n−m +� +k=0 +b(2) +n−k +b(1) +n−k +a(2) +k �a(1) +n−m−k +� +tn. +Thus, +Cm(n) = n! +m! +n−m +� +k=0 +b(2) +n−k +b(1) +n−k +a(2) +k �a(1) +n−m−k, +m = 0, . . . , n. +(3.2) +In particular, we can deduce the explicit expansion and the inversion formula +for any Brenke PS {Pn}n≥0 generated by (1.3): +Pn(x) +n! += +n +� +m=0 +bman−mxm, +and +bnxn = +n +� +m=0 +�an−m +Pm(x) +m! +. +(3.3) + +Expansion Formulas for Brenke Polynomials +11 +3.1.2. Connection between two Db-Appell PSs. If B1 = B2, in (2.3), +then by using (2.6), we obtain that the expression (3.1) takes the following +simpler form [11]. +A2(t) +A1(t) = +∞ +� +n=m +m! +n! Cm(n)tn−m. +(3.4) +3.1.3. Addition and Convolution Type Formulas. The Brenke PS {Pn}n≥0 +generated by (1.3) possesses the following generalized addition formula and +convolution type relation: +T b +yPn(x) = +n +� +m=0 +n! 
+m!bn−myn−mPm(x), +and +A(Db)T b +yPn(x) = +n +� +m=0 +�n +m +� +Pn−m(y)Pm(x), +where T b +y = B(yDb) designates the generalized translation operator satisfying +T b +y(B(xt) = B(yt)B(xt). +In fact, for the addition formula, we remark that the PS, {T b +yPn(x)}n≥0, +is generated by: +B(yt)A(t)B(xt) = +∞ +� +n=0 +T b +yPn(x) +n! +tn, +then we apply (3.4) with A2(t) = B(yt)A(t) and A1(t) = A(t), to obtain +Cm(n) = n! +m!bn−myn−m. +For the convolution type relation, we apply the operator A(Db) to each +member of the addition formula and we use (2.1). We have +A(Db)T b +yPn(x) = +n +� +m=0 +n! +m!(n − m)!A(Db)((n − m)!bn−myn−m)Pm(x) += +n +� +m=0 +�n +m +� +Pn−m(y)Pm(x). +3.1.4. Duplication Formula. Brenke PS generated by (1.3) possesses the +following duplication formula [11] +Pn(ax) = +n +� +m=0 +n! +m!amβn−mPm(x), +a ̸= 0, +(3.5) +where A(t) +A(at) = +∞ +� +k=0 +βktk. +In fact, the PS Qn(x) = Pn(ax) is generated by +A(t)B(axt) = +∞ +� +n=0 +Qn(x) +n! +tn. + +12 +H. Chaggara, A. Gahami and N. Ben Romdhane +Thus, by using (2.6) and (2.7), we have θ(f)(x) = f(ax), where f is any +formal power series. +Now, from (3.1), with A1(t) = A2(t) = A(t), it follows immediately that +(at)m A(t) +A(at) = +∞ +� +n=m +m! +n! Cm(n)tn. +3.2. Linearization Problems +In the following result, we provide a generating function for the LC involving +three Brenke polynomials. +Theorem 3.2. Let {Pn}n≥0, {Rn}n≥0 and {Sn}n≥0 be three Brenke PS with +exponential generating functions: +A1(t)B1(xt), A2(t)B2(xt) and A3(t)B3(xt), +(3.6) +where Ai(t) = +∞ +� +k=0 +a(i) +k tk, Bi(t) = +∞ +� +k=0 +b(i) +k tk, a(i) +0 b(i) +k +̸= 0, ∀k ∈ N, i = 1, 2, 3. +Then the LC, {Lij(k)}i,j≥0, k ∈ N, defined in (1.2) are generated by: +A2(s)A3(t) +k! +θ(2) +s θ(3) +t +(θ(1) +s+t)−1 +� (s + t)k +A1(s + t) +� += +� +i,j≥0 +Lij(k) +i!j! sitj +(3.7) +where θ(i)(tn) = n!b(i) +n tn, +i = 1, 2, 3. 
+We note that θ(i), i = 1, 2, 3, are the transfer operators from {Pn}n≥0, +{Rn}n≥0 and {Sn}n≥0, to the monomials, respectively. +Proof. On one hand, according to (1.2) and with sum manipulation, we ob- +tain: +� +i,j≥0 +Ri(x)Sj(x)si +i! +tj +j! = +� +i,j≥0 +�i+j +� +k=0 +Lij(k)Pk(x) +� +si +i! +tj +j! += +∞ +� +k=0 + +k! +� +i,j≥0 +Lij(k) +i!j! sitj + + Pk(x) +k! +. +(3.8) +On the other hand, by using (2.6), we can easily verify that +θ(2) +s θ(3) +t (θ(1) +s+t)−1B1((s + t)x) = +∞ +� +k=0 +� k +� +l=0 +b(2) +l +b(3) +k−lsltk−l +� +xk, +then +B2(xs)B3(xt) = θ(2) +s θ(3) +t (θ(1) +s+t)−1B1((s + t)x). +Using the generating function of {Pn}n≥0, we obtain +B2(xs)B3(xt) = +∞ +� +k=0 +� +θ(2) +s θ(3) +t +(θ(1) +s+t)−1 (s + t)k +A1(s + t) +� Pk(x) +k! +. + +Expansion Formulas for Brenke Polynomials +13 +Thus +� +i,j≥0 +Ri(x)Sj(x)si +i! +tj +j! = +∞ +� +k=0 +� +A2(s)A3(t)θ(2) +s θ(3) +t (θ(1) +s+t)−1 (s + t)k +A1(s + t) +� Pk(x) +k! +. +Equating the coefficients of Pk(x) in the above equation and (3.8), we obtain +(3.7) which finishes the proof. +□ +Next, as applications, we recover the generating function for the LC of +three Appell polynomials and the explicit expression of the LC associated to +three Brenke PS. +3.2.1. Appell Polynomials. Let {Pn}n≥0, {Rn}n≥0, and {Sn}n≥0, be three +Appell-PS. Then we have B1 = B2 = B3 = exp, and by applying Theo- +rem 3.2, we obtain that the LC in (1.2) are generated by +A2(s)A3(t) +A1(s + t) +(s + t)k +k! += +∞ +� +i,j=0 +Lij(k) +i!j! sitj, +(3.9) +which agrees with Carlitz Formula [10, Eq.(1.9)]. +Moreover, for Pn = Rn = Sn = Hn, where Hn are Hermite polynomials +generated by +e−t2e2xt = +∞ +� +n=0 +Hn(x)tn +n!, +(3.10) +we have A1(t) = A2(t) = A3(t) = A(t) = e−t2, and then +A(s)A(t) +A(s + t) +(s + t)k +k! += 1 +k!e2st(s + t)k. +Thus, using (3.9) we deduce the standard linearization formula for Hermite +PSs +Hi(x)Hj(x) = +min(i,j) +� +k=0 +�i +k +��j +k +� +2kk!Hi+j−2k(x). +(3.11) +This formula is known as Feldheim formula [3]. 
3.2.2. Explicit Expression of the LC. For three Brenke PSs satisfying the hypotheses of Theorem 3.2, the LC in (1.2) are given by
\[
L_{ij}(k) = \frac{i!\,j!}{k!} \sum_{n=0}^{i} \sum_{m=0}^{j} \frac{b^{(2)}_n\, b^{(3)}_m}{b^{(1)}_{n+m}}\, a^{(2)}_{i-n}\, a^{(3)}_{j-m}\, \widetilde{a}^{(1)}_{n+m-k}, \qquad k = 0, 1, \dots, i+j, \tag{3.12}
\]
where $1/A_1(t) = \sum_{n=0}^{\infty} \widetilde{a}^{(1)}_n t^n$ and $\widetilde{a}^{(1)}_{-n} = 0$, $n = 1, 2, \dots$.

Indeed, we have $\dfrac{(s+t)^k}{A_1(s+t)} = \displaystyle\sum_{n=k}^{\infty} \widetilde{a}^{(1)}_{n-k}\, (s+t)^n$; then, by using (2.6), we get
\[
\theta^{(2)}_s \theta^{(3)}_t \big(\theta^{(1)}_{s+t}\big)^{-1}\!\left[\frac{(s+t)^k}{A_1(s+t)}\right]
= \sum_{n=k}^{\infty} \widetilde{a}^{(1)}_{n-k} \sum_{m=0}^{n} \frac{b^{(2)}_{n-m}\, b^{(3)}_m}{b^{(1)}_n}\, t^m s^{n-m}.
\]
Thus, with sum manipulations and (3.7), one can easily verify that
\[
\sum_{i,j\ge0} \frac{L_{ij}(k)}{i!\,j!}\, s^i t^j
= \frac{1}{k!} \sum_{n,m=0}^{\infty} \left(\sum_{i=n}^{\infty} a^{(2)}_{i-n} s^i\right)\left(\sum_{j=m}^{\infty} a^{(3)}_{j-m} t^j\right) \frac{b^{(2)}_n\, b^{(3)}_m}{b^{(1)}_{n+m}}\, \widetilde{a}^{(1)}_{n+m-k}
= \frac{1}{k!} \sum_{i,j\ge0} \left(\sum_{n=0}^{i} \sum_{m=0}^{j} \frac{b^{(2)}_n\, b^{(3)}_m}{b^{(1)}_{n+m}}\, a^{(2)}_{i-n}\, a^{(3)}_{j-m}\, \widetilde{a}^{(1)}_{n+m-k}\right) s^i t^j,
\]
which leads to (3.12).

We note that this result was first obtained in [11, Corollary 3.3] by using a method based on the inversion formula.

4. Application to Generalized Gould-Hopper Polynomial Set

The $(d+1)$-fold symmetric generalized Gould-Hopper polynomials $\{Q^{(d+1)}_n(\cdot, a, \mu)\}_{n\ge0}$ are generated by [7]:
\[
e^{a t^{d+1}} \exp_{\mu}(xt) = \sum_{n=0}^{\infty} \frac{Q^{(d+1)}_n(x, a, \mu)}{n!}\, t^n, \qquad a \in \mathbb{C},\ \mu \neq -\tfrac12, -\tfrac32, -\tfrac52, \dots, \tag{4.1}
\]
where a PS $\{P_n\}_{n\ge0}$ is said to be $(d+1)$-fold symmetric, $d = 1, 2, \dots$, if
\[
P_n\big(e^{\frac{2i\pi}{d+1}} x\big) = e^{\frac{2in\pi}{d+1}}\, P_n(x).
\]
These polynomials constitute a unification of many known families, such as:
• the classical Hermite PS, $H_n(x) = Q^{(2)}_n(2x, -1, 0)$;
• the Gould-Hopper PS, $g^m_n(x, h) = Q^{(m)}_n(x, h, 0)$ (same notations as in [22]);
• the generalized Hermite polynomials [30]:
\[
H^{\mu}_n(x) = Q^{(2)}_n(2x, -1, \mu). \tag{4.2}
\]
The GGHPS are of Brenke type with transfer power series $A(t) = \exp(a t^{d+1})$. They are the only $(d+1)$-fold symmetric Dunkl-Appell $d$-orthogonal PSs [7].
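Since the GGHPS are of Brenke type with $A(t) = \exp(a t^{d+1})$, the $\mu = 0$ members can be expanded directly from (4.1): assuming (as earlier in the paper) that $\exp_0$ reduces to the ordinary exponential, the coefficient of $t^n$ in $e^{a t^{d+1}} e^{xt}$ gives $Q^{(d+1)}_n(x, a, 0) = n! \sum_k a^k x^{n-(d+1)k} / (k!\,(n-(d+1)k)!)$. The classical Hermite special case listed above can then be checked with a short stdlib-only Python sketch (our own verification, not part of the paper):

```python
from fractions import Fraction
from math import factorial

def gould_hopper_Q(n, d, a):
    """Coefficient list of Q^{(d+1)}_n(x, a, 0): expanding
    e^{a t^{d+1}} e^{x t} = sum_n Q_n(x) t^n / n!  (mu = 0, exp_0 = exp)
    gives Q_n(x) = n! sum_k a^k x^{n-(d+1)k} / (k! (n-(d+1)k)!)."""
    coeffs = [Fraction(0)] * (n + 1)
    for k in range(n // (d + 1) + 1):
        p = n - (d + 1) * k                      # power of x in this term
        coeffs[p] = Fraction(factorial(n) * a ** k,
                             factorial(k) * factorial(p))
    return coeffs

def hermite(n):
    # Physicists' Hermite H_n via H_{n+1} = 2x H_n - 2n H_{n-1}, for comparison
    if n == 0:
        return [1]
    h_prev, h = [1], [0, 2]
    for m in range(1, n):
        nxt = [0] + [2 * c for c in h]
        for i, c in enumerate(h_prev):
            nxt[i] -= 2 * m * c
        h_prev, h = h, nxt
    return h

# Classical Hermite special case: H_n(x) = Q^{(2)}_n(2x, -1, 0)
for n in range(8):
    q = gould_hopper_Q(n, 1, -1)                 # d + 1 = 2, a = -1
    scaled = [Fraction(2) ** i * c for i, c in enumerate(q)]   # substitute x -> 2x
    assert scaled == hermite(n), n
print("H_n(x) = Q^(2)_n(2x, -1, 0) confirmed for n <= 7")
```

The same `gould_hopper_Q` with other choices of `d` and `a` produces the ($\mu = 0$) Gould-Hopper polynomials $g^m_n(x, h) = Q^{(m)}_n(x, h, 0)$.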
Next, we solve the connection and linearization problems associated to the GGHPS, and we treat the particular case of the generalized Hermite polynomials.

4.1. Connection Problem
Here, we state the connection formulas for two GGHPS when one or two of the parameters are different, and we give an integral representation of these coefficients. Moreover, the inversion formula, the addition and convolution relations, and the duplication formula are given.

Theorem 4.1. The connection coefficients $C_{n-i(d+1)}(n)$, $0 \le i \le \big[\frac{n}{d+1}\big]$, between two GGHPS $\{Q^{(d+1)}_n(\cdot, a, \mu_1)\}_{n\ge0}$ and $\{Q^{(d+1)}_n(\cdot, b, \mu_2)\}_{n\ge0}$, are given by
\[
C_{n-i(d+1)}(n) = \frac{n!}{(n-i(d+1))!} \sum_{k=0}^{i} \frac{\gamma_{\mu_1}(n-k(d+1))}{\gamma_{\mu_2}(n-k(d+1))}\, \frac{(-a)^{i-k}}{(i-k)!}\, \frac{b^k}{k!}. \tag{4.3}
\]

Proof. By means of (2.6), we have
\[
\theta\big(t^m e^{-a t^{d+1}}\big) = \sum_{n=0}^{\infty} \frac{(-a)^n}{n!}\, \frac{\gamma_{\mu_1}(n(d+1)+m)}{\gamma_{\mu_2}(n(d+1)+m)}\, t^{n(d+1)+m}.
\]
Thus, by using (3.1), (4.1) and sum manipulation, we obtain
\[
\sum_{n=m}^{\infty} \frac{m!}{n!}\, C_m(n)\, t^n = e^{b t^{d+1}}\, \theta\big(t^m e^{-a t^{d+1}}\big)
= \sum_{i=0}^{\infty} \frac{1}{i!} \sum_{k=0}^{i} \binom{i}{k} \frac{\gamma_{\mu_1}(k(d+1)+m)}{\gamma_{\mu_2}(k(d+1)+m)}\, b^{i-k} (-a)^k\, t^{i(d+1)+m}.
\]
Therefore, for $n = i(d+1)+m$, the desired result holds. □

We note that for the particular case $\mu_1 = \mu_2$, (4.3) reduces to
\[
C_{n-i(d+1)}(n) = \frac{n!\,(b-a)^i}{i!\,(n-i(d+1))!}, \qquad 0 \le i \le \left[\frac{n}{d+1}\right].
\]
For the connection coefficients obtained in Theorem 4.1, we have the following result.

Proposition 4.2. For $\mu_2 > \mu_1 > -\frac12$, the connection coefficient given by (4.3) has the following integral representation:
\[
C_{n-i(d+1)}(n) = \frac{n!\, \beta^{-1}(\mu_1+\frac12,\, \mu_2-\mu_1)}{i!\,(n-i(d+1))!}
\int_{-1}^{1} t^{n-i(d+1)}\, |t|^{2\mu_1}\, \big(b - a t^{d+1}\big)^i\, \frac{(1-t^2)^{\mu_2-\mu_1}}{1-t}\, dt.
\]

Proof. Using Proposition 2.4 with $f(x) = x^{n-k(d+1)}$ and $x = 1$, we obtain
\[
\frac{\gamma_{\mu_1}(n-k(d+1))}{\gamma_{\mu_2}(n-k(d+1))}
= \frac{1}{\beta(\mu_1+\frac12,\, \mu_2-\mu_1)} \int_{-1}^{1} t^{n-k(d+1)}\, |t|^{2\mu_1}\, \frac{(1-t^2)^{\mu_2-\mu_1}}{1-t}\, dt.
\]
Substituting the above equation in (4.3), we get
\[
C_{n-i(d+1)}(n) = \frac{n!}{i!\,(n-i(d+1))!}\, \frac{1}{\beta(\mu_1+\frac12,\, \mu_2-\mu_1)}
\int_{-1}^{1} t^{n}\, |t|^{2\mu_1}\, \frac{(1-t^2)^{\mu_2-\mu_1}}{1-t} \left[\sum_{k=0}^{i} \binom{i}{k} (-a)^{i-k} \Big(\frac{b}{t^{d+1}}\Big)^{k}\right] dt,
\]
from which the desired result follows. □

Next, we give some specific expansion relations associated to the GGHPS.

• Explicit and inversion formulas: the following explicit expression and inversion formula for $\{Q^{(d+1)}_n(\cdot, a, \mu)\}_{n\ge0}$ can be easily derived from (3.3):
\[
Q^{(d+1)}_n(x, a, \mu) = n! \sum_{k=0}^{[\frac{n}{d+1}]} \frac{a^k}{k!\, \gamma_{\mu}(n-(d+1)k)}\, x^{n-(d+1)k}, \tag{4.4}
\]
and
\[
\frac{x^n}{\gamma_{\mu}(n)} = \sum_{k=0}^{[\frac{n}{d+1}]} \frac{(-a)^k}{k!\,(n-(d+1)k)!}\, Q^{(d+1)}_{n-(d+1)k}(x, a, \mu). \tag{4.5}
\]

• Addition and convolution relations:
\[
T^{\mu}_y\, Q^{(d+1)}_n(x, a, \mu) = \sum_{k=0}^{n} \frac{n!\, y^{n-k}}{k!\, \gamma_{\mu}(n-k)}\, Q^{(d+1)}_k(x, a, \mu), \tag{4.6}
\]
\[
2^{\frac{n}{d+1}}\, T^{\mu}_y\, Q^{(d+1)}_n\big(2^{\frac{-1}{d+1}} x, a, \mu\big)
= \sum_{k=0}^{n} \binom{n}{k}\, Q^{(d+1)}_k(y, a, \mu)\, Q^{(d+1)}_{n-k}(x, a, \mu), \tag{4.7}
\]
where $T^{\mu}_y = \exp_{\mu}(y D_{\mu})$.
For $\mu = 0$, this equation reduces to the well-known Gould-Hopper convolution-type relation [22], and for $m = 2$, $h = -1$, we recover the Runge formula for Hermite polynomials [29].

• Duplication formula:
\[
Q^{(d+1)}_n(\alpha x, a, \mu) = n! \sum_{k=0}^{[\frac{n}{d+1}]} \frac{\alpha^{n-k(d+1)}\, (1-\alpha^{d+1})^k\, a^k}{(n-k(d+1))!\; k!}\, Q^{(d+1)}_{n-k(d+1)}(x, a, \mu), \qquad \alpha \neq 0.
\]

4.2. Linearization Formula
Taking into account the $(d+1)$-fold symmetry property of the GGHPS, any LC $L_{ij}(k)$ in (1.2) vanishes when $k \neq i+j-r(d+1)$. Thus, according to (3.12), the corresponding LC is given by
\[
L_{ij}\big(i+j-r(d+1)\big) = \frac{i!\,j!}{(i+j-r(d+1))!}
\sum_{n=0}^{[\frac{i}{d+1}]} \sum_{m=0}^{[\frac{j}{d+1}]}
\frac{a_1^n\, a_2^m\, (-a_3)^{r-m-n}}{n!\,m!\,(r-m-n)!}\;
\frac{\gamma_{\mu_3}\big(i+j-(m+n)(d+1)\big)}{\gamma_{\mu_1}\big(i-n(d+1)\big)\, \gamma_{\mu_2}\big(j-m(d+1)\big)}, \qquad 0 \le r \le \left[\frac{i+j}{d+1}\right].
\]
We remark that there is no difficulty in proving the corresponding formula for the linearization of an arbitrary number of GGHPSs. We have:
\[
\prod_{s=1}^{N} Q^{(d+1)}_{i_s}(x, a_s, \mu_s)
= \sum_{r=0}^{[\frac{i_1+\cdots+i_N}{d+1}]} \frac{i_1! \cdots i_N!}{(i_1+\cdots+i_N-r(d+1))!}
\sum_{s_1=0}^{[\frac{i_1}{d+1}]} \cdots \sum_{s_N=0}^{[\frac{i_N}{d+1}]}
\frac{a_1^{s_1} \cdots a_N^{s_N}\, (-a_{N+1})^{r-s_1-\cdots-s_N}}{s_1! \cdots s_N!\,(r-s_1-\cdots-s_N)!}\;
\frac{\gamma_{\mu_{N+1}}\big(i_1+\cdots+i_N-(d+1)(s_1+\cdots+s_N)\big)}{\gamma_{\mu_1}\big(i_1-(d+1)s_1\big) \cdots \gamma_{\mu_N}\big(i_N-(d+1)s_N\big)}\;
Q^{(d+1)}_{i_1+\cdots+i_N-r(d+1)}(x, a_{N+1}, \mu_{N+1}).
\]

4.3. Generalized Hermite Polynomials
The generalized Hermite polynomials $\{H^{\mu}_n\}_{n\ge0}$ were introduced by Szegö [30], then investigated by Chihara in his PhD thesis [15], and further studied by many other authors [11, 28]. They are generated by:
\[
e^{-t^{2}} \exp_{\mu}(2xt) = \sum_{n=0}^{\infty} \frac{H^{\mu}_n(x)}{n!}\, t^n, \qquad \mu \neq -\tfrac12, -\tfrac32, -\tfrac52, \dots. \tag{4.8}
\]

Proposition 4.3. The following connection relation holds:
\[
\widetilde{H}^{\mu_2}_n(x) = \sum_{k=0}^{[n/2]} \frac{(-1)^k\, 4^k}{k!}\, (\mu_2-\mu_1)_k\; \widetilde{H}^{\mu_1}_{n-2k}(x), \qquad \mu_2 > \mu_1 > -\tfrac12, \tag{4.9}
\]
where $\{\widetilde{H}^{\mu_i}_n\}_n$, $i = 1, 2$, are the normalized generalized Hermite PSs given by
\[
\widetilde{H}^{\mu_i}_n(x) = \frac{\gamma_{\mu_i}(n)}{n!\, \big[\frac{n}{2}\big]!}\, H^{\mu_i}_n(x).
\]

Proof. From what has already been stated, the connection coefficients from $\{H^{\mu_2}_n\}_n$ to $\{H^{\mu_1}_n\}_n$ are generated by
\[
e^{-t^2}\, \theta\big(t^m e^{t^2}\big) = \sum_{n=m}^{\infty} \frac{m!}{n!}\, C_m(n)\, t^n,
\]
where $\theta$ is the operator defined in (2.25).
Making use of the $\theta$-integral representation (2.26), splitting the interval of integration at $0$, we get:
\[
\sum_{n=m}^{\infty} \frac{m!}{n!}\, C_m(n)\, t^n
= \frac{t^m e^{-t^2}}{\beta(\mu_1+\frac12,\, \mu_2-\mu_1)}
\int_{0}^{1} e^{t^2 s^2}\, s^{m+2\mu_1}\, (1-s^2)^{\mu_2-\mu_1} \left[\frac{1}{1-s} + \frac{(-1)^m}{1+s}\right] ds.
\]
It follows, for $m$ even and after substituting $u = s^2$, that
\[
\sum_{n=m}^{\infty} \frac{m!}{n!}\, C_m(n)\, t^n
= \frac{t^m e^{-t^2}}{\beta(\mu_1+\frac12,\, \mu_2-\mu_1)}
\int_{0}^{1} e^{u t^2}\, u^{\frac{m-1}{2}+\mu_1}\, (1-u)^{\mu_2-\mu_1-1}\, du
= \sum_{n=0}^{\infty} \frac{(-1)^n}{n!}\, \frac{\beta(\mu_1+\frac{m+1}{2},\, \mu_2-\mu_1+n)}{\beta(\mu_1+\frac12,\, \mu_2-\mu_1)}\, t^{m+2n},
\]
where the term-by-term integration is justified by the same argument as in the proof of Proposition 2.3.
On the other hand, we have (with $m = 2k$)
\[
\frac{\beta(\mu_1+\frac12+k,\, \mu_2-\mu_1+n)}{\beta(\mu_1+\frac12,\, \mu_2-\mu_1)}
= \frac{\Gamma(\mu_1+\frac12+k)\,\Gamma(\mu_2-\mu_1+n)\,\Gamma(\mu_2+\frac12)}{\Gamma(\mu_2+n+k+\frac12)\,\Gamma(\mu_1+\frac12)\,\Gamma(\mu_2-\mu_1)}
= \frac{\gamma_{\mu_1}(2k)}{2^{2k}\,k!}\; \frac{2^{2(k+n)}\,(k+n)!}{\gamma_{\mu_2}(2(k+n))}\; (\mu_2-\mu_1)_n
= \frac{\gamma_{\mu_1}(m)}{\gamma_{\mu_2}(m+2n)}\; \frac{4^n\,\big(\big[\frac m2\big]+n\big)!}{\big[\frac m2\big]!}\; (\mu_2-\mu_1)_n.
\]
Thus, by virtue of (2.17) and (2.27), we obtain, for $m$ even,
\[
\sum_{n=m}^{\infty} \frac{m!}{n!}\, C_m(n)\, t^n
= \sum_{n=0}^{\infty} \frac{(-1)^n}{n!}\, \frac{\gamma_{\mu_1}(m)}{\gamma_{\mu_2}(m+2n)}\, \frac{4^n\,\big(\big[\frac m2\big]+n\big)!}{\big[\frac m2\big]!}\, (\mu_2-\mu_1)_n\, t^{m+2n}.
\]
For $m$ odd, similar computations lead to the same expansion, so the formula above holds for all $m = 0, 1, 2, 3, \dots$.
Thus, for $k = 0, 1, 2, \dots, \big[\frac n2\big]$, we get
\[
C_{n-2k}(n) = \frac{(-1)^k}{k!}\, \frac{n!}{(n-2k)!}\, \frac{4^k\,\big[\frac n2\big]!}{\big[\frac n2 - k\big]!}\, \frac{\gamma_{\mu_1}(n-2k)}{\gamma_{\mu_2}(n)}\, (\mu_2-\mu_1)_k. \qquad \square
\]
We note that the connection coefficients in (4.9) alternate in sign, and that this relation was already derived in [14], where the authors used a computer algebra approach based on Zeilberger's algorithm.

References
[1] Abd-Elhameed, W., Badah, B.M.: New approaches to the general linearization problem of Jacobi polynomials based on moments and connection formulas. Mathematics 9, 1–28 (2021)
[2] Area, I., Godoy, E., Rodal, J., Ronveaux, A., Zarzo, A.: Bivariate Krawtchouk polynomials: Inversion and connection problems with the NAVIMA algorithm. J. Comput. Appl. Math. 284, 50–57 (2015)
[3] Askey, R.: Orthogonal Polynomials and Special Functions. CBMS-NSF Regional Conference Series in Appl. Math., vol. 21. SIAM, Philadelphia, Pennsylvania (1975)
[4] Askey, R., Gasper, G.: Jacobi polynomial expansions of Jacobi polynomials with non-negative coefficients. Proc. Camb. Phil. Soc. 70, 243–255 (1971)
[5] Ben Cheikh, Y.: Some results on quasi-monomiality. Appl. Math. Comput. 141, 63–76 (2003)
[6] Ben Cheikh, Y., Chaggara, H.: Connection coefficients between Boas-Buck polynomial sets. J. Math. Anal. Appl.
319, 665–689 (2005)
[7] Ben Cheikh, Y., Gaied, M.: Dunkl-Appell d-orthogonal polynomials. Integral Transforms Spec. Funct. 18, 581–597 (2007)
[8] Ben Romdhane, N.: A general theorem on inversion problems for polynomial sets. Med. J. Math. 13, 2783–2793 (2016)
[9] Brenke, W.: On generating functions of polynomial systems. Amer. Math. Monthly 52, 297–301 (1945)
[10] Carlitz, L.: Products of Appell polynomials. Collect. Math. 112, 133–138 (1963)
[11] Chaggara, H.: Operational rules and a generalized Hermite polynomials. J. Math. Anal. Appl. 332, 11–21 (2007)
[12] Chaggara, H.: Quasi-monomiality and linearization coefficients for Sheffer polynomial sets. Difference Equations, Special Functions, and Orthogonal Polynomials, pp. 90–99 (2007)
[13] Chaggara, H., Mabrouk, M.: Linearization coefficients for some basic hypergeometric polynomials. J. Math. 2022, 12 pages (2022)
[14] Chaggara, H., Koepf, W.: On linearization and connection coefficients for generalized Hermite polynomials. J. Math. Anal. Appl. 236, 65–73 (2011)
[15] Chihara, T.: Generalized Hermite polynomials. Ph.D. thesis, Purdue University (1955)
[16] Chihara, T.: Orthogonal polynomials with Brenke type generating functions. Duke Math. J. 35, 505–517 (1968)
[17] Chihara, T.: An Introduction to Orthogonal Polynomials. Gordon and Breach, New York, London, Paris (1978)
[18] Dehesa, J., Martinez-Finkelshtein, A., Sánchez-Ruiz, J.: Quantum information entropies and orthogonal polynomials. J. Comput. Appl. Math. 133, 23–46 (2001)
[19] Di Bucchianico, A., Loeb, D.E.: Operator expansion in the derivative and multiplication by x. Integral Transforms Spec. Funct. 4, 49–68 (1996)
[20] Dunkl, C.: Integral kernels with reflection group invariance. Canad. J. Math. 43, 1213–1227 (1991)
[21] Gasper, G.: Linearization of the product of Jacobi polynomials. Canad. J. Math. 22, 171–175 (1970)
[22] Gould, H., Hopper, A.T.: Operational formulas connected with two generalizations of Hermite polynomials. Duke Math. J. 29, 51–63 (1962)
[23] Koornwinder, T.: Compact quantum groups and q-special functions. 311, 46–128 (1994)
[24] Maroni, P., Da Rocha, Z.: Connection coefficients for orthogonal polynomials: symbolic computations, verification, and demonstrations in the Mathematica language. Numer. Algor. 63, 507–520 (2013)
[25] Asai, N., Kubo, I., Kuo, H.H.: The Brenke type generating functions and explicit forms of MRM-triples by means of q-hypergeometric series. Infin. Dimens. Anal. Quantum Probab. Relat. Top. 16, 27 pages (2013)
[26] Opdam, E.M.: Dunkl operators, Bessel functions and the discriminant of a finite Coxeter group. Compos. Math. 85, 333–373 (1993)
[27] Rainville, E.: Special Functions. The Macmillan Company, New York (1960)
[28] Rosenblum, M.: Generalized Hermite polynomials and the Bose-like oscillator calculus. Oper. Theory Adv. Appl. 73, 369–396 (1994)
[29] Runge, C.: Über eine besondere Art von Integralgleichungen. Math. Ann. 75, 130–132 (1914)
[30] Szegö, G.: Orthogonal Polynomials, 4th edn. Amer. Math. Soc. Colloq. Publ., vol. 23. Amer. Math. Soc., New York (1975)
[31] Szwarc, R.: Convolution structures associated with orthogonal polynomials. J. Math. Anal. Appl. 170, 158–170 (1992)
[32] Tcheutia, D., Foupouagnigni, M., Koepf, W., Sadjang, N.N.: Coefficients of multiplication formulas for classical orthogonal polynomials. Ramanujan J., pp. 1–35 (2015)
[33] Varma, S., Sezgin, S., İçöz, G.: Generalization of Szász operators involving Brenke type polynomials. Comput. Math. Appl. 64, 121–127 (2012)
[34] Wani, S., Mursaleen, M., Nisar, K.S.: Certain approximation properties of Brenke polynomials using Jakimovski-Leviatan operators. J. Inequal. Appl.
64, 1–16 (2021)

Hamza Chaggara
Mathematics Department, College of Science, King Khalid University, Abha, Kingdom of Saudi Arabia / Département de Mathématiques, École Supérieure des Sciences et de la Technologie, Sousse University, Tunisia.
e-mail: hshaggara@kku.edu.sa / hamza.chaggara@ipeim.rnu.tn

Abdelhamid Gahami
Département de Mathématiques, Institut Préparatoire aux Études d'Ingénieur, Sfax University, Tunisia.
e-mail: aelgahami@yahoo.fr

Neila Ben Romdhane
Département de Mathématiques, École Supérieure des Sciences et de la Technologie, Sousse University, Tunisia.
e-mail: neila.benromdhane@ipeim.rnu.tn

diff --git a/D9FRT4oBgHgl3EQfxziA/content/tmp_files/load_file.txt b/D9FRT4oBgHgl3EQfxziA/content/tmp_files/load_file.txt
new file mode 100644
index 0000000000000000000000000000000000000000..f8bb6d5d710df2046158d77d639b21439f0e556f
--- /dev/null
+++ b/D9FRT4oBgHgl3EQfxziA/content/tmp_files/load_file.txt
@@ -0,0 +1,889 @@
filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf, len=888
arXiv:2301.13643v1 [math.CA] 31 Jan 2023

Some Expansion Formulas for Brenke Polynomial Sets
Hamza Chaggara, Abdelhamid Gahami and Neila Ben Romdhane
Last Revised: February 1, 2023

Abstract. In this paper, we derive some explicit expansion formulas associated to Brenke polynomials using operational rules based on their corresponding generating functions.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' The obtained coefficients are ex- pressed either in terms of finite double sums or finite sums or sometimes in closed hypergeometric terms.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' The derived results are applied to Gen- eralized Gould-Hopper polynomials and Generalized Hermite polynomi- als introduced by Szeg¨o and Chihara.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' Some well-known duplication and convolution formulas are deduced as particular cases.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' Mathematics Subject Classification (2010).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' 33C45, 41A10, 41A58.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' Keywords.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' Brenke polynomials, Connection coefficients, Generalized Gould-Hopper polynomials, Generalized Hermite polynomials, Generat- ing functions, Linearization coefficients.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' Contents 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' Introduction 2 2.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' Operators Associated to Brenke PSs 4 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' Transfer Operator Associated to two Brenke Polynomials 4 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' XD-Expansion of the Operator θ 5 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content='3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' Examples 5 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content='3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' Hypergeometric Transformation 6 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content='3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' Particular Hypergeometric Transformation 7 2.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content='3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content='3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' Dunkl Operator on the Real Line 8 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' Connection and Linearization Problems 9 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' Connection Problem 9 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' Explicit Expression of the Connection Coefficients 10 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' Connection between two Db-Appell PSs 11 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content='1.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content='3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' Addition and Convolution Type Formulas 11 2 H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' Chaggara, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' Gahami and N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' Ben Romdhane 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content='4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' Duplication Formula 11 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' Linearization Problems 12 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' Appell Polynomials 13 3.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' Explicit Expression of the LC 13 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' Application to Generalized Gould-Hopper Polynomial Set 14 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' Connection Problem 14 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' Linearization Formula 16 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content='3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' Generalized Hermite Polynomials 17 References 18 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' Introduction Let P be the vector space of polynomials with coefficients in C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' A polynomial sequence in P is called polynomial set (PS for short) if deg Pn = n, for all n.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' The connection and linearization problems are defined as follows.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' Given two PSs {Pn}n≥0 and {Qn}n≥0, the so-called connection problem be- tween them asks to find the coefficients Cm(n), called connection coefficients CC, in the expression Qn(x) = n � m=0 Cm(n)Pm(x).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content='1) The particular cases Qn(x) = xn and Qn(x) = Pn(ax), a ̸= 0, in (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content='1) are known, respectively, as the inversion formula for {Pn}n≥0 and the duplication or multiplication formula associated with {Pn}n≥0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' Given three PSs {Pn}n≥0, {Rn}n≥0 and {Sn}n≥0, then for Qi+j(x) = Ri(x)Sj(x) in (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content='1) we are faced to the general linearization problem Ri(x)Sj(x) = i+j � k=0 Lij(k)Pk(x).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content='2) The coefficients Lij(k) are called linearization coefficients LC.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' The particular case of this problem, Pn = Rn = Sn, is known as the standard linearization problem or Clebsch-Gordan-type problem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' The computation and the positivity of the aforementioned coefficients play important roles in many situations of pure and applied mathemat- ics ranging from combinatorics and statistical mechanics to group theory [4, 21, 23].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' Therefore, different methods have been developed in the litera- ture and several sufficient conditions for the sign properties to hold have been derived in [3, 31], using for this purpose specific properties of the in- volved polynomials such as orthogonality, generating functions, inversion for- mulas, hypergeometric expansion formulas, recurrence relations, algorithmic approaches, inverse relations,.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' (see e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' [1, 2, 8, 13, 24, 32]).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' In particular, a general method, based on operational rules and generating functions, was Expansion Formulas for Brenke Polynomials 3 developed for polynomial sets with equivalent lowering operators and with Boas-Buck generating functions [6,12,14].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' In this paper, we deeply discuss both the connection and the lineariza- tion problems when the involved polynomials are of Brenke type.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' These poly- nomials are defined by their exponential generating functions as follows [9,17] A(t)B(xt) = ∞ � n=0 Pn(x) n!' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' tn, (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content='3) where A and B are two formal power series satisfying: A(t) = ∞ � k=0 aktk, B(t) = ∞ � k=0 bktk, a0bk ̸= 0, ∀k ∈ N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' (1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content='4) Brenke PSs are reduced to Appell ones when B = exp and they gener- ated many well-known polynomials in the literature, namely monomials, Hermite, Laguerre, Gould-Hopper, Generalized Hermite, Generalized Gould- Hopper, Appell-Dunkl, d-Hermite, d-Laguerre, Bernoulli, Euler, Al-Salam- Carlitz, Little q-Laguerre, q-Laguerre, discrete q-Hermite PSs,.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' These polynomials appear in many areas of mathematics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' In particular, in the framework of the standard orthogonality of polynomials, an exhaustive classification of all Brenke orthogonal polynomials was established by Chihara in [16].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' Furthermore, Brenke polynomials play a central role in [25], where the authors determined all MRM-triples associated with Brenke-type gener- ating functions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' Further, the positive approximation process discovered by Korovkin, a powerful criterion in order to decide whether a given sequence of positive linear operators on the space of continuous functions converges uni- formly in this space, plays a central role and arises naturally in many problems connected with functional analysis, harmonic analysis, measure theory, par- tial differential equations, and probability theory.' 
The most useful examples of such operators are Szász operators, and many authors obtained generalizations of these operators using Brenke polynomials (see [33, 34] and the references therein).

This paper is organized as follows. In Section 2, we define the transfer linear operator between two Brenke polynomials, which is illustrated by three interesting examples, in particular the hypergeometric transformation and the Dunkl operator on the real line. Then, in Section 3, we derive expansion formulas associated to Brenke polynomials using operational rules, and we give connection, linearization, inversion, duplication, and addition formulas corresponding to these polynomials. The obtained coefficients are expressed using generating functions involving the associated transfer linear operators. Finally, in Section 4, we apply our obtained results to both the Generalized Gould-Hopper PS (GGHPS) and the Generalized Hermite PS (or Szegő-Chihara PS), and we recover many known formulas as special cases.

H. Chaggara, A. Gahami and N. Ben Romdhane

2. Operators Associated to Brenke PSs

In this section, first, we introduce a transfer operator between two Brenke families, then we state its expression as an infinite series in the derivative operator D and the multiplication operator X, known as the XD-expansion [19]. Finally, we give some examples.

2.1. Transfer Operator Associated to two Brenke Polynomials

Any Brenke PS {P_n}_{n≥0} generated by (1.3) is D_b-Appell of transfer power series A, where A and b = (b_n) are defined in (1.4). That is,

D_b P_{n+1} = (n+1) P_n \quad \text{and} \quad A(D_b)(b_n x^n) = \frac{P_n}{n!}, \quad n = 0, 1, 2, \ldots,   (2.1)

where D_b denotes the linear operator on P defined by [6]:

D_b(1) = 0, \quad D_b(x^n) = \frac{b_{n-1}}{b_n}\, x^{n-1}, \quad n = 1, 2, \ldots   (2.2)

The operator D_b is known as the lowering operator for the PS {P_n}_{n≥0}, while A is the associated transfer series. (For more details, see [5].)
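The relations (2.1)-(2.2) can be checked exactly on a concrete Brenke family. Below is a minimal sketch, assuming the illustrative choices A(t) = e^t (so a_k = 1/k!) and B(t) = 1/(1-t) (so b_k = 1), neither of which is taken from the paper; it builds P_n(x) from the coefficient of t^n in A(t)B(xt) as in (1.3), applies the lowering operator D_b of (2.2), and compares with (n+1)P_n.

```python
from fractions import Fraction
from math import factorial

# Illustrative Brenke pair (assumed, not from the paper):
# A(t) = e^t  -> a_k = 1/k!,   B(t) = 1/(1-t) -> b_k = 1,
# so a_0 * b_k != 0 as (1.4) requires.
def a(k): return Fraction(1, factorial(k))
def b(k): return Fraction(1)

def P(n):
    """Coefficient list [c_0, ..., c_n] of P_n(x): by (1.3), P_n(x)/n!
    is the coefficient of t^n in A(t)B(xt), i.e. sum_k a_{n-k} b_k x^k."""
    return [factorial(n) * a(n - k) * b(k) for k in range(n + 1)]

def D_b(coeffs):
    """Lowering operator (2.2): D_b(x^n) = (b_{n-1}/b_n) x^{n-1}, D_b(1) = 0."""
    return [coeffs[n] * b(n - 1) / b(n) for n in range(1, len(coeffs))]

# Check the Appell-type relation D_b P_{n+1} = (n+1) P_n from (2.1).
for n in range(6):
    assert D_b(P(n + 1)) == [(n + 1) * c for c in P(n)]
print("lowering relation (2.1) verified")
```

Any other pair (A, B) with a_0 b_k ≠ 0 would do; exact rational arithmetic avoids floating-point noise in the comparison.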
Let {P_n}_{n≥0} and {Q_n}_{n≥0} be two Brenke PSs generated respectively by:

A_1(t)B_1(xt) = \sum_{n=0}^{\infty} \frac{P_n(x)}{n!}\, t^n \quad \text{and} \quad A_2(t)B_2(xt) = \sum_{n=0}^{\infty} \frac{Q_n(x)}{n!}\, t^n,   (2.3)

where, for i = 1, 2,

A_i(t) = \sum_{k=0}^{\infty} a_k^{(i)} t^k, \quad B_i(t) = \sum_{k=0}^{\infty} b_k^{(i)} t^k, \quad a_0^{(i)} b_k^{(i)} \neq 0, \ \forall k \in \mathbb{N}.   (2.4)

Then, the corresponding operators D_{b^{(1)}} and D_{b^{(2)}} are related by:

D_{b^{(2)}}\, \theta = \theta\, D_{b^{(1)}},   (2.5)

where θ is the bijective linear operator from P onto P (isomorphism of P) acting on monomials as follows:

\theta(x^n) = \frac{b_n^{(2)}}{b_n^{(1)}}\, x^n \quad \text{and} \quad \theta^{-1}(x^n) = \frac{b_n^{(1)}}{b_n^{(2)}}\, x^n.   (2.6)

The linear operator θ can be extended as a transfer operator taking any formal power series to another formal power series as follows:

\theta\Big(\sum_{n \geq 0} a_n x^n\Big) = \sum_{n \geq 0} a_n\, \theta(x^n),   (2.7)

and if φ(x) denotes a formal power series, then one can easily check that

\theta\Big(\varphi(x) \sum_{k=0}^{\infty} a_k x^k\Big) = \sum_{k=0}^{\infty} a_k\, \theta(\varphi(x)\, x^k).   (2.8)

Hence, it is obvious that

\theta(B_1(x)) = B_2(x).   (2.9)

The operator θ will be called the transfer operator from B_1 to B_2, or the transfer operator from {P_n}_{n≥0} to {Q_n}_{n≥0}.

2.2. XD-Expansion of the Operator θ

Now, recall that any operator L acting on formal power series has the following formal expansion, known as the XD-expansion (see [19] and the references therein):

L = \sum_{k=0}^{\infty} A_k(X)\, D^k,   (2.10)

where D denotes the ordinary differentiation operator and {A_k(x)}_{k≥0} is a polynomial sequence such that:

L\, e^{xt} = \sum_{k=0}^{\infty} A_k(x)\, t^k e^{xt}.   (2.11)

We note that the infinite sum (2.10) is always well defined on P since, when applied to any given polynomial, only a finite number of terms makes a nonzero contribution. The XD-expansion of the transfer operator θ is explicitly given by

Proposition 2.1. The operator θ defined by (2.6) has the formal expansion:

\theta = \sum_{k=0}^{\infty} \frac{\varphi_k}{k!}\, X^k D^k,   (2.12)

where

\varphi_k = (-1)^k \sum_{m=0}^{k} \frac{(-k)_m}{m!}\, \frac{b_m^{(2)}}{b_m^{(1)}}.

Proof. By using (2.6) and (2.7) and then substituting L by θ in (2.11), we obtain

\theta(e^{xt}) = \sum_{k=0}^{\infty} \frac{b_k^{(2)}}{b_k^{(1)}}\, \frac{(xt)^k}{k!} = \sum_{k=0}^{\infty} A_k(x)\, t^k e^{xt}.

Therefore,

\sum_{k=0}^{\infty} A_k(x)\, t^k = e^{-xt} \sum_{k=0}^{\infty} \frac{b_k^{(2)}}{b_k^{(1)}}\, \frac{(xt)^k}{k!} = \sum_{k=0}^{\infty} \Big( \sum_{m=0}^{k} (-1)^k\, \frac{(-k)_m}{m!}\, \frac{b_m^{(2)}}{b_m^{(1)}} \Big) \frac{(xt)^k}{k!},

which establishes the desired result. □
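Proposition 2.1 can be tested exactly on monomials: since X^k D^k x^n = (n!/(n-k)!) x^n, the expansion (2.12) predicts \sum_{k=0}^{n} (\varphi_k/k!)\, n!/(n-k)! = b_n^{(2)}/b_n^{(1)}. The sketch below assumes the illustrative sequences b_n^{(1)} = 1 and b_n^{(2)} = 1/n! (chosen only for the check, not taken from the paper):

```python
from fractions import Fraction
from math import factorial

# Illustrative sequences (any with b1(0)*b2(k) != 0 would do):
def b1(n): return Fraction(1)                  # B_1(t) = 1/(1-t)
def b2(n): return Fraction(1, factorial(n))    # B_2(t) = exp(t)

def poch(a, m):
    """Pochhammer symbol (a)_m = a(a+1)...(a+m-1), with (a)_0 = 1."""
    out = Fraction(1)
    for j in range(m):
        out *= a + j
    return out

def phi(k):
    """phi_k from Proposition 2.1."""
    return (-1) ** k * sum(poch(Fraction(-k), m) / factorial(m) * b2(m) / b1(m)
                           for m in range(k + 1))

# theta(x^n) = (b2(n)/b1(n)) x^n, while X^k D^k x^n = n!/(n-k)! * x^n,
# so (2.12) applied to x^n must reproduce the ratio b2(n)/b1(n).
for n in range(8):
    via_xd = sum(phi(k) / factorial(k) * Fraction(factorial(n), factorial(n - k))
                 for k in range(n + 1))
    assert via_xd == b2(n) / b1(n)
print("XD-expansion (2.12) verified on monomials")
```

The sum over k truncates at k = n because D^k annihilates x^n for k > n, which is exactly the remark following (2.11).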
2.3. Examples

Here, we consider three interesting particular cases of the linear operator θ associated to two Brenke PSs, and we essentially give integral representations for this operator.

2.3.1. Hypergeometric Transformation

Recall first that rFs denotes the generalized hypergeometric function with r numerator parameters and s denominator parameters, defined as follows:

{}_{r}F_{s}\left[ \begin{matrix} (\alpha_r) \\ (\beta_s) \end{matrix} ;\, x \right] = \sum_{k=0}^{\infty} \frac{(\alpha_1)_k (\alpha_2)_k \cdots (\alpha_r)_k}{(\beta_1)_k (\beta_2)_k \cdots (\beta_s)_k}\, \frac{x^k}{k!},   (2.13)

where the contracted notation (α_r) is used to abbreviate the array {α_1, \ldots, α_r}, and (α)_n denotes the Pochhammer symbol:

(\alpha)_n = \frac{\Gamma(\alpha + n)}{\Gamma(\alpha)}.   (2.14)

Consider two Brenke PSs {P_n}_{n≥0} and {Q_n}_{n≥0} generated by (2.3) and (2.4), and such that the corresponding transfer linear operator θ takes the form:

\theta(x^n) = \frac{b_n^{(2)}}{b_n^{(1)}}\, x^n = \frac{(\gamma_1)_n (\gamma_2)_n \cdots (\gamma_p)_n}{(\delta_1)_n (\delta_2)_n \cdots (\delta_p)_n}\, x^n, \quad \gamma_i \in \mathbb{C},\ \delta_i \in \mathbb{C} \setminus \{-\mathbb{N}\}.   (2.15)

In this case, for the action of the operator θ on hypergeometric functions, we have the following result.
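A quick numerical sketch of the ingredients of (2.13)-(2.15): the Pochhammer symbol computed as a product is checked against the Gamma-quotient form (2.14), and the monomial coefficients of θ in (2.15) with p = 1 are tabulated. The parameter values γ = 3/2 and δ = 7/2 are illustrative only.

```python
from math import gamma, isclose

def poch(a, n):
    """Pochhammer symbol (a)_n as a product: a(a+1)...(a+n-1)."""
    out = 1.0
    for j in range(n):
        out *= a + j
    return out

# (2.14): (alpha)_n = Gamma(alpha + n) / Gamma(alpha)
g, d = 1.5, 3.5  # illustrative gamma_1, delta_1 (p = 1 in (2.15))
for n in range(6):
    assert isclose(poch(g, n), gamma(g + n) / gamma(g))

# Monomial coefficients of theta in (2.15): theta(x^n) = ((g)_n/(d)_n) x^n
theta_coeff = [poch(g, n) / poch(d, n) for n in range(6)]
print(round(theta_coeff[1], 4))  # (3/2)/(7/2) = 3/7, i.e. 0.4286
```

For γ_i, δ_i off the real axis one would replace `math.gamma` with a complex-capable implementation; the product form of (α)_n needs no such care.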
Proposition 2.2. Let θ be defined by (2.15) with 0 < ℜ(γ_i) < ℜ(δ_i). Then, for r ≤ s + 1 and |x| < 1, we have

\theta\, {}_{r}F_{s}\left[ \begin{matrix} (\alpha_r) \\ (\beta_s) \end{matrix} ;\, x \right] = \prod_{i=1}^{p} \frac{1}{\beta(\gamma_i, \delta_i - \gamma_i)} \int_{]0,1[^p} \prod_{i=1}^{p} u_i^{\gamma_i - 1} (1 - u_i)^{\delta_i - \gamma_i - 1} \times {}_{r}F_{s}\left[ \begin{matrix} (\alpha_r) \\ (\beta_s) \end{matrix} ;\, x \prod_{i=1}^{p} u_i \right] du_1 \cdots du_p,   (2.16)

where β designates the usual Euler Beta function,

\beta(\gamma, \delta) = \int_0^1 t^{\gamma-1}(1-t)^{\delta-1}\, dt = \frac{\Gamma(\gamma)\Gamma(\delta)}{\Gamma(\gamma+\delta)}, \quad \Re(\gamma), \Re(\delta) > 0.   (2.17)

Proof. From (2.7) and (2.15), we have

\theta\, {}_{r}F_{s}\left[ \begin{matrix} (\alpha_r) \\ (\beta_s) \end{matrix} ;\, x \right] = {}_{p+r}F_{p+s}\left[ \begin{matrix} (\alpha_r), (\gamma_p) \\ (\beta_s), (\delta_p) \end{matrix} ;\, x \right].

Thus, by using the Euler integral representation of generalized hypergeometric functions, we obtain (see [27, p. 85]):

{}_{p+r}F_{p+s}\left[ \begin{matrix} (\alpha_r), (\gamma_p) \\ (\beta_s), (\delta_p) \end{matrix} ;\, x \right] = \frac{\Gamma(\delta_p)}{\Gamma(\gamma_p)\Gamma(\delta_p - \gamma_p)} \int_0^1 u_p^{\gamma_p - 1} (1 - u_p)^{\delta_p - \gamma_p - 1} \times {}_{p+r-1}F_{p+s-1}\left[ \begin{matrix} (\alpha_r), (\gamma_{p-1}) \\ (\beta_s), (\delta_{p-1}) \end{matrix} ;\, x u_p \right] du_p,

and after (p − 1) similar applications of the Euler integral representation we get the desired result. □

When the operator θ is given by (2.15), the coefficient φ_k in Proposition 2.1 is

\varphi_k = (-1)^k \sum_{m=0}^{k} \frac{(-k)_m}{m!}\, \frac{(\gamma_1)_m (\gamma_2)_m \cdots (\gamma_p)_m}{(\delta_1)_m (\delta_2)_m \cdots (\delta_p)_m} = (-1)^k\, {}_{p+1}F_{p}\left[ \begin{matrix} -k, \gamma_1, \gamma_2, \ldots, \gamma_p \\ \delta_1, \delta_2, \ldots, \delta_p \end{matrix} ;\, 1 \right].

Thus the corresponding XD-expansion is

\theta = \sum_{k=0}^{\infty} \frac{(-1)^k}{k!}\, {}_{p+1}F_{p}\left[ \begin{matrix} -k, \gamma_1, \gamma_2, \ldots, \gamma_p \\ \delta_1, \delta_2, \ldots, \delta_p \end{matrix} ;\, 1 \right] X^k D^k.   (2.18)

2.3.2. Particular Hypergeometric Transformation

Here, we consider the special case

\theta(x^n) = \frac{(\gamma)_n}{(\delta)_n}\, x^n, \quad \delta \neq 0, -1, -2, \ldots

Proposition 2.3. For any analytic function f on ]−1, 1[, f(x) = \sum_{n=0}^{\infty} a_n x^n, we have

\theta(f)(x) = \frac{1}{\beta(\gamma, \delta - \gamma)} \int_0^1 t^{\gamma-1}(1-t)^{\delta-\gamma-1} f(xt)\, dt, \quad 0 < \Re(\gamma) < \Re(\delta).   (2.19)

Moreover, the XD-expansion of θ is the following:

\theta = \sum_{k=0}^{\infty} \frac{(-1)^k}{k!}\, \frac{(\delta - \gamma)_k}{(\delta)_k}\, X^k D^k.   (2.20)

Proof. By using (2.14) and (2.17), we obtain

\frac{(\gamma)_n}{(\delta)_n}\, x^n = \frac{\Gamma(\gamma+n)}{\Gamma(\delta+n)}\, \frac{\Gamma(\delta)}{\Gamma(\gamma)}\, x^n = \frac{1}{\beta(\gamma, \delta-\gamma)} \int_0^1 t^{\gamma-1}(1-t)^{\delta-\gamma-1}(xt)^n\, dt.

Thus, substituting the above equation in (2.7), we obtain (2.19), since the term-by-term integration is justified by the convergence of the series

\sum_{n \geq 0} \int_0^1 \left| a_n t^{\gamma-1}(1-t)^{\delta-\gamma-1}(xt)^n \right| dt.

For (2.20), we use (2.18) and the Chu-Vandermonde reduction formula:

{}_{2}F_{1}\left[ \begin{matrix} -k, \gamma \\ \delta \end{matrix} ;\, 1 \right] = \frac{(\delta - \gamma)_k}{(\delta)_k}, \quad \delta \neq 0, -1, -2, \ldots   (2.21)

Thus the proof is completed. □
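The Chu-Vandermonde reduction (2.21), which closes this proof, can be confirmed exactly with rational arithmetic by summing the terminating series (2.13). The parameter values γ = 3/2 and δ = 7/2 below are illustrative:

```python
from fractions import Fraction
from math import factorial

def poch(a, m):
    """Pochhammer symbol (a)_m = a(a+1)...(a+m-1), with (a)_0 = 1."""
    out = Fraction(1)
    for j in range(m):
        out *= a + j
    return out

def f21_terminating(k, g, d):
    """Terminating 2F1(-k, g; d; 1): the series (2.13) truncates at m = k
    because (-k)_m vanishes for m > k."""
    return sum(poch(Fraction(-k), m) * poch(g, m) / (poch(d, m) * factorial(m))
               for m in range(k + 1))

# (2.21): 2F1(-k, gamma; delta; 1) = (delta - gamma)_k / (delta)_k
g, d = Fraction(3, 2), Fraction(7, 2)
for k in range(10):
    assert f21_terminating(k, g, d) == poch(d - g, k) / poch(d, k)
print("Chu-Vandermonde (2.21) verified")
```

The same loop with p + 1 Pochhammer products in the numerator would evaluate the coefficients (-1)^k {}_{p+1}F_p(-k, γ_1, ..., γ_p; δ_1, ..., δ_p; 1) appearing in (2.18).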
2.3.3. Dunkl Operator on the Real Line

The well-known Dunkl operator, D_µ, associated with the parameter µ on the real line, provides a useful tool in the study of special functions with root systems associated with finite reflection groups [20], and it is closely related to certain representations of degenerate affine Hecke algebras [26]. This operator is defined by [20]:

D_\mu(f)(x) = Df(x) + \frac{\mu}{x}\big(f(x) - f(-x)\big), \quad \mu \in \mathbb{C},   (2.22)

where f is a real-variable complex-valued function and D is the differentiation operator. The Dunkl operator acts on monomials as follows:

D_\mu(x^n) = \frac{\gamma_\mu(n)}{\gamma_\mu(n-1)}\, x^{n-1}, \quad \mu \neq -\frac{1}{2}, -\frac{3}{2}, \ldots,   (2.23)

where

\gamma_\mu(2p + \epsilon) = 2^{2p+\epsilon}\, p!\, \Big(\mu + \frac{1}{2}\Big)_{p+\epsilon}, \quad \epsilon = 0, 1.   (2.24)

Hence, D_µ is of D_b-operator type with b_n = 1/γ_µ(n), and we have the following result.

Proposition 2.4. Let µ_1 and µ_2 be two real numbers satisfying −1/2 < µ_1 < µ_2, and let θ be given by

\theta(x^n) = \frac{\gamma_{\mu_1}(n)}{\gamma_{\mu_2}(n)}\, x^n.   (2.25)

Then, for any analytic function f on ]−1, 1[, the following integral representation of θ holds true:

\theta(f)(x) = \frac{1}{\beta(\mu_1 + \frac{1}{2}, \mu_2 - \mu_1)} \int_{-1}^{1} f(xt)\, |t|^{2\mu_1} (1-t)^{\mu_2-\mu_1-1} (1+t)^{\mu_2-\mu_1}\, dt.   (2.26)

Proof. By using (2.14), (2.17) and (2.24), with µ replaced by µ_1 and µ_2, and for n = 2p + ǫ, ǫ = 0, 1, we obtain:

\frac{\gamma_{\mu_1}(n)}{\gamma_{\mu_2}(n)} = \frac{\beta(\mu_1 + \frac{1}{2} + p + \epsilon, \mu_2 - \mu_1)}{\beta(\mu_1 + \frac{1}{2}, \mu_2 - \mu_1)}.   (2.27)

Now, with the beta integral representation (2.17), we get

\beta\Big(\mu_1 + \frac{1}{2} + p + \epsilon, \mu_2 - \mu_1\Big) = \int_0^1 t^{\mu_1 + p + \epsilon - \frac{1}{2}} (1-t)^{\mu_2 - \mu_1 - 1}\, dt,

which, after the substitution u^2 = t and the distinction of the two cases ǫ = 0 and ǫ = 1, becomes

\beta\Big(\mu_1 + \frac{1}{2} + p + \epsilon, \mu_2 - \mu_1\Big) = \int_{-1}^{1} u^n |u|^{2\mu_1} (1-u)^{\mu_2-\mu_1-1} (1+u)^{\mu_2-\mu_1}\, du.

Consequently, this gives

\theta(x^n) = \frac{1}{\beta(\mu_1 + \frac{1}{2}, \mu_2 - \mu_1)} \int_{-1}^{1} (xt)^n |t|^{2\mu_1} (1-t)^{\mu_2-\mu_1-1} (1+t)^{\mu_2-\mu_1}\, dt,   (2.28)

and a term-by-term integration achieves the proof. □

The following two particular cases are worthy of note. For f = exp_{µ_1}, and according to (2.9), it is clear that θ(exp_{µ_1}) = exp_{µ_2}, where the generalized exponential function exp_µ is defined by [28]:

\exp_\mu(x) = \sum_{n=0}^{\infty} \frac{x^n}{\gamma_\mu(n)}, \quad \mu \neq -\frac{1}{2}, -\frac{3}{2}, -\frac{5}{2}, \ldots   (2.29)

So, for −1/2 < µ_1 < µ_2, and by virtue of (2.26), the following integral representation of exp_{µ_2} holds true [28, Eq. (2.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content='3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content='4)]: expµ2(x) = 1 β(µ1 + 1 2, µ2 − µ1)× � 1 −1 expµ1(xt)|t|2µ1(1 − t)µ2−µ1−1(1 + t)µ2−µ1 dt.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' For µ1 = 0 and µ2 = µ > 0, the transfer operator θ reduces to the well- known Dunkl intertwining operator Vµ in the one dimensional case and (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content='26) is nothing else that its corresponding integral representation [20, Theorem 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content='1]: Vµ(f)(x) = 1 β( 1 2, µ) � 1 −1 f(xt)(1 − t)µ−1(1 + t)µ dt.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' (2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content='30) 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' Connection and Linearization Problems In this section, we investigate connection and linearization formulas for Brenke PSs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content='1.' 
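As an aside, the monomial action (2.23) together with the definition (2.24) of $\gamma_\mu$ can be sanity-checked in a few lines of Python: for $f(x) = x^n$ one has $D_\mu(x^n) = \bigl(n + \mu(1-(-1)^n)\bigr)x^{n-1}$ directly from (2.22), and this coefficient must equal $\gamma_\mu(n)/\gamma_\mu(n-1)$. A minimal sketch (the helper names are ours, not from the paper):

```python
from math import prod

def gamma_mu(n, mu):
    # gamma_mu(2p + eps) = 2^(2p+eps) * p! * (mu + 1/2)_{p+eps},  eps in {0, 1}  -- (2.24)
    p, eps = divmod(n, 2)
    pochhammer = prod(mu + 0.5 + i for i in range(p + eps))  # rising factorial (mu + 1/2)_{p+eps}
    factorial_p = prod(range(1, p + 1))
    return 2 ** (2 * p + eps) * factorial_p * pochhammer

def dunkl_monomial_coeff(n, mu):
    # D_mu(x^n) = n x^(n-1) + (mu/x)(x^n - (-x)^n) = (n + mu*(1 - (-1)^n)) x^(n-1)
    return n + mu * (1 - (-1) ** n)

# check (2.23): D_mu(x^n) = gamma_mu(n)/gamma_mu(n-1) * x^(n-1)
for mu in (0.3, 1.7):
    for n in range(1, 8):
        lhs = dunkl_monomial_coeff(n, mu)
        rhs = gamma_mu(n, mu) / gamma_mu(n - 1, mu)
        assert abs(lhs - rhs) < 1e-9, (n, mu, lhs, rhs)
```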
3.1. Connection Problem

Next, for two polynomial sequences of Brenke type, we state a generating function for the connection coefficients using the operator $\theta$. This result appears to be new. Some applications are given.

Theorem 3.1. Let $\{P_n\}_{n\geq 0}$ and $\{Q_n\}_{n\geq 0}$ be two polynomial sequences generated by (2.3) and (2.4), and let $\theta$ be the corresponding transfer operator defined in (2.6). Then the CC in (1.1), $(C_m(n))_{n\geq m\geq 0}$, are generated by:
$$A_2(t)\,\theta\!\left(\frac{t^m}{A_1(t)}\right) = \sum_{n=m}^{\infty} \frac{m!}{n!}\, C_m(n)\, t^n. \tag{3.1}$$

Proof. On one hand, substituting (1.1) in (2.3) and using sum manipulations, we get:
$$A_2(t)B_2(xt) = \sum_{n=0}^{\infty} Q_n(x) \frac{t^n}{n!} = \sum_{n=0}^{\infty} \left( \sum_{m=0}^{n} C_m(n) P_m(x) \right) \frac{t^n}{n!} = \sum_{m=0}^{\infty} \left( \sum_{n=m}^{\infty} \frac{m!}{n!}\, C_m(n)\, t^n \right) \frac{P_m(x)}{m!}.$$
On the other hand, from (2.8), we have
$$A_2(t)B_2(xt) = A_2(t)\,\theta_t B_1(xt) = A_2(t)\,\theta_t\!\left( \frac{1}{A_1(t)} \sum_{m=0}^{\infty} P_m(x) \frac{t^m}{m!} \right) = \sum_{m=0}^{\infty} A_2(t)\,\theta_t\!\left( \frac{t^m}{A_1(t)} \right) \frac{P_m(x)}{m!}.$$
Thus (3.1) follows and the proof is completed. □

Some known results can be deduced from Theorem 3.1. Next, we quote four important ones.

3.1.1. Explicit Expression of the Connection Coefficients. Write
$$\frac{1}{A_1(t)} = \sum_{n=0}^{\infty} \tilde{a}^{(1)}_n t^n, \quad \text{then} \quad \theta_t\!\left( \frac{t^m}{A_1(t)} \right) = \sum_{n=0}^{\infty} \frac{b^{(2)}_{n+m}}{b^{(1)}_{n+m}}\, \tilde{a}^{(1)}_n\, t^{n+m}.$$
By virtue of (3.1), we get:
$$\sum_{n=m}^{\infty} \frac{m!}{n!}\, C_m(n)\, t^n = \left( \sum_{n=0}^{\infty} a^{(2)}_n t^n \right) \left( \sum_{n=0}^{\infty} \frac{b^{(2)}_{n+m}}{b^{(1)}_{n+m}}\, \tilde{a}^{(1)}_n\, t^{n+m} \right) = t^m \sum_{n=0}^{\infty} \left( \sum_{k=0}^{n} a^{(2)}_k\, \frac{b^{(2)}_{n+m-k}}{b^{(1)}_{n+m-k}}\, \tilde{a}^{(1)}_{n-k} \right) t^n = \sum_{n=m}^{\infty} \left( \sum_{k=0}^{n-m} \frac{b^{(2)}_{n-k}}{b^{(1)}_{n-k}}\, a^{(2)}_k\, \tilde{a}^{(1)}_{n-m-k} \right) t^n.$$
Thus,
$$C_m(n) = \frac{n!}{m!} \sum_{k=0}^{n-m} \frac{b^{(2)}_{n-k}}{b^{(1)}_{n-k}}\, a^{(2)}_k\, \tilde{a}^{(1)}_{n-m-k}, \qquad m = 0, \ldots, n. \tag{3.2}$$
In particular, we can deduce the explicit expansion and the inversion formula for any Brenke PS $\{P_n\}_{n\geq 0}$ generated by (1.3):
$$\frac{P_n(x)}{n!} = \sum_{m=0}^{n} b_m\, a_{n-m}\, x^m, \quad \text{and} \quad b_n x^n = \sum_{m=0}^{n} \tilde{a}_{n-m}\, \frac{P_m(x)}{m!}. \tag{3.3}$$
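The explicit expansion in (3.3) can be instantiated concretely. For the Hermite sequence generated by $e^{-t^2}e^{2xt}$ (see (3.10) below), one has $A(t) = e^{-t^2}$, so $a_{2j} = (-1)^j/j!$ and $a_{2j+1} = 0$, and $B(t) = e^{2t}$, so $b_n = 2^n/n!$. A short Python sketch (helper names are ours) compares $P_n(x)/n! = \sum_m b_m a_{n-m} x^m$ with the three-term recurrence $H_{k+1} = 2xH_k - 2kH_{k-1}$:

```python
from math import factorial

def hermite_coeffs(n):
    # coefficient list (index = power of x) via H_{k+1} = 2x H_k - 2k H_{k-1}
    h_prev, h = [1], [0, 2]                        # H_0 = 1, H_1 = 2x
    if n == 0:
        return h_prev
    for k in range(1, n):
        nxt = [0] + [2 * c for c in h]             # 2x * H_k
        for i, c in enumerate(h_prev):
            nxt[i] -= 2 * k * c                    # - 2k H_{k-1}
        h_prev, h = h, nxt
    return h

def a_coeff(k):
    # Taylor coefficients of A(t) = exp(-t^2)
    return (-1) ** (k // 2) / factorial(k // 2) if k % 2 == 0 else 0.0

def b_coeff(k):
    # Taylor coefficients of B(t) = exp(2t)
    return 2 ** k / factorial(k)

def hermite_via_expansion(n):
    # P_n(x)/n! = sum_{m=0}^{n} b_m a_{n-m} x^m   -- formula (3.3)
    return [factorial(n) * b_coeff(m) * a_coeff(n - m) for m in range(n + 1)]

for n in range(6):
    ref, got = hermite_coeffs(n), hermite_via_expansion(n)
    assert all(abs(r - g) < 1e-9 for r, g in zip(ref, got)), (n, ref, got)
```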
3.1.2. Connection between two $D_b$-Appell PSs. If $B_1 = B_2$ in (2.3), then by using (2.6), we obtain that the expression (3.1) takes the following simpler form [11]:
$$\frac{A_2(t)}{A_1(t)} = \sum_{n=m}^{\infty} \frac{m!}{n!}\, C_m(n)\, t^{n-m}. \tag{3.4}$$

3.1.3. Addition and Convolution Type Formulas. The Brenke PS $\{P_n\}_{n\geq 0}$ generated by (1.3) possesses the following generalized addition formula and convolution type relation:
$$T^b_y P_n(x) = \sum_{m=0}^{n} \frac{n!}{m!}\, b_{n-m}\, y^{n-m}\, P_m(x), \quad \text{and} \quad A(D_b)\, T^b_y P_n(x) = \sum_{m=0}^{n} \binom{n}{m} P_{n-m}(y)\, P_m(x),$$
where $T^b_y = B(yD_b)$ designates the generalized translation operator satisfying $T^b_y B(xt) = B(yt)B(xt)$.

In fact, for the addition formula, we remark that the PS $\{T^b_y P_n(x)\}_{n\geq 0}$ is generated by:
$$B(yt)A(t)B(xt) = \sum_{n=0}^{\infty} \frac{T^b_y P_n(x)}{n!}\, t^n,$$
then we apply (3.4) with $A_2(t) = B(yt)A(t)$ and $A_1(t) = A(t)$ to obtain $C_m(n) = \frac{n!}{m!}\, b_{n-m}\, y^{n-m}$.

For the convolution type relation, we apply the operator $A(D_b)$ to each member of the addition formula and we use (2.1). We have
$$A(D_b)\, T^b_y P_n(x) = \sum_{m=0}^{n} \frac{n!}{m!\,(n-m)!}\, A(D_b)\bigl((n-m)!\, b_{n-m}\, y^{n-m}\bigr)\, P_m(x) = \sum_{m=0}^{n} \binom{n}{m} P_{n-m}(y)\, P_m(x).$$

3.1.4. Duplication Formula. The Brenke PS generated by (1.3) possesses the following duplication formula [11]:
$$P_n(ax) = \sum_{m=0}^{n} \frac{n!}{m!}\, a^m\, \beta_{n-m}\, P_m(x), \qquad a \neq 0, \tag{3.5}$$
where
$$\frac{A(t)}{A(at)} = \sum_{k=0}^{\infty} \beta_k t^k.$$
In fact, the PS $Q_n(x) = P_n(ax)$ is generated by
$$A(t)B(axt) = \sum_{n=0}^{\infty} \frac{Q_n(x)}{n!}\, t^n.$$
Thus, by using (2.6) and (2.7), we have $\theta(f)(x) = f(ax)$, where $f$ is any formal power series.
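As a brief aside, both formulas of this subsection and the previous one can be instantiated for Hermite polynomials (helper names below are ours). With $b_k = 2^k/k!$, and since $B(yt)A(t)B(xt) = e^{-t^2}e^{2(x+y)t}$ generates $H_n(x+y)$, the generalized addition formula becomes $H_n(x+y) = \sum_m \binom{n}{m}(2y)^{n-m}H_m(x)$; and since $A(t)/A(at) = e^{(a^2-1)t^2}$, the duplication formula (3.5) uses $\beta_{2j} = (a^2-1)^j/j!$, $\beta_{2j+1} = 0$. Both are easy to check numerically:

```python
from math import comb, factorial

def hermite_val(n, x):
    # evaluate H_n(x) via H_{k+1} = 2x H_k - 2k H_{k-1}
    h_prev, h = 1.0, 2.0 * x
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, 2 * x * h - 2 * k * h_prev
    return h

def beta_coeff(k, a):
    # A(t)/A(at) = exp((a^2 - 1) t^2)  =>  beta_{2j} = (a^2 - 1)^j / j!, beta odd = 0
    return (a * a - 1) ** (k // 2) / factorial(k // 2) if k % 2 == 0 else 0.0

x, y, a, n = 0.4, -1.3, 2.5, 6

# addition formula: H_n(x + y) = sum_m C(n, m) (2y)^(n-m) H_m(x)
add = sum(comb(n, m) * (2 * y) ** (n - m) * hermite_val(m, x) for m in range(n + 1))
assert abs(add - hermite_val(n, x + y)) < 1e-6

# duplication formula (3.5): H_n(ax) = sum_m n!/m! a^m beta_{n-m} H_m(x)
dup = sum(factorial(n) // factorial(m) * a ** m * beta_coeff(n - m, a) * hermite_val(m, x)
          for m in range(n + 1))
assert abs(dup - hermite_val(n, a * x)) < 1e-6
```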
Now, from (3.1), with $A_1(t) = A_2(t) = A(t)$, it follows immediately that
$$(at)^m\, \frac{A(t)}{A(at)} = \sum_{n=m}^{\infty} \frac{m!}{n!}\, C_m(n)\, t^n.$$

3.2. Linearization Problems

In the following result, we provide a generating function for the LC involving three Brenke polynomials.

Theorem 3.2. Let $\{P_n\}_{n\geq 0}$, $\{R_n\}_{n\geq 0}$ and $\{S_n\}_{n\geq 0}$ be three Brenke PS with exponential generating functions:
$$A_1(t)B_1(xt), \quad A_2(t)B_2(xt) \quad \text{and} \quad A_3(t)B_3(xt), \tag{3.6}$$
where
$$A_i(t) = \sum_{k=0}^{\infty} a^{(i)}_k t^k, \quad B_i(t) = \sum_{k=0}^{\infty} b^{(i)}_k t^k, \quad a^{(i)}_0 b^{(i)}_k \neq 0, \ \forall k \in \mathbb{N}, \ i = 1, 2, 3.$$
Then the LC $\{L_{ij}(k)\}_{i,j\geq 0}$, $k \in \mathbb{N}$, defined in (1.2) are generated by:
$$\frac{A_2(s)A_3(t)}{k!}\, \theta^{(2)}_s \theta^{(3)}_t \bigl(\theta^{(1)}_{s+t}\bigr)^{-1} \left( \frac{(s+t)^k}{A_1(s+t)} \right) = \sum_{i,j\geq 0} \frac{L_{ij}(k)}{i!\,j!}\, s^i t^j, \tag{3.7}$$
where $\theta^{(i)}(t^n) = n!\, b^{(i)}_n t^n$, $i = 1, 2, 3$. We note that $\theta^{(i)}$, $i = 1, 2, 3$, are the transfer operators from $\{P_n\}_{n\geq 0}$, $\{R_n\}_{n\geq 0}$ and $\{S_n\}_{n\geq 0}$ to the monomials, respectively.

Proof. On one hand, according to (1.2) and with sum manipulation, we obtain:
$$\sum_{i,j\geq 0} R_i(x) S_j(x)\, \frac{s^i}{i!} \frac{t^j}{j!} = \sum_{i,j\geq 0} \left( \sum_{k=0}^{i+j} L_{ij}(k) P_k(x) \right) \frac{s^i}{i!} \frac{t^j}{j!} = \sum_{k=0}^{\infty} \left( k! \sum_{i,j\geq 0} \frac{L_{ij}(k)}{i!\,j!}\, s^i t^j \right) \frac{P_k(x)}{k!}. \tag{3.8}$$
On the other hand, by using (2.6), we can easily verify that
$$\theta^{(2)}_s \theta^{(3)}_t \bigl(\theta^{(1)}_{s+t}\bigr)^{-1} B_1\bigl((s+t)x\bigr) = \sum_{k=0}^{\infty} \left( \sum_{l=0}^{k} b^{(2)}_l b^{(3)}_{k-l}\, s^l t^{k-l} \right) x^k,$$
then
$$B_2(xs)B_3(xt) = \theta^{(2)}_s \theta^{(3)}_t \bigl(\theta^{(1)}_{s+t}\bigr)^{-1} B_1\bigl((s+t)x\bigr).$$
Using the generating function of $\{P_n\}_{n\geq 0}$, we obtain
$$B_2(xs)B_3(xt) = \sum_{k=0}^{\infty} \left( \theta^{(2)}_s \theta^{(3)}_t \bigl(\theta^{(1)}_{s+t}\bigr)^{-1} \frac{(s+t)^k}{A_1(s+t)} \right) \frac{P_k(x)}{k!}.$$
Thus
$$\sum_{i,j\geq 0} R_i(x) S_j(x)\, \frac{s^i}{i!} \frac{t^j}{j!} = \sum_{k=0}^{\infty} \left( A_2(s)A_3(t)\, \theta^{(2)}_s \theta^{(3)}_t \bigl(\theta^{(1)}_{s+t}\bigr)^{-1} \frac{(s+t)^k}{A_1(s+t)} \right) \frac{P_k(x)}{k!}.$$
Equating the coefficients of $P_k(x)$ in the above equation and in (3.8), we obtain (3.7), which finishes the proof. □

Next, as applications, we recover the generating function for the LC of three Appell polynomials and the explicit expression of the LC associated to three Brenke PS.
3.2.1. Appell Polynomials. Let $\{P_n\}_{n\ge 0}$, $\{R_n\}_{n\ge 0}$, and $\{S_n\}_{n\ge 0}$ be three Appell PS. Then we have $B_1=B_2=B_3=\exp$, and by applying Theorem 3.2, we obtain that the LC in (1.2) are generated by
\[
\frac{A_2(s)A_3(t)}{A_1(s+t)}\frac{(s+t)^{k}}{k!}=\sum_{i,j=0}^{\infty}\frac{L_{ij}(k)}{i!\,j!}\,s^{i}t^{j}, \tag{3.9}
\]
which agrees with the Carlitz formula [10, Eq. (1.9)].
Moreover, for $P_n=R_n=S_n=H_n$, where $H_n$ are the Hermite polynomials generated by
\[
e^{-t^{2}}e^{2xt}=\sum_{n=0}^{\infty}H_n(x)\frac{t^{n}}{n!}, \tag{3.10}
\]
we have $A_1(t)=A_2(t)=A_3(t)=A(t)=e^{-t^{2}}$, and then
\[
\frac{A(s)A(t)}{A(s+t)}\frac{(s+t)^{k}}{k!}=\frac{1}{k!}\,e^{2st}(s+t)^{k}.
\]
Thus, using (3.9), we deduce the standard linearization formula for Hermite PSs
\[
H_i(x)H_j(x)=\sum_{k=0}^{\min(i,j)}\binom{i}{k}\binom{j}{k}2^{k}k!\,H_{i+j-2k}(x). \tag{3.11}
\]
This formula is known as the Feldheim formula [3].
3.2.2. Explicit Expression of the LC. For three Brenke PS satisfying the hypotheses of Theorem 3.2, the LC in (1.2) are given by:
\[
L_{ij}(k)=\frac{i!\,j!}{k!}\sum_{n=0}^{i}\sum_{m=0}^{j}\frac{b^{(2)}_{n}b^{(3)}_{m}}{b^{(1)}_{n+m}}\,a^{(2)}_{i-n}a^{(3)}_{j-m}\,\widetilde a^{(1)}_{n+m-k},\qquad k=0,1,\dots,i+j, \tag{3.12}
\]
where $1/A_1(t)=\sum_{n=0}^{\infty}\widetilde a^{(1)}_{n}t^{n}$, and $\widetilde a^{(1)}_{-n}=0$, $n=1,2,\dots$.
Indeed, we have
\[
\frac{(s+t)^{k}}{A_1(s+t)}=\sum_{n=k}^{\infty}\widetilde a^{(1)}_{n-k}(s+t)^{n},
\]
then by using (2.6), we get
\[
\theta^{(2)}_{s}\theta^{(3)}_{t}\bigl(\theta^{(1)}_{s+t}\bigr)^{-1}\left[\frac{(s+t)^{k}}{A_1(s+t)}\right]=\sum_{n=k}^{\infty}\widetilde a^{(1)}_{n-k}\sum_{m=0}^{n}\frac{b^{(2)}_{n-m}b^{(3)}_{m}}{b^{(1)}_{n}}\,t^{m}s^{n-m}.
\]
14 H. Chaggara, A. Gahami and N. Ben Romdhane
Thus, with sum manipulations and (3.7), one can easily verify that
\[
\sum_{i,j\ge 0}\frac{L_{ij}(k)}{i!\,j!}s^{i}t^{j}
=\frac{1}{k!}\sum_{n,m=0}^{\infty}\left(\sum_{i=n}^{\infty}a^{(2)}_{i-n}s^{i}\right)\left(\sum_{j=m}^{\infty}a^{(3)}_{j-m}t^{j}\right)\frac{b^{(2)}_{n}b^{(3)}_{m}}{b^{(1)}_{n+m}}\,\widetilde a^{(1)}_{n+m-k}
=\frac{1}{k!}\sum_{i,j\ge 0}\left[\sum_{n=0}^{i}\sum_{m=0}^{j}\frac{b^{(2)}_{n}b^{(3)}_{m}}{b^{(1)}_{n+m}}\,a^{(2)}_{i-n}a^{(3)}_{j-m}\,\widetilde a^{(1)}_{n+m-k}\right]s^{i}t^{j},
\]
which leads to (3.12). We note that this result was first obtained in [11, Corollary 3.3] by using a method based on the inversion formula.
4. Application to Generalized Gould-Hopper Polynomial Set
The $(d+1)$-fold symmetric generalized Gould-Hopper polynomials, $\{Q^{(d+1)}_{n}(\cdot,a,\mu)\}_{n\ge 0}$, are generated by [7]:
\[
e^{at^{d+1}}\exp_{\mu}(xt)=\sum_{n=0}^{\infty}\frac{Q^{(d+1)}_{n}(x,a,\mu)}{n!}\,t^{n},\qquad a\in\mathbb{C},\ \mu\neq-\tfrac12,-\tfrac32,-\tfrac52,\dots, \tag{4.1}
\]
where a PS $\{P_n\}_{n\ge 0}$ is said to be $(d+1)$-fold symmetric, $d=1,2,\dots$, if $P_n\bigl(e^{\frac{2i\pi}{d+1}}x\bigr)=e^{\frac{2in\pi}{d+1}}P_n(x)$. These polynomials constitute a unification of many known families such as:
Classical Hermite PS: $H_n(x)=Q^{(2)}_{n}(2x,-1,0)$.
Gould-Hopper PS: $g^{m}_{n}(x,h)=Q^{(m)}_{n}(x,h,0)$ (same notations as in [22]).
Generalized Hermite polynomials [30]: $H^{\mu}_{n}(x)=Q^{(2)}_{n}(2x,-1,\mu)$. (4.2)
The GGHPS are of Brenke type with transfer power series $A(t)=\exp(at^{d+1})$. They are the only $(d+1)$-fold symmetric Dunkl-Appell $d$-orthogonal PS [7]. Next, we solve the connection and linearization problems associated to GGHPS, and we treat the particular case of generalized Hermite polynomials.
4.1. Connection Problem. Here, we state the connection formulas for two GGHPS when one or two of the parameters are different, and we give an integral representation of these coefficients. Moreover, the inversion formula, addition and convolution relations, and the duplication formula are given.
Theorem 4.1. The connection coefficients $C_{n-i(d+1)}(n)$, $0\le i\le\left[\frac{n}{d+1}\right]$, between two GGHPS, $\{Q^{(d+1)}_{n}(\cdot,a,\mu_1)\}_{n\ge 0}$ and $\{Q^{(d+1)}_{n}(\cdot,b,\mu_2)\}_{n\ge 0}$, are given by
\[
C_{n-i(d+1)}(n)=\frac{n!}{(n-i(d+1))!}\sum_{k=0}^{i}\frac{\gamma_{\mu_1}(n-k(d+1))}{\gamma_{\mu_2}(n-k(d+1))}\,\frac{(-a)^{i-k}}{(i-k)!}\,\frac{b^{k}}{k!}. \tag{4.3}
\]
Proof. By means of (2.6), we have
\[
\theta\bigl(t^{m}e^{-at^{d+1}}\bigr)=\sum_{n=0}^{\infty}\frac{(-a)^{n}}{n!}\,\frac{\gamma_{\mu_1}(n(d+1)+m)}{\gamma_{\mu_2}(n(d+1)+m)}\,t^{n(d+1)+m}.
\]
Thus, by using (3.1), (4.1) and sum manipulation, we obtain
\[
\sum_{n=m}^{\infty}\frac{m!}{n!}C_m(n)\,t^{n}=e^{bt^{d+1}}\theta\bigl(t^{m}e^{-at^{d+1}}\bigr)=\sum_{i=0}^{\infty}\frac{1}{i!}\sum_{k=0}^{i}\binom{i}{k}\frac{\gamma_{\mu_1}(k(d+1)+m)}{\gamma_{\mu_2}(k(d+1)+m)}\,b^{i-k}(-a)^{k}\,t^{i(d+1)+m}.
\]
Therefore, for $n=i(d+1)+m$, the desired result holds. □
We note that for the particular case $\mu_1=\mu_2$, (4.3) is reduced to
\[
C_{n-i(d+1)}(n)=\frac{n!\,(b-a)^{i}}{i!\,(n-i(d+1))!},\qquad 0\le i\le\left[\frac{n}{d+1}\right].
\]
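As a quick sanity check of this reduced formula (an illustration added here, not part of the original text), take $d=1$ and $\mu_1=\mu_2=0$, so that $\gamma_0(m)=m!$ and the explicit Brenke-type expansion gives $Q^{(2)}_2(x,c,0)=x^2+2c$ for any parameter $c$; we read $C_m(n)$ as the coefficients of $Q^{(d+1)}_n(\cdot,b,\mu_2)$ expanded in the family $\{Q^{(d+1)}_m(\cdot,a,\mu_1)\}_m$. For $n=2$:

```latex
\[
C_2(2)=\frac{2!\,(b-a)^{0}}{0!\,2!}=1,
\qquad
C_0(2)=\frac{2!\,(b-a)^{1}}{1!\,0!}=2(b-a),
\]
\[
C_2(2)\,Q^{(2)}_2(x,a,0)+C_0(2)\,Q^{(2)}_0(x,a,0)
=(x^{2}+2a)+2(b-a)
=x^{2}+2b
=Q^{(2)}_2(x,b,0),
\]
```

as expected.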
For the connection coefficients obtained in Theorem 4.1, we have the following result.
Proposition 4.2. For $\mu_2>\mu_1>-\tfrac12$, the connection coefficient given by (4.3) has the following integral representation,
\[
C_{n-i(d+1)}(n)=\frac{n!\,\beta^{-1}\bigl(\mu_1+\tfrac12,\mu_2-\mu_1\bigr)}{i!\,(n-i(d+1))!}\times\int_{-1}^{1}t^{n-i(d+1)}|t|^{2\mu_1}\bigl(b-at^{d+1}\bigr)^{i}\,\frac{(1-t^{2})^{\mu_2-\mu_1}}{1-t}\,dt.
\]
Proof. Using Proposition 2.4 with $f(x)=x^{n-k(d+1)}$ and $x=1$, we obtain
\[
\frac{\gamma_{\mu_1}(n-k(d+1))}{\gamma_{\mu_2}(n-k(d+1))}=\frac{1}{\beta\bigl(\mu_1+\tfrac12,\mu_2-\mu_1\bigr)}\int_{-1}^{1}t^{n-k(d+1)}|t|^{2\mu_1}\,\frac{(1-t^{2})^{\mu_2-\mu_1}}{1-t}\,dt.
\]
Substituting the above equation in (4.3), we get:
\[
C_{n-i(d+1)}(n)=\frac{n!}{i!\,(n-i(d+1))!}\,\frac{1}{\beta\bigl(\mu_1+\tfrac12,\mu_2-\mu_1\bigr)}\times\int_{-1}^{1}t^{n}|t|^{2\mu_1}\,\frac{(1-t^{2})^{\mu_2-\mu_1}}{1-t}\left[\sum_{k=0}^{i}\binom{i}{k}(-a)^{i-k}\Bigl(\frac{b}{t^{d+1}}\Bigr)^{k}\right]dt,
\]
from which the desired result follows. □
Next, we give some specific expansion relations associated to GGHPS.
Explicit and inversion formulas: The following explicit expression and inversion formula of $\{Q^{(d+1)}_{n}(\cdot,a,\mu)\}_{n\ge 0}$ can be easily derived from (3.3):
\[
Q^{(d+1)}_{n}(x,a,\mu)=n!\sum_{k=0}^{[\frac{n}{d+1}]}\frac{a^{k}}{k!\,\gamma_{\mu}(n-(d+1)k)}\,x^{n-(d+1)k}, \tag{4.4}
\]
and
\[
\frac{x^{n}}{\gamma_{\mu}(n)}=\sum_{k=0}^{[\frac{n}{d+1}]}\frac{(-a)^{k}}{k!\,(n-(d+1)k)!}\,Q^{(d+1)}_{n-(d+1)k}(x,a,\mu). \tag{4.5}
\]
Addition and convolution relations:
\[
T^{\mu}_{y}Q^{(d+1)}_{n}(x,a,\mu)=\sum_{k=0}^{n}\frac{n!\,y^{n-k}}{k!\,\gamma_{\mu}(n-k)}\,Q^{(d+1)}_{k}(x,a,\mu), \tag{4.6}
\]
\[
2^{\frac{n}{d+1}}\,T^{\mu}_{y}Q^{(d+1)}_{n}\bigl(2^{-\frac{1}{d+1}}x,a,\mu\bigr)=\sum_{k=0}^{n}\binom{n}{k}Q^{(d+1)}_{k}(y,a,\mu)\,Q^{(d+1)}_{n-k}(x,a,\mu), \tag{4.7}
\]
where $T^{\mu}_{y}=\exp_{\mu}(yD_{\mu})$. For $\mu=0$, this equation is reduced to the well-known Gould-Hopper convolution type relation [22], and for $m=2$, $h=-1$, we recover the Runge formula for Hermite polynomials [29].
Duplication formula:
\[
Q^{(d+1)}_{n}(\alpha x,a,\mu)=n!\sum_{k=0}^{[\frac{n}{d+1}]}\frac{\alpha^{n-k(d+1)}(1-\alpha^{d+1})^{k}a^{k}}{(n-k(d+1))!\,k!}\,Q^{(d+1)}_{n-k(d+1)}(x,a,\mu),\qquad\alpha\neq 0.
\]
4.2. Linearization Formula. Taking into account the $(d+1)$-fold symmetry property of the GGHPS, any LC $L_{ij}(k)$ in (1.2) vanishes when $k\neq i+j-r(d+1)$.
Thus, according to (3.12), the corresponding LC is given by:
\[
L_{ij}(i+j-r(d+1))=\frac{i!\,j!}{(i+j-r(d+1))!}\sum_{n=0}^{[\frac{i}{d+1}]}\sum_{m=0}^{[\frac{j}{d+1}]}\frac{a_1^{n}a_2^{m}(-a_3)^{r-m-n}}{n!\,m!\,(r-m-n)!}\times\frac{\gamma_{\mu_3}(i+j-(m+n)(d+1))}{\gamma_{\mu_1}(i-n(d+1))\,\gamma_{\mu_2}(j-m(d+1))},\qquad 0\le r\le\left[\frac{i+j}{d+1}\right].
\]
We remark that there is no difficulty in proving the corresponding formula for the linearization of any arbitrary number of GGHPSs. We have:
\[
\prod_{s=1}^{N}Q^{(d+1)}_{i_s}(x,a_s,\mu_s)=\sum_{r=0}^{[\frac{i_1+\cdots+i_N}{d+1}]}\frac{i_1!\cdots i_N!}{(i_1+\cdots+i_N-r(d+1))!}\times\sum_{s_1=0}^{[\frac{i_1}{d+1}]}\cdots\sum_{s_N=0}^{[\frac{i_N}{d+1}]}\frac{a_1^{s_1}\cdots a_N^{s_N}(-a_{N+1})^{r-s_1-\cdots-s_N}}{s_1!\cdots s_N!\,(r-s_1-\cdots-s_N)!}\times\frac{\gamma_{\mu_{N+1}}(i_1+\cdots+i_N-(d+1)(s_1+\cdots+s_N))}{\gamma_{\mu_1}(i_1-(d+1)s_1)\cdots\gamma_{\mu_N}(i_N-(d+1)s_N)}\times Q^{(d+1)}_{i_1+\cdots+i_N-r(d+1)}(x,a_{N+1},\mu_{N+1}).
\]
4.3. Generalized Hermite Polynomials. The generalized Hermite polynomials, $\{H^{\mu}_{n}\}_{n\ge 0}$, were introduced by Szegő [30], then investigated by Chihara in his PhD thesis [15], and further studied by many other authors [11, 28]. They are generated by:
\[
e^{-t^{2}}\exp_{\mu}(2xt)=\sum_{n=0}^{\infty}\frac{H^{\mu}_{n}(x)}{n!}\,t^{n},\qquad\mu\neq-\tfrac12,-\tfrac32,-\tfrac52,\dots. \tag{4.8}
\]
Proposition 4.3. The following connection relation holds:
\[
\widetilde H^{\mu_2}_{n}(x)=\sum_{k=0}^{[n/2]}\frac{(-1)^{k}4^{k}}{k!}\,(\mu_2-\mu_1)_{k}\,\widetilde H^{\mu_1}_{n-2k}(x),\qquad\mu_2>\mu_1>-\tfrac12, \tag{4.9}
\]
where $\{\widetilde H^{\mu_i}_{n}\}_{n}$, $i=1,2$, are the normalized generalized Hermite PS given by $\widetilde H^{\mu_i}_{n}(x)=\dfrac{\gamma_{\mu_i}(n)}{n!\,\bigl[\frac{n}{2}\bigr]!}\,H^{\mu_i}_{n}(x)$.
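Before the proof, (4.9) can be checked directly in the lowest nontrivial case (a verification added here for illustration). From the generator (4.8), $H^{\mu}_2(x)=\frac{4x^2}{2\mu+1}-2$ with $\gamma_{\mu}(2)=2(2\mu+1)$, so the normalized polynomials are $\widetilde H^{\mu}_2(x)=4x^2-2(2\mu+1)$ and $\widetilde H^{\mu}_0(x)=1$. For $n=2$, the right-hand side of (4.9) then collapses to the left-hand side:

```latex
\[
\widetilde H^{\mu_1}_{2}(x)-4(\mu_2-\mu_1)\,\widetilde H^{\mu_1}_{0}(x)
=\bigl(4x^{2}-2(2\mu_1+1)\bigr)-4(\mu_2-\mu_1)
=4x^{2}-2(2\mu_2+1)
=\widetilde H^{\mu_2}_{2}(x).
\]
```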
Proof. From what has already been stated, the connection coefficients from $\{H^{\mu_2}_{n}\}_{n}$ to $\{H^{\mu_1}_{n}\}_{n}$ are generated by
\[
e^{-t^{2}}\theta\bigl(t^{m}e^{t^{2}}\bigr)=\sum_{n=m}^{\infty}\frac{m!}{n!}\,C_m(n)\,t^{n},
\]
where $\theta$ is the operator defined in (2.25). Making use of the $\theta$-integral representation (2.26), intercalating $0$ in the interval of integration, we get:
\[
\sum_{n=m}^{\infty}\frac{m!}{n!}C_m(n)t^{n}=\frac{t^{m}e^{-t^{2}}}{\beta\bigl(\mu_1+\tfrac12,\mu_2-\mu_1\bigr)}\times\int_{0}^{1}e^{t^{2}s^{2}}s^{m+2\mu_1}(1-s^{2})^{\mu_2-\mu_1}\left[\frac{1}{1-s}+\frac{(-1)^{m}}{1+s}\right]ds.
\]
18 H. Chaggara, A. Gahami and N. Ben Romdhane
It follows, for $m$ even and after substituting $u=s^{2}$, that
\[
\sum_{n=m}^{\infty}\frac{m!}{n!}C_m(n)t^{n}=\frac{t^{m}e^{-t^{2}}}{\beta\bigl(\mu_1+\tfrac12,\mu_2-\mu_1\bigr)}\int_{0}^{1}e^{ut^{2}}u^{\frac{m-1}{2}+\mu_1}(1-u)^{\mu_2-\mu_1-1}\,du=\sum_{n=0}^{\infty}\frac{(-1)^{n}}{n!}\,\frac{\beta\bigl(\mu_1+\frac{m+1}{2},\mu_2-\mu_1+n\bigr)}{\beta\bigl(\mu_1+\tfrac12,\mu_2-\mu_1\bigr)}\,t^{m+2n},
\]
where the term-by-term integration is justified by the same argument as in the proof of Proposition 2.3. On the other hand, we have
\[
\frac{\beta\bigl(\mu_1+\tfrac12+k,\mu_2-\mu_1+n\bigr)}{\beta\bigl(\mu_1+\tfrac12,\mu_2-\mu_1\bigr)}=\frac{\Gamma\bigl(\mu_1+\tfrac12+k\bigr)\Gamma(\mu_2-\mu_1+n)\Gamma\bigl(\mu_2+\tfrac12\bigr)}{\Gamma\bigl(\mu_2+n+k+\tfrac12\bigr)\Gamma\bigl(\mu_1+\tfrac12\bigr)\Gamma(\mu_2-\mu_1)}=\frac{\gamma_{\mu_1}(2k)}{2^{2k}k!}\,\frac{2^{2(k+n)}(k+n)!}{\gamma_{\mu_2}(2(k+n))}\,(\mu_2-\mu_1)_{n}=\frac{\gamma_{\mu_1}(m)}{\gamma_{\mu_2}(m+2n)}\,\frac{4^{n}\bigl([m/2]+n\bigr)!}{[m/2]!}\,(\mu_2-\mu_1)_{n}.
\]
Thus, by virtue of (2.17) and (2.27), we obtain
\[
\sum_{n=m}^{\infty}\frac{m!}{n!}C_m(n)t^{n}=\sum_{n=0}^{\infty}\frac{(-1)^{n}}{n!}\,\frac{\gamma_{\mu_1}(m)}{\gamma_{\mu_2}(m+2n)}\,\frac{4^{n}\bigl([\frac m2]+n\bigr)!}{[\frac m2]!}\,(\mu_2-\mu_1)_{n}\,t^{m+2n}.
\]
For $m$ odd, similar computations lead to
\[
\sum_{n=m}^{\infty}\frac{m!}{n!}C_m(n)t^{n}=\sum_{n=0}^{\infty}\frac{(-1)^{n}}{n!}\,\frac{\gamma_{\mu_1}(m)}{\gamma_{\mu_2}(m+2n)}\,\frac{4^{n}\bigl([\frac m2]+n\bigr)!}{[\frac m2]!}\,(\mu_2-\mu_1)_{n}\,t^{m+2n}.
\]
Therefore, for $m=0,1,2,3,\dots$, we have:
\[
\sum_{n=m}^{\infty}\frac{m!}{n!}C_m(n)t^{n}=\sum_{n=0}^{\infty}\frac{(-1)^{n}}{n!}\,\frac{\gamma_{\mu_1}(m)}{\gamma_{\mu_2}(m+2n)}\,\frac{4^{n}\bigl([\frac m2]+n\bigr)!}{[\frac m2]!}\,(\mu_2-\mu_1)_{n}\,t^{m+2n}.
\]
Thus, for $k=0,1,2,\dots$
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=', [n 2 ], we get Cn−2k(n) = (−1)k k!' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' n!' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' (n − 2k)!' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' 4k[ n 2 ]!' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' [ n 2 − k]!' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' γµ1(n − 2k) γµ2(n) (µ2 − µ1)k.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' □ We note that the connection coefficients in (4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content='9) alternate in sign and that this relation was already derived in [14], where the authors used a linear computer algebra approach based on the Zeilberger’s algorithm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' References [1] Abd-Elhameed, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=', Badah, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content='M.' 
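The first equality in the chain above reduces a quotient of beta functions to a ratio of Pochhammer symbols, since $\beta(a+k,\,b+n)/\beta(a,b) = (a)_k (b)_n / (a+b)_{k+n}$ with $a = \mu_1+\tfrac12$ and $b = \mu_2-\mu_1$. A minimal numerical sanity check of this step (our illustration, not part of the paper; the helper names `beta` and `poch` are ours) can be sketched as:

```python
import math

def beta(a, b):
    # Euler beta function via the gamma function
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

def poch(a, k):
    # Pochhammer symbol (a)_k = a (a+1) ... (a+k-1), with (a)_0 = 1
    out = 1.0
    for i in range(k):
        out *= a + i
    return out

# Verify: beta(mu1+1/2+k, mu2-mu1+n) / beta(mu1+1/2, mu2-mu1)
#         == (mu1+1/2)_k (mu2-mu1)_n / (mu2+1/2)_{k+n}
mu1, mu2 = 0.7, 1.9  # arbitrary test values with mu2 > mu1
for k in range(5):
    for n in range(5):
        lhs = beta(mu1 + 0.5 + k, mu2 - mu1 + n) / beta(mu1 + 0.5, mu2 - mu1)
        rhs = poch(mu1 + 0.5, k) * poch(mu2 - mu1, n) / poch(mu2 + 0.5, k + n)
        assert math.isclose(lhs, rhs, rel_tol=1e-12)
```

The check passes for all small $k, n$, confirming the term-by-term rewriting used before the $\gamma_\mu$ identities (2.17) and (2.27) are applied.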
References

[1] Abd-Elhameed, W., Badah, B.M.: New approaches to the general linearization problem of Jacobi polynomials based on moments and connection formulas. Mathematics 9, 1–28 (2021)

Expansion Formulas for Brenke Polynomials 19

[2] Area, I., Godoy, E., Rodal, J., Ronveaux, A., Zarzo, A.: Bivariate Krawtchouk polynomials: Inversion and connection problems with the NAVIMA algorithm. J. Comput. Appl. Math. 284, 50–57 (2015)
[3] Askey, R.: Orthogonal Polynomials and Special Functions. CBMS-NSF Regional Conference Series in Appl. Math., vol. 21. SIAM, Philadelphia, Pennsylvania (1975)
[4] Askey, R., Gasper, G.: Jacobi polynomial expansions of Jacobi polynomials with non-negative coefficients. Proc. Camb. Phil. Soc. 70, 243–255 (1971)
[5] Ben Cheikh, Y.: Some results on quasi-monomiality. Appl. Math. Comput. 141, 63–76 (2003)
[6] Ben Cheikh, Y., Chaggara, H.: Connection coefficients between Boas–Buck polynomial sets. J. Math. Anal. Appl. 319, 665–689 (2005)
[7] Ben Cheikh, Y., Gaied, M.: Dunkl–Appell d-orthogonal polynomials. Integral Transforms Spec. Funct. 18, 581–597 (2007)
[8] Ben Romdhane, N.: A general theorem on inversion problems for polynomial sets. Med. J. Math. 13, 2783–2793 (2016)
[9] Brenke, W.: On generating functions of polynomial systems. Amer. Math. Monthly 52, 297–301 (1945)
[10] Carlitz, L.: Products of Appell polynomials. Collect. Math. 112, 133–138 (1963)
[11] Chaggara, H.: Operational rules and generalized Hermite polynomials. J. Math. Anal. Appl. 332, 11–21 (2007)
[12] Chaggara, H.: Quasi-monomiality and linearization coefficients for Sheffer polynomial sets. Difference Equations, Special Functions, and Orthogonal Polynomials, pp. 90–99 (2007)
[13] Chaggara, H., Mabrouk, M.: Linearization coefficients for some basic hypergeometric polynomials. J. Mathematics, Volume 2022, 12 pages
[14] Chaggara, H., Koepf, W.: On linearization and connection coefficients for generalized Hermite polynomials. J. Math. Anal. Appl. 236, 65–73 (2011)
[15] Chihara, T.: Generalized Hermite polynomials. Ph.D. thesis, Purdue (1955)
[16] Chihara, T.: Orthogonal polynomials with Brenke type generating functions. Duke Math. J. 35, 505–517 (1968)
[17] Chihara, T.: An Introduction to Orthogonal Polynomials. Gordon and Breach, New York, London, Paris (1978)
[18] Dehesa, J., Martinez-Finkelshtein, A., Sánchez-Ruiz, J.: Quantum information entropies and orthogonal polynomials. J. Comput. Appl. Math. 133, 23–46 (2001)
[19] Di Bucchianico, A., Loeb, D.E.: Operator expansion in the derivative and multiplication by x. Integral Transforms Spec. Funct. 4, 49–68 (1996)
[20] Dunkl, C.: Integral kernels with reflection group invariance. Canad. J. Math. 43, 1213–1227 (1991)
[21] Gasper, G.: Linearization of the product of Jacobi polynomials. Canad. J. Math. 22, 171–175 (1970)
[22] Gould, H., Hopper, A.T.: Operational formulas connected with two generalizations of Hermite polynomials. Duke Math. J. 29, 51–63 (1962)
[23] Koornwinder, T.: Compact quantum groups and q-special functions. 311, 46–128 (1994)
[24] Maroni, P., Da Rocha, Z.: Connection coefficients for orthogonal polynomials: symbolic computations, verification, and demonstrations in the Mathematica language. Numer. Algor. 63, 507–520 (2013)
[25] Asai, N., Kubo, I., Kuo, H.H.: The Brenke type generating functions and explicit forms of MRM-triples by means of q-hypergeometric series. Inf. Dimens. Anal. Quantum Probab. Related Topics 16, 27 pages (2013)
[26] Opdam, E.M.: Dunkl operators, Bessel functions and the discriminant of a finite Coxeter group. Compos. Math. 85, 333–373 (1993)
[27] Rainville, E.: Special Functions. The Macmillan Company, New York (1960)
[28] Rosenblum, M.: Generalized Hermite polynomials and the Bose-like oscillator calculus. Oper. Theory Adv. Appl. 73, 369–396 (1994)
[29] Runge, C.: Über eine besondere Art von Integralgleichungen. Math. Ann. 75, 130–132 (1914)
[30] Szegő, G.: Orthogonal Polynomials, 4th edn. Amer. Math. Soc. Colloq., vol. 23. Amer. Math. Soc., New York (1975)
[31] Szwarc, R.: Convolution structures associated with orthogonal polynomials. J. Math. Anal. Appl. 170, 158–170 (1992)
[32] Tcheutia, D., Foupouagnigni, M., Koepf, W., Sadjang, N.N.: Coefficients of multiplication formulas for classical orthogonal polynomials. Ramanujan J., pp. 1–35 (2015)
[33] Varma, S., Sezgin, S., İçöz, G.: Generalization of Szász operators involving Brenke type polynomials. Comput. Math. Appl. 64, 121–127 (2012)
[34] Wani, S., Mursaleen, M., Nisar, K.S.: Certain approximation properties of Brenke polynomials using Jakimovski–Leviatan operators. J. Inequal. Appl. 64, 1–16 (2021)

Hamza Chaggara
Mathematics Department, College of Science, King Khalid University, Abha, Kingdom of Saudi Arabia / Département de Mathématiques, École Supérieure des Sciences et de la Technologie, Sousse University, Tunisia.
e-mail: hshaggara@kku.edu.sa / hamza.chaggara@ipeim.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content='rnu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content='tn Abdelhamid Gahami D´epartement de Math´ematiques, Institut Pr´eparatoire aux ´Etudes d’Ing´enieur, Sfax University, Tunisia.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' e-mail: aelgahami@yahoo.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content='fr Neila Ben Romdhane D´epartement de Math´ematiques, ´Ecole Sup´erieure des Sciences et de la Technologie, Sousse University, Tunisia.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content=' e-mail: neila.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content='benromdhane@ipeim.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content='rnu.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} +page_content='tn' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/D9FRT4oBgHgl3EQfxziA/content/2301.13643v1.pdf'} diff --git a/EtE1T4oBgHgl3EQfqgU3/vector_store/index.faiss b/EtE1T4oBgHgl3EQfqgU3/vector_store/index.faiss new file mode 100644 index 0000000000000000000000000000000000000000..a9aebfca59cf7572b5e77529c0ab22dfeef14abe --- /dev/null +++ b/EtE1T4oBgHgl3EQfqgU3/vector_store/index.faiss @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3487c95f8117da8aef44a1774522e04d1b39f332aff374079e81313a7c5479fe +size 3932205 diff --git a/GtAzT4oBgHgl3EQfHftK/vector_store/index.faiss b/GtAzT4oBgHgl3EQfHftK/vector_store/index.faiss new file mode 100644 index 0000000000000000000000000000000000000000..dbf6cb3d4fbbd7e3632f4214e12ae12d1aadde1e --- /dev/null +++ b/GtAzT4oBgHgl3EQfHftK/vector_store/index.faiss @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bb3fe2c7b1bf20c0411ddc34bb446d8c68162155d8f864e9fca47c422f547b3b +size 5242925 diff --git a/HNFAT4oBgHgl3EQfth7Z/vector_store/index.faiss b/HNFAT4oBgHgl3EQfth7Z/vector_store/index.faiss new file mode 100644 index 0000000000000000000000000000000000000000..47542b052830a3a0f336f54a789e284d22823208 --- /dev/null +++ b/HNFAT4oBgHgl3EQfth7Z/vector_store/index.faiss @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f996983dda92caffd0da403f59952dd1ed04ec4c28422f17acdc38d1c2c978c7 +size 2228269 diff --git a/INAzT4oBgHgl3EQfjf2_/content/tmp_files/2301.01518v1.pdf.txt b/INAzT4oBgHgl3EQfjf2_/content/tmp_files/2301.01518v1.pdf.txt new file mode 100644 index 0000000000000000000000000000000000000000..8eb7d659d58bc84393e409112a44fa31bd6fc32e --- /dev/null +++ b/INAzT4oBgHgl3EQfjf2_/content/tmp_files/2301.01518v1.pdf.txt @@ -0,0 +1,1071 @@ +Organised Firestorm as strategy for business +cyber-attacks +Andrea Russo +Department of Physics 
and Astronomy, University of Catania, Italy
Email: andrea.russo@phd.unict.it

Abstract—Having a good reputation is paramount for most organisations and companies. In fact, an optimal corporate image allows them to have better transaction relationships with various customers and partners. However, such a reputation is hard to build and easy to destroy for all kinds of business commercial activities (B2C, B2B, B2B2C, B2G). A misunderstanding during the communication process to the customers, or just a bad communication strategy, can lead to a disaster for the entire company. This is emphasised by the reaction of millions of people on social networks, which can be very detrimental for the corporate image if they react negatively to a certain event. This is called a firestorm.
In this paper, I propose a well-organised strategy for firestorm attacks on organisations, also showing how an adversary can leverage them to obtain private information on the attacked firm. Standard business security procedures are not designed to operate against multi-domain attacks; therefore, I will show how it is possible to bypass the classic and recommended security procedures by combining different kinds of attack. I also propose a different firestorm attack, targeting a specific business company network in an efficient way. Finally, I present defensive procedures to reduce the negative effect of firestorms on a company.

Index Terms—Firestorm, Cyber-attack, Business Defence, Socio-dynamics, Stress Test, Network Science, Cyberpunk 2077.

I. INTRODUCTION
Before the advent of social media, brand crises were largely caused by journalists' contributions. Nowadays, a firestorm is a cluster of consumers' digital word of mouth that highlights some communication error, or some terrible mistake, made by a company [15].
The Cambridge dictionary1 defines the firestorm as "a sudden, and sometimes violent reaction" and the shitstorm as "a wildly chaotic and unmanageable situation, controversy, or sequence of events". In this paper, I will use both these terms interchangeably.
In recent years, many firestorms took place on the Internet [19], [27], [31], mainly due to the increasing number of users on social networks. In some cases, firestorms have been formally studied to better understand this phenomenon [15], [28], [31]. In 2007, several researchers debated over firestorms, and one of the main outcomes is that "a natural science model of the research process is suitable for studying the social world but a central issue remaining of whether the social world can, and should be, studied according to the same principles, procedures, and philosophy as the natural sciences" [1]. This is relevant because today we are actually able to study and evaluate social dynamics by using the massive amount of data coming from the digital world, with particular emphasis on social networks [32].
Firestorms are not made of a single event with a standard behaviour; instead, they are caused by non-linear dynamics leading to complex behaviours. Due to this, companies must have appropriate procedures to respond to various crisis situations. Lehtonen's theory [23] shows that a firestorm develops in five stages: (1) latent stage, where weak signals of the upcoming crisis are received; (2) triggering event, where the subject becomes the target of news and social media attention; (3) the subject is in the top news and the media attention spikes; (4) the media attention calms down to the level of general philosophical and ethical discussion; and (5) there are only minor media hits and attention is guided to other issues [28].

1https://dictionary.cambridge.org
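Lehtonen's five stages can be sketched as a toy classifier over a daily mention-count series. The thresholds and counts below are invented for illustration; they are not taken from the cited studies:

```python
def classify_stage(mentions, day, low=10, high=1000):
    """Toy mapping from a daily mention-count series to Lehtonen's
    five firestorm stages (thresholds are arbitrary assumptions)."""
    today = mentions[day]
    peak_so_far = max(mentions[:day + 1])
    if peak_so_far < low:
        return 1  # (1) latent stage: only weak signals so far
    if today >= high:
        return 3  # (3) top news: media attention spikes
    if today > low and today >= peak_so_far:
        return 2  # (2) triggering event: attention is building up
    if today > low:
        return 4  # (4) attention calms down to general discussion
    return 5      # (5) only minor media hits remain

daily_mentions = [2, 5, 60, 1500, 300, 8]
stages = [classify_stage(daily_mentions, d) for d in range(6)]
# stages == [1, 1, 2, 3, 4, 5]
```

In practice, stage boundaries are fuzzy; a real monitoring system would smooth the series and use relative rather than absolute thresholds.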
As firestorms begin when there is a service failure, a social failure, or when a company fails to communicate properly [15], these kinds of errors can be reduced by following appropriate procedures. However, most of the existing quality and security procedures, such as the ones suggested by ISO 9001:2015 [17] and ISO/IEC 27002:2022 [18], are not adequate for a multi-domain cyber and social attack. Social attacks are outside the scope of ISO/IEC 27002:2022, while ISO 9001:2015, even though it focuses on better business process quality (and thus on a lower firestorm risk from the public), does not mitigate a firestorm launched by an attacker.
Hence, in this paper I theorise that it is possible for an attacker to intentionally cause a firestorm attack to undermine the reputation of a company, with the side effect of benefiting the competitors. I argue that self-organised firestorm attacks require a high number of bots that are already active on social media: in this case, bots start the firestorm on the target company, spreading fake news (or magnifying a certain event, e.g., a mistake made by the company in the past) that will cause a high volume of real people to react negatively and continue the social attack, unknowingly on behalf of the adversary.
Additionally, I argue that Open Source Intelligence (OSINT) could allow an adversary to identify weak spots in the organization, namely people who most likely cannot react properly or defend themselves from the firestorm, hence not being able to timely mitigate its impact. Many workers have a LinkedIn, Facebook, or Twitter account: moving the firestorm onto the social media accounts of people who work for the target company can lead to an extremely stressful situation for workers. This could be even worse for people who do not often deal with public relations, and could cause confusion, panic and distress.

arXiv:2301.01518v1 [cs.CY] 4 Jan 2023
In fact, when a firestorm arises, even people who work on communication processes, and managers, can panic, and the fear of losing customers and partners can be very detrimental for any company.
When people working in the target firm are in this altered state, I argue it is possible to devise a social engineering strategy to capture protected information: in this case, firestorms not only serve the purpose of undermining the corporate image, but they are also used as a diversion for a social engineering attack. In fact, while most important organisations adhere to the best practices listed in security standards like ISO/IEC 27002:2022 [18], during a social attack like a firestorm some best practices and procedures may be distorted or bypassed, either intentionally or by mistake, due to the pressure applied to the people who are in charge of complying with such procedures [14].
Contributions. The paper makes these contributions:
1) I explain how to make an automated and organised firestorm attack, with only a few manual operations such as the choice of a topic and of a hashtag;
2) I introduce a taxonomy of possible actions that the attacker could perform while running the firestorm;
3) I illustrate how the author of a firestorm can evade detection for their attack by targeting single workers instead of the company profiles, while increasing the damage done to the firm;
4) I show possible long- and short-term procedures that a company can implement to mitigate the effect of firestorm attacks.

II. CYBER-ATTACK PLANNING PRELUDE
In this section, I illustrate a novel strategy to artificially cause a firestorm, leveraging a botnet to start agitating real people against a target company. Due to the large number of posts that bots can create within seconds, they can be used to amplify any idea on social networks, influencing political affairs [3] and business company value [33].
For example, after a cyber-attack on a newspaper's Twitter profile, that newspaper shared fake news about President Obama being injured by a bomb in the White House, causing a flash crash on Wall Street and stopping all economic transactions for some minutes. This led to a loss of about 121 billion dollars for the S&P 500 and its related companies [11].
I structure the attack plan in six stages:
1) Finding an event/topic to build the firestorm attack on. This can be an event or an error that the firm has committed in the past, which will be used as a basis for the upcoming attack. I define this event as the target topic.
2) Using bots to create or amplify the latent state. By leveraging a botnet, an adversary can create a high number of posts on social media, allowing the target topic to reach more people and giving them the opportunity to react negatively. This can eventually lead to a state where real people start to autonomously talk about the subject and begin to spread information about the target topic on their own. To facilitate this, the attacker can reuse an old trending hashtag or create a new one: the hashtag is the keyword that incites social action, due to the information symbolised by the word itself.
3) Letting the topic spread among people. The ideal situation for the attacker is that real people begin posting about the target topic, after learning about it from the botnet's posts. This will bring more attention to the topic, possibly making it a trending one. For example, Twitter allows users to check what topics and hashtags are currently popular. If this happens, there will be a moment in which there are enough people posting about the target topic, so that the firestorm can sustain itself for days, without any other post coming from the attacker's botnet.
I call this moment the fire point.2 Instead, if real people do not react negatively to the topic, or the topic does not reach enough people to allow the firestorm to reach the fire point, the discussion on the topic will slow down and eventually end. In this case, I say that the firestorm is extinguished. However, the attacker can change the target topic and restart from Stage 1.
4) Identifying human targets. Managers (e.g., Chief Technical Officers, Chief Executive Officers) are the decision makers of a company. The attacker might want to keep a list of these people in order to use their names when the attack moves over from the company's social network profiles to the employees' ones. Identifying the people who are most proud to work for the attacked company can also be helpful in exerting more pressure on the company (since they are more invested in the value of the company).
5) Focusing on workers. During the peak activity of the firestorm, those same bots that built the latent state will move their focus to the public social media profiles owned by employees of the attacked firm. These profiles were identified in the previous step of the attack. This may cause the attention of the firestorm to shift towards the employees, also causing them to experience discomfort. Because the brand is usually at the center of the firestorm, focusing on individual people will have a stronger impact on them, and it can disrupt internal processes.
6) Performing the cyber attack. Because people will pay less attention to following internal procedures, many safety best practices adopted by the company may not be followed properly, or may even be ignored. The attacker can exploit this behaviour to their own advantage.
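The dynamics of Stages 2 and 3 and the notion of fire point can be illustrated with a minimal branching-process model. Every number below (bot volume, conversion rate, reproduction factor, threshold) is a made-up assumption for the sketch, not an empirical estimate:

```python
def simulate_firestorm(bot_posts_per_day, bot_days, r, days, threshold=1000):
    """Toy firestorm model. Each day, organic posts grow by a
    reproduction factor r (posts triggered per existing organic post)
    plus seeding from bot posts; 5% of bot-post readers are assumed
    to react organically. Returns (history, fire_point_day), where the
    fire point is the first day on which organic posting alone exceeds
    `threshold` after the botnet has gone silent."""
    organic = 0.0
    history = []
    fire_point = None
    for day in range(days):
        seeding = bot_posts_per_day if day < bot_days else 0
        organic = organic * r + seeding * 0.05
        history.append(organic)
        if fire_point is None and day >= bot_days and organic > threshold:
            fire_point = day
    return history, fire_point

hist, fp = simulate_firestorm(bot_posts_per_day=20000, bot_days=5, r=1.3, days=30)
# fp == 5: with r > 1 the organic volume keeps growing right after the bots stop
```

With r below 1 the same model produces an extinguished firestorm: once the bots stop, organic posting decays and the fire point is never reached.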
In order to shift the focus from the company to the workers, it is necessary to optimise the timescale and timing of the transition: it is not natural for people to attack a single worker, but it can happen more easily if the negative event has a high negative impact and value. Shifting the attack onto employees has another side-effect, which is beneficial to the attacker: the organisations that are responsible for public cyber security in every country cannot see the firestorm attack on the company page, because the firestorm is focused on workers only. Such organisations will hardly be able to detect all the comments and posts focused on workers, allowing the attacker to create a smoky form of the attack, which can bypass conventional security measures, procedures and strategies. Since such organisations have to focus primarily on the company under attack, they may not give much attention to analysing every single interaction against all the operators of the attacked company.

2In chemistry, the fire point is the lowest temperature at which a certain fuel will continue to burn for a minimum of five seconds, when ignited.

III. BUSINESS SOCIAL MOOD-DISEASE AND NETWORK STRATEGY
The Cambridge Analytica case highlighted the role and the importance of social media for the majority of the population and organisations. A document produced by the American Ministry of Justice, examining the possible foreign influence on the US, showed that there actually exist organisations (such as the IRA - Internet Research Agency) [36] that aim to influence individuals and public and private organisations [29].
A great part of what is needed to successfully influence people lies in understanding the initial conditions of the system, i.e. in the correct profiling of such people through data obtained on social networks.
People who are more sensitive to certain issues, and those key people who can most influence the community where they live and work, are the main targets of a social attack, because they have a central role (hubs) in the network.
Profiling consists in obtaining (through a process of data collection and subsequent processing) an absolute, or almost absolute, understanding of a group of individuals or a single person, comprehending their habits and preferences [13]. The information obtained concerns political, musical and social interests, including the identification of their network of friends, colleagues, and much more. This information allows a much easier conveying of any content, as it is possible to understand who is most susceptible to and interested in various topics, exploiting their weaknesses, fears and interests. Furthermore, it is possible to infer who could propagate a certain content through their network, exponentially increasing the chance of success if the subject in question is a person with an important or leading role.
Cambridge Analytica used the OCEAN model, related to personality traits, to understand the preferences of many people in the US during the national election in 2016 [36]. The OCEAN model allows sending specific messages and contents to people who are sensitive to a certain topic. This method is very different from classic and standard mass communication, because it is possible to send the right content to the right person. Unfortunately, the CA scandal was described as classic political influence, the old-fashioned way, thus including prostitution, favouritism, etc. In reality, the scandal revealed "a new type of weapon", as Brittany Kaiser (former CA business development director) said during her question time (before the Commons culture committee in 2018) to describe the work done by CA, but also to categorise AI as a real soft-power weapon [13].
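The hubs mentioned above can be located with standard centrality measures. Below is a minimal degree-centrality sketch on an invented company graph (the role names and edges are hypothetical); note that the same computation lets a defender see which employees are the most exposed:

```python
from collections import defaultdict

def degree_centrality(edges):
    """Fraction of the other nodes each node is directly connected to."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    n = len(adj)
    return {node: len(neigh) / (n - 1) for node, neigh in adj.items()}

# Hypothetical internal social graph of a small firm
edges = [("cto", "dev1"), ("cto", "dev2"), ("cto", "pr_lead"),
         ("pr_lead", "intern"), ("dev1", "dev2")]
centrality = degree_centrality(edges)
hub = max(centrality, key=centrality.get)  # "cto": the most connected node
```

Real analyses would use richer measures (betweenness, eigenvector centrality) on much larger graphs, e.g. via a library such as NetworkX, but the principle is the same: the node with the highest centrality is the hub whose mood most contaminates the rest of the team.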
However, understanding hot topics for workers is not enough: in order to modify their mood and obtain a good social attack, a subject topic needs to be found as well. On social networks, during firestorms, people are usually triggered by three kinds of errors [15]:
1) Social failure
2) Communication failure
3) Product or service failure
Although they may seem similar, different types of events can lead to different types of dynamics and reactions. In the case of product or service failures, for example, performance-related crises raise doubts about the brand's ability to deliver basic functional performance [9]. Another study has not only identified short-term effects on a brand after a firestorm, but also measured long-term ones, at least two years after the latest firestorm [15].
I hereby give an example for each of the aforementioned triggering factors.
1) Social failure. The firm might be an accomplice in some accident or crime, like Nike with children's shoes [10], [30] or the ING-DiBa case in 2012 [31].
2) Communication failure. The firm might fail to communicate properly, for example making negative comments regarding a certain community or movement [27].
3) Product or service failure. The firm might distribute a product that harms consumers, for example a vaccine that can kill people [19].
These failures, and the firestorm stemming from them, might cause affected employees to experience discomfort and panic, because coworkers, friends and other people in their network might see the affected employees as the root cause of the firestorm.
The social-cyber attack also provokes unlikely, passive consequences for companies:
1) The value of the company on the financial market could rapidly decrease [11].
2) People who worked in the company during the firestorm might be subject to discrimination in the future, especially if the firestorm was caused by a (supposedly) unacceptable mistake that could have been avoided [26], [38].
3) Like its people, the offended brand could carry a long-term stigma that would motivate other companies to make job offers to the personnel of the attacked firm. This could put it at an even greater disadvantage, as workers would be incentivised to leave the attacked company and accept the new offer.
The network, as well as the importance and scope of the news, can strongly influence the reaction and dynamics of the company. For example, when a company's
Hence, even people who are in charge of running +communication processes and managers can panic, as the more +is the duration of the firestorm, the higher is the chance of +losing clients and reputation. This is a terrible situation for +any company, especially after many years of work. However, +managers are considered "critical workers" on the organisation +chart, hence, they cannot be influenced by social manipulations +and social diseases, because of the responsibilities they have in +the company. While during the last century such organization +charts had the form of a pyramid, usually with the CEO on the +top, nowadays the AGILE model allows companies to organise +their personnel in different ways within their organization +charts. However, the legal and personal responsibility for every +error or critical issue will be always be of the top manager +of that area – for example, the CISO (Chief Information +Security Officer) is usually responsible for the cyber security. +A network side strategy can hard-influence workers close to +managers and directors, contaminating directly the mood of +the team, including the manager. In a more specific way, the +attacker the hub from the company network, defusing also +other workers from the company.Once the social-disease is +already widespread on the company, and many people are +stressed about the firestorm, the cyber attack can begin. +IV. ASSESSING THE ATTACK SURFACE +In this section, I introduce the possible actions that the +adversary (or the real people that contribute to firestorm) +can perform to further disrupt the target company’s business +processes, to sink its corporate image, or to get classified +information. To do so, I introduce a novel classification of +these actions and analyze their impact on the fundamental +properties of information security, that is, Confidentiality, +Integrity and Availability [34]. 
+I show these actions can be divided in three categories: +1) Controlling Large Scale Entities, that is, thousands +or even millions of different actors performing several +concurrent actions against a firm. These actors can act +both remotely and physically, and can be both robots +and humans. +2) Leveraging Internal People, namely, exploiting mis- +takes performed by employees (e.g., because they are +stressed due to the firestorm), or having an insider threat +who can extract classified information. +3) Asking for Ransoms, that is, the adversary may want +to ask for a payment to stop the firestorm. This would +cause the bots to be shutdown, or even to defend the +company on social medias. +I hereby analyse the different actions within each category +and their impact. This analysis is summarised in Table I. +A. Controlling Large Scale Entities +a) Denial of Service (DoS) Attacks: The adversary might +want to harm the firm’s reputation by negating the availability +of the services it offers. To this avail, the attacker can leverage +botnets to send a very high number of requests per second to +the target service, overwhelming the server and resulting in the +service going down. If possible, the attacker could even reuse +the botnet used to create the latent state, and rearm it with a +DoS script. Alternatively, if the adversary is not a single entity +but a large group of organised people, a DoS attack can be +performed with simple scripts, without leveraging any botnet, +as the large number of adversaries could be able to generate the +traffic required to overload the server. In this case, however, the +adversaries would have to carefully time their attack, and they +might want to hide their location, for example by using a VPN. +Finally, the adversary could encourage real people to overload +the target firm’s servers, as they could co-ordinate the attack +by using the bot profiles used for the hashtag propaganda. 
+b) Physical Actions: Business processes can be also +interrupted or slowed by legal, yet harmful, physical actions. +One example is a demonstration around the firm’s premises: +employees might not get to their workplace in time because +people manifesting outside the building are blocking or slow- +ing access to the premises, or they are creating more traffic +than usual on the way to the building. Another example is +people calling the organisation’s call centers with the only +goal of protesting. +B. Leveraging Internal People +a) Human Error: Even though it is widely known that +human error is one of the most prominent causes of security +incidents [16], [43], most companies still do not adequately +invest in training for their personnel, resulting in data breaches +or other security related events [22]. This means that, if the +attacker wants to obtain an initial foothold on the target +organization’s systems, they might be able to do so without +needing a firestorm attack, depending on the employees’ abil- +ity of recognizing phishing emails or scam websites. However, +workers who are experiencing firestorm, be it on the company +they are working with or on their own profile, will be more +inclined to break internal policies, hence committing mistakes, +due to the perceived crisis [2]. +b) Offering Help: During the firestorm’s peak activity, +the adversary itself contacts the attacked firm, pretending to be +a professional (e.g, a consultant) who can help in mitigating +the effects of the firestorm, for example as a Social Media +Manager who has dealt with Firestorms before. This can + +happen via emails, social networks or through the corporate’s +website, for example if the firm has some job openings and the +adversary pretends to be a candidate. For smaller enterprises, +the adversary may even show up in person to the attacked +company’s premises. If the attacker manages to get hired, they +might get access to classified information. 
I argue the attacker +does not want to tamper with documents or attack the firm’s +infrastructure while being an employee themselves. +c) Insider Threats: Instead of joining the firm them- +selves, the adversary might establish a contact with employees +who are still in the attacked company but are not showing +support on social media, or even manifested dissatisfaction +towards the company. The attacker might want to try to +persuade them in sharing confidential information, making +them insider threats [25] – if they have success, not only they +acquire classified information, but if the stolen content is also +compromising for the firm, it could be published online to +damage the firm’s reputation even more. +C. Asking for Ransoms +a) Extortion to Stop the Attack: The adversary contacts +the attacked firm and proves the botnet that is performing the +firestorm is in their control. They then ask for an arbitrary +amount of money in Bitcoins to shutdown the bots, stopping +a (hopefully) substantial part of the attack. In fact, if the +firestorm already managed to incite many people in joining the +social attack, the shutdown of the botnet might not stop or slow +down the firestorm. If the adversary plans to attack multiple +firms with their firestorms, they to avoid situations like this, +because the odds of a victim paying a ransom is proportional +to the reliability of the attacker in stopping the attack once +they receive the money. In other words, the attacker must be +considered “trusted” in stopping the attack if the ransom is +paid, so victims are more incentivized to pay [4]. 
b) Defence as a Service: The adversary contacts the attacked firm, but instead of showing they are in charge of running the attack and asking for money to stop it, they try to sell a fire(storm)fighter service to the victim, supposedly consisting of bots defending the reputation of the firm. This is basically a reversed firestorm, in which the same bots that built the latent state now defend the company: to avoid drawing excessive attention, the attacker might slowly change the proportion of attacking bots versus defending ones, until they are all defending the company.

V. CASE STUDY: CD PROJEKT RED

On December 10, 2020, CD PROJEKT RED released a long-awaited game called Cyberpunk 2077. This game was very popular even before its release and generated continuous social hype from the video game community throughout its development, also winning the "Best Game Awaited" award from the Golden Joystick Awards for two consecutive years [42]. As shown in Figure 1 and Figure 2, hype for the game substantially increased during the 10 days before its release, reaching its apex on December 10, when the hashtag #Cyberpunk2077 was tweeted 193,900 times on Twitter,

TABLE I
SOCIAL ATTACK SURFACE ASSESSMENT

Category         | Action          | Impacts: Confid. | Integ. | Avail. | Rep.
Large Scale      | DoS Attack      | No               | No     | Yes    | Yes
                 | Phys. Actions   | No               | No     | Yes    | Yes
Internal People  | Human Error     | Yes              | Yes    | Yes    | Yes
                 | Help Offer      | Yes              | No     | No     | No
                 | Insider Threat  | Yes              | No     | No     | Yes
Ransoms          | Extortion       | No               | No     | No     | No
                 | Defence Service | No               | No     | No     | No

Confid.: The action can affect the Confidentiality property. | Integ.: The action can affect the Integrity property. | Avail.: The action can affect the Availability property. | Rep.: The action can negatively affect the reputation of the company.

from users of 53 different nationalities. During this time span, many other hashtags regarding the game were very popular; for example, #Cyberpunk2077Hype was retweeted 10,000 times [41].
However, a few days after the release, the Cyberpunk 2077 topic arose again, this time associated with queries related to patches and refunds. In fact, the game was released too early and many bugs were present: due to this, several people asked CD PROJEKT RED for a refund, often also writing a bad review for the game on online stores. This created an "information disease" within the company, just like the one described in Section III: in this case, CD PROJEKT RED's employees became stressed and felt pressure related to the quality of Cyberpunk 2077, in which they had invested more than two years of hard work [42].

In early February 2021, only 60 days after the game's release, CD PROJEKT RED was hit by a ransomware attack, and the attackers were able to extract the source code of several games, along with administrative files [8]. The attackers then threatened to leak or sell the stolen code and files unless the firm paid a large amount of money to the cyber-criminals. In the end, CD PROJEKT RED refused to negotiate with the attackers, stating in a press release that they would "not give in to demands or negotiate with the actor", also confirming that no personal information was obtained in the attack and that they were working with law enforcement to track down the attackers [7], [35]. Later on, security analysts found the stolen source code being auctioned on the dark web for a minimum price of 1 million USD [40]. The auction was closed after the attackers stated they had received an offer that satisfied them [40]. Within a week of these auctions, the code was shared online via social media, and CD PROJEKT RED began using DMCA takedown notices to remove posts containing their code [24].

The social hype that CD PROJEKT RED generated for Cyberpunk 2077 was used by hackers to threaten the company in order to extort money, but it also had a side effect, i.e.
damaging the company's reputation, which can end up undermining the sales of other long-awaited games.

In Table II I show the results of the sentiment analysis, obtained from tweets and comments for the hashtag #CDprojectRED. The data collected from Twitter respects the timeline of Cyberpunk 2077's release and its development; the data shown in the table can be organised into three categories: before release (October and November), during release (December and January), and after the release of Cyberpunk 2077 (February).

It is possible to observe that in October and November the sentiment remained neutral-positive with a few oscillations. In December, when the game was released, I can observe a small increase in negative sentiment due to the high number of bugs present in the game; however, this increment is quite negligible. In January, when a greater number of players were playing the game, the negative sentiment became stronger than the positive one, causing not only a negative compound score (-0.111), but also a neutral-negative sentiment for the game and for the developers. Finally, in February the sentiment returned to neutral overall; however, the negative sentiment is still stronger compared to the one in October and November.

These data show how much pressure the CD PROJEKT RED company had to experience during the release of the game. Additionally, in Figure 3, I show the financial value of the company during the whole game release timeline, also marking the two critical events that occurred: the yellow line indicates the release of the game, while the red line indicates the ransomware attack. I can see that, after the release of the game, the financial value of the company suffered a sudden drop, which was likely conditioned by customers losing trust in the company due to the presence of many bugs in the game, bad reviews and criticism.
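The monthly scores of the kind reported in Table II come from averaging VADER's per-tweet output (negative, neutral, positive, compound). The sketch below is a toy, self-contained stand-in for VADER with a tiny hand-made lexicon and made-up placeholder tweets, not the real VADER model or the actual dataset; it only illustrates the shape of the computation.

```python
import math
import re
from statistics import mean

# Tiny illustrative lexicon; the paper's analysis uses the real VADER model.
LEXICON = {"amazing": 2.0, "hype": 1.5, "love": 1.7,
           "bugs": -1.0, "unplayable": -2.0, "refund": -1.2, "crashing": -1.5}

def polarity_scores(text):
    """Return VADER-shaped scores: neg/neu/pos ratios plus a squashed compound."""
    tokens = re.findall(r"[a-z']+", text.lower())
    vals = [LEXICON.get(t, 0.0) for t in tokens]
    pos = sum(v for v in vals if v > 0)
    neg = -sum(v for v in vals if v < 0)
    neu = float(sum(1 for v in vals if v == 0.0))
    total = (pos + neg + neu) or 1.0
    raw = sum(vals)
    compound = raw / math.sqrt(raw * raw + 15)  # VADER-style normalisation into (-1, 1)
    return {"neg": round(neg / total, 3), "neu": round(neu / total, 3),
            "pos": round(pos / total, 3), "compound": round(compound, 3)}

# Aggregate per month, as in Table II (placeholder tweets, not the dataset).
tweets_by_month = {
    "December": ["I love the hype, amazing game", "so many bugs, unplayable"],
    "January": ["still crashing, asking a refund", "bugs everywhere, refund"],
}
for month, tweets in tweets_by_month.items():
    scores = [polarity_scores(t) for t in tweets]
    row = {k: round(mean(s[k] for s in scores), 3)
           for k in ("neg", "neu", "pos", "compound")}
    print(month, row)
```

With the real `vaderSentiment` package, `SentimentIntensityAnalyzer().polarity_scores(text)` plays the role of `polarity_scores` here; the monthly averaging is unchanged.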
I can see that the company regains more than half the value lost during the next two months; however, the ransomware attack causes another drop in the financial value of the company due to customers losing trust in the company again, this time from a security perspective.

TABLE II
VADER SENTIMENT ON #CYBERPUNK2077 FROM TWITTER

Months   | Negative | Neutral | Positive | Compound
October  | 0.085    | 0.757   | 0.150    | 0.163
November | 0.079    | 0.766   | 0.149    | 0.163
December | 0.087    | 0.750   | 0.161    | 0.153
January  | 0.143    | 0.758   | 0.093    | -0.111
February | 0.104    | 0.745   | 0.145    | 0.120

VI. BUSINESS DEFENCE STRATEGY

To avoid dangerous events for companies, the human factor is a crucial element [37]; however, it is also possible to create specific defence strategies. The failures introduced in Section III, i.e. social failures, communication failures and product or service failures, can be analysed to prevent incidents. To most of us, the news that a particular piece of information (e.g. a meme, a hashtag) went "viral", reaching millions of nodes in a short period of time, may seem purely random and hence unpredictable; but Kolli et al. [21] discovered that, at least 20% of the time, the cascade volume changes in a manner that appears to be random, while in the remaining 80% it is possible to predict the cascade's future volume. Hence, it is possible to create short-term strategies to detect firestorm attacks while they are still in the early stages, i.e. while the latent state is being built. However, it is also possible to create long-term defence strategies with proactive governance.

Fig. 1. Interest Score showing social hype for the release of Cyberpunk 2077
Fig. 2. Queries showing social hype for the release of Cyberpunk 2077
A possible proactive strategy for the long term could be as follows:
1) Organise internal company procedures to help employees protect themselves against various attacks on social media (like LinkedIn);
2) Organise procedures outside the company, such as contacting allied/partner companies for help with the various attacks on social media;
3) Create in advance supporting bots that will defend the company automatically;
4) Create an international database of accounts that have taken part in firestorms. The database, accessible to all organisations, both public and private, will help to understand whether the type of firestorm taking place is real or artificially created [12].
These possible actions can be highlighted by the mass media, which will publicly show that the firestorm is being fought because other people or organisations have begun defending the attacked company. Hence, these actions allow firestorms to calm down, and eventually be extinguished, faster than simply doing nothing.

Fig. 3. Financial value of CD PROJEKT RED and critical events
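The "real or artificially created" check behind point 4 can rest on the account-age heuristic discussed later in this section: if most of the accounts driving a firestorm's early phase were created shortly before it started, they are probably bots. A minimal sketch, where the 30-day window and the 0.5 ratio are arbitrary illustrative thresholds of my own, not values from the paper:

```python
from datetime import date

def looks_artificial(account_creation_dates, firestorm_start,
                     max_age_days=30, bot_ratio=0.5):
    """Flag a firestorm as likely artificial when too many of the early
    participating accounts were created just before it started."""
    young = sum(1 for d in account_creation_dates
                if 0 <= (firestorm_start - d).days <= max_age_days)
    return young / len(account_creation_dates) >= bot_ratio

start = date(2021, 2, 1)
mostly_new = [date(2021, 1, 25)] * 8 + [date(2015, 6, 1)] * 2
mostly_old = [date(2012, 3, 3)] * 9 + [date(2021, 1, 30)]
print(looks_artificial(mostly_new, start))  # True: early accounts freshly created
print(looks_artificial(mostly_old, start))  # False: organic, long-lived accounts
```

In practice the creation dates would come from the social platform's account metadata, and a shared database as in point 4 would let organisations compare flagged accounts across firestorms.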
[15] If a company has done something enormously wrong in the past, it is possible that every time the same company does something wrong again, another firestorm can restart, either for the recent event or for the past one. In fact, a firestorm can come back after an interval of about 2 years [15].

In the case of social failures, there is also an additional side effect that must be mitigated: the firestorm naturally expands to the employees without any manipulation by the adversary. Example defence strategies against this side effect could be implemented as follows:
1) Let people from outside and inside the company dialogue about the topic on social networks (as in the case of carnivores vs vegetarians at ING-DiBa [31]). This strategy can increase the number of followers;
2) Blame an entity that is external to the company as a scapegoat, so the firestorm can move from the company to the designated entity. Even if it is not very moral, it is something that usually works;
3) Depending on the strength, length, and breadth of the attack, it is possible to plan the company's possible reactions:
a) Social failure: If the firestorm is linked to a partner company, or only a certain sector of the company is under attack, immediately distance yourself from them.
b) Communication failure: The goal here is to safeguard the company's reputation and authority. In this case, try to detach yourself immediately from the communication error, and continue with the company's reputation strategy, making it appear that it was just an accident on the road. Furthermore, apologising for the event never hurts.
c) Product or service failure: Instantly block the production of the affected product or the provision of the service. Organise a commission that can evaluate the quality of the product/service.
Even if it is complicated, given the number of partners, quality standards and corporate continuity requirements, this action, if done in time, creates a good defensive shield at the communication level, as people can understand that the company itself has also understood the problem, limiting the damage.

Timing is essential during firestorms: first of all, to understand whether the type of firestorm is real or artificial (you can tell by the creation date of the accounts doing the firestorm: if the initial accounts were created recently, they are probably bots, hence the firestorm is artificial); secondly, to improve the cyber defence and be prepared for a possible cyber attack; third, for the public reaction, because it means that the affected company has noticed the failure faster than, or as fast as, the people who are running the firestorm on social networks, and will promptly react to the problem, reassuring customers that it will be solved. This will help in calming down or extinguishing the firestorm. For example, the carnivores vs vegetarians case at ING-DiBa was caused by a communication failure. The company had never had so much traffic on its Facebook page before, and they saw in this an opportunity to increase the number of their followers. In fact, after a few days had passed from the firestorm, and the attackers were still posting, newly-acquired followers jumped into the debate and started defending the company [31].

Obviously, depending on the type of firestorm, real or artificial, it is necessary for the company to adapt its strategies to the type of attack. The prevention part, of course, works in both cases, but understanding who you are fighting against, and the causes, helps to save the reputation of the company, and sometimes even the company itself.

VII.
FUTURE WORK

In future work, I would like to implement different pressure dynamics, i.e., either rapid, massive, and incisive firestorms, or permanent firestorms run by few accounts. Depending on the firestorm, these types of dynamics can change the pressure on companies and workers in different ways, perhaps showing that for some companies it is better to face a permanent firestorm, and for others a rapid one. Another aspect I would like to draw attention to in future work is how people in the company are contacted, i.e. with messages that are more likely to provoke an ethical reaction, for example when workers are contacted by bots that point out to them the disaster they have caused to their company. This case is very interesting, as it is possible, after 'moralising' the worker, to apply social engineering strategies to facilitate the cyber attack. On the other hand, outside the company, i.e. not focused on employees, strategies can be used to increase the chance of a successful cyber attack, or of extortion of information or money. For instance, during the firestorm, it is possible to contact the company under attack posing as the national cyber security agency, initiating strategies such as:
1) Passing themselves off as the national cyber security agency, the attackers say that most of the accounts are fake and obtain information on the company's security;
2) Passing themselves off as the national cyber security agency, they enter the company's computer systems;
3) Passing themselves off as the national cyber security agency and saying they are carrying out a cyber attack to test the company's cyber defences, they carry out a second attack immediately afterwards, exploiting the information from the first attack and bypassing part of the defences; or they tell the company not to defend itself against the first attack, so as to obtain the desired data.
In any case, these kinds of interactions will be investigated by means of computer simulations, since for obvious ethical reasons it is impossible, or extremely difficult, to apply these strategies in practice.

VIII. CONCLUSIONS

In this paper, I have shown how some events related to cyber security are linked to certain social dynamics. When social dynamics are mixed and linked to cyber purposes, classic attack types (cyber or social attacks) can no longer be defined separately, but rather as social-cyber attacks, as the effectiveness of one also increases the probability of success of the other.

I introduce a novel model allowing researchers and companies to (1) understand when companies and organisations have a fragile defence against a social-cyber attack, (2) illustrate how companies and organisations can defend themselves from firestorms, (3) prove that social-cyber attacks must be defined as possible high-risk, multi-domain events, and (4) show a new model of cyber attack, with a multidisciplinary sociological approach that increases the potential of common cyber attacks. The data collected from the CD PROJEKT RED case shows how these types of attacks, although still little known, may become the norm in the future, as a company's assets are not only its human capital, or the production of goods and/or services, but also its own reputation.

IX. AUTHORS & PAPER INFORMATION

A. Data gathering

I collect tweets related to the topic #Cyberpunk2077 using Tweepy and the Twitter archive API. Both services use Twitter's permissions to obtain and gather data, but any downloaded topic needs a revision and cleaning process to increase the quality of the research. For example, I found many copy-paste tweets (caused by spamming, or fake accounts/bots), and several tweets also contained words that were incomprehensible to the VADER program during the sentiment analysis, which I deleted.
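The cleaning step just described (dropping copy-paste spam and stripping tokens the sentiment tool cannot interpret) can be sketched as follows; the specific normalisation rules below (URL removal, ASCII filtering, case-insensitive deduplication) are my own illustrative assumptions about such a pipeline, not the paper's exact procedure.

```python
import re

def clean_tweets(tweets):
    """Drop exact copy-paste duplicates (spam/bot tweets) and strip tokens a
    sentiment analyser cannot interpret (URLs, non-ASCII glyphs)."""
    seen, cleaned = set(), []
    for t in tweets:
        t = re.sub(r"https?://\S+", "", t)        # remove URLs
        t = t.encode("ascii", "ignore").decode()  # drop non-ASCII glyphs
        t = re.sub(r"\s+", " ", t).strip()        # collapse leftover whitespace
        key = t.lower()
        if t and key not in seen:                 # skip copy-paste duplicates
            seen.add(key)
            cleaned.append(t)
    return cleaned

raw = ["Great game! https://t.co/xyz", "great game!", "Bugs 😡 everywhere", ""]
print(clean_tweets(raw))  # ['Great game!', 'Bugs everywhere']
```

A fuller pipeline would fetch the raw tweets first, e.g. with Tweepy's `Client.search_recent_tweets`, and run this cleaning pass before scoring each tweet with VADER.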
For every topic I use the same methodology to obtain standard, quality data. In addition, to obtain the correct amount of tweets (defined as the number of tweets) for each day/hour, I use getdaytrends.com, a site where it is possible to monitor every topic in real time, as well as older topics. In total, our data count more than ~5,000 tweets. I obtained the financial data of CD PROJEKT RED from the https://www.investing.com/equities/cdproject-historical-data site.

B. Author Contributions

Investigation and data resources, methodology, data cleaning and software, A.R. All authors have read and agreed to the published version of the manuscript.

C. Funding

The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This project has received funding from the University of Catania.

D. Author biographies

Andrea Russo is a PhD candidate in Complex Systems at the University of Catania. He is currently working at the Department of Physics and Astronomy. He has collaborated with CNR IBAM, and has also worked on projects involving technology and society.

His main research field and interests are focused on the study and development of computational social methods to explain social complexity, in particular in fields like politics, economics, business, and defence and security applications.

ORCID: 0000-0003-3816-0539
Corresponding author. Email: Andrea.russo@phd.unict.it

I would like to thank "Vereos" and "Andrea metal clone", who helped me in idealising and refining the paper.

REFERENCES

[1] A. Bryman and E. Bell. Business Research Methods, 2nd ed. Oxford University Press, 2007.
[2] L. Bakos, D. D. Dumitrașcu, and K. Harangus. Human factor preparedness for decentralized crisis management and communication in cyber-physical systems. Sustainability, 11(23):6676, 2019.
[3] G. Carrer and F. Bechis.
Così la Cina fa propaganda in Italia, con i bot. Ecco l'analisi su Twitter di Alkemy per Formiche. Formichiere.it, page 1, 2020.
[4] E. Cartwright, J. Hernandez Castro, and A. Cartwright. To pay or not: game theoretic models of ransomware. Journal of Cybersecurity, 5(1):tyz009, 2019.
[5] M. C. Ciccarelli. Rebuilding employee trust after a scandal. Human Resources Executive, 2018.
[6] K. Creighton. How to restore employee trust after a very public company scandal. HR Daily Advisor, page 1, 2019.
[7] C. Criddle. Cyberpunk 2077 makers CD Projekt hit by ransomware hack. bbc.com, 2021.
[8] D. D. CDProjekt hacked, Gwent source code leaked. eip.gg, 2021.
[9] N. Dawar and M. M. Pillutla. Impact of product-harm crises on brand equity: The moderating role of consumer expectations. Journal of Marketing Research, 37(2):215–226, 2000.
[10] J. Day. Nike: 'no guarantee on child labour'. The Guardian, 2001.
[11] M. Farrell. High speed trading fueled Twitter flash crash. CNN Business, 2013.
[12] F. Arruzzoli. "Il ruolo della cyber threat intelligence nelle organizzazioni" - Zoom.
[13] G. Giovanni and A. Russo. Profilazione sociale e sicurezza nazionale. SOCINT Press, 2021.
[14] G. Halkos and D. Bousinakis. The effect of stress and satisfaction on productivity. International Journal of Productivity and Performance Management, 2010.
[15] N. Hansen, A.-K. Kupfer, and T. Hennig-Thurau. Brand crises in the digital age: The short- and long-term effects of social media firestorms on consumers and brands. International Journal of Research in Marketing, 35(4):557–574, 2018.
[16] K. Hughes-Lartey, M. Li, F. E. Botchey, and Z. Qin. Human factor, a critical weak point in the information security of an organization's internet of things. Heliyon, 7(3):e06522, 2021.
[17] iso.org. ISO 9001:2015. iso.org, 2015.
[18] iso.org. ISO/IEC 27002:2022. iso.org, 2022.
[19] A. G. Kate Connolly and J. Henley.
Chaos in Germany and Italy after suspension of Oxford vaccine. The Guardian, 2021.
[20] R. Knight. If your company is going through a public scandal, should you leave? Harvard Business Review, page 1, 2018.
[21] N. Kolli, N. Balakrishnan, and K. Ramakrishnan. On quantifying predictability in online social media cascades using entropy. In Proceedings of the 2017 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, pages 109–114, 2017.
[22] P. Langlois. 2020 Data Breach Investigations Report, 2020.
[23] J. Lehtonen. Kriisiviestintä. Mainostajien liitto, 1999.
[24] Lorenzo. CD Projekt Red uses DMCA to take down tweets sharing stolen game code, 2022.
[25] G. Mazzarolo and A. D. Jurcut. Insider threats in cyber security: The enemy within the gates, 2019.
[26] K. McLeod. Workers left destitute after hes scandal say bosses had cash to pay wages but refused. Daily Record, page 1, 2019.
[27] M. Monkey. Twitter users not lovin' McDonald's. The Guardian, 2012.
[28] K. Nuortimo, E. Karvonen, and J. Härkönen. Establishing social media firestorm scale via large dataset media analytics. Journal of Marketing Analytics, pages 1–10, 2020.
[29] U.S. Department of Justice. Report on the investigation into Russian interference in the 2016 presidential election. Department of Justice, 2019.
[30] J. Oliver. Learning the lessons of Brent Spar saga. Politico, 1995.
[31] J. Pfeffer, T. Zorbach, and K. M. Carley. Understanding online firestorms: Negative word-of-mouth dynamics in social media networks. Journal of Marketing Communications, 20(1-2):117–128, 2014.
[32] F. M. Rinaldi, G. Giuffrida, and T. Negrete. Real-time monitoring and evaluation - emerging news as predictive process using big data-based approach, 2017.
[33] R. R. Riverso. Barcellona, arrestato l'ex presidente Bartomeu, Mar. 2021.
[34] S. Samonas and D. Coss. The CIA strikes back: Redefining confidentiality, integrity and availability in security.
Journal of Information System Security, 10(3), 2014.
[35] J. Schreier. CD Projekt ransomware hack severely disrupts work on Cyberpunk updates. bloomberg.com, 2021.
[36] U.S. Senate. Background to "Assessing Russian activities and intentions in recent US elections": The analytic process and cyber incident attribution. USA Senate, 2017.
[37] C. Simonelli. Prima educare, poi comprare. Il fattore umano nella lotta al ransomware. Formiche.net, page 1, 29 May 2021.
[38] K. Strauss. How Volkswagen rallied its employees after its emissions scandal (at least for now). Forbes, page 1, 2017.
[39] M. Tang and X. Mao. Information entropy-based metrics for measuring emergences in artificial societies. Entropy, 16(8):4583–4602, 2014.
[40] Topic. CD Projekt Red source code reportedly sells for millions in dark web auction [updated] | Ars Technica, 2022.
[41] A. User. #Cyberpunk2077Hype • United States • Twitter trending hashtag, 2022.
[42] Wikipedia contributors. Cyberpunk 2077. https://it.wikipedia.org/w/index.php?title=Cyberpunk_2077&oldid=130410919.
[43] C. C. Wood and W. W. Banks. Human error: An overlooked but significant information security problem. Comput. Secur., 12(1):51–60, Feb. 1993.

diff --git a/INAzT4oBgHgl3EQfjf2_/content/tmp_files/load_file.txt b/INAzT4oBgHgl3EQfjf2_/content/tmp_files/load_file.txt
new file mode 100644
index 0000000000000000000000000000000000000000..b684f017846cb8c1f7d7f82aea256057cc7d077b
--- /dev/null
+++ b/INAzT4oBgHgl3EQfjf2_/content/tmp_files/load_file.txt
@@ -0,0 +1,670 @@
filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf,len=669
Organised Firestorm as strategy for business cyber-attacks
Andrea Russo
Department of Physics and Astronomy, University of Catania, Italy
Email: andrea.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content='russo@phd.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content='unict.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content='it Abstract—Having a good reputation is paramount for most or- ganisations and companies.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' In fact, having an optimal corporate image allows them to have better transaction relationships with various customers and partners.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' However, such reputation is hard to build and easy to destroy for all kind of business commercial activities (B2C, B2B, B2B2C, B2G).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' A misunderstanding during the communication process to the customers, or just a bad communication strategy, can lead to a disaster for the entire company.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' This is emphasised by the reaction of millions of people on social networks, which can be very detrimental for the corporate image if they react negatively to a certain event.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' This is called a firestorm.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' In this paper, I propose a well-organised strategy for firestorm attacks on organisations, also showing how an adversary can leverage them to obtain private information on the attacked firm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Standard business security procedures are not designed to operate against multi-domain attacks;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' therefore, I will show how it is possible to bypass the classic and advised security procedures by operating different kinds of attack.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' I also propose a different firestorm attack, targeting a specific business company network in an efficient way.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Finally, I present defensive procedures to reduce the negative effect of firestorms on a company.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Index Terms—Firestorm, Cyber-attack, Business Defence, Socio-dynamics, Stress Test, Network Science, Cyberpunk 2077.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' INTRODUCTION Before the advent of social medias, brand crises were largely caused by journalists’ contributions.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Nowadays, a firestorm is a cluster of consumers’ digital word of mouth that highlights some communication error, or some terrible mistake made by a company [15].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' The Cambridge dictionary1 defines the firestorm as “a sudden, and sometimes violent reaction” and the shitstorm as “a wildly chaotic and unmanageable situation, controversy, or sequence of events”.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' In this paper, I will use both these terms interchangeably.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' During the last years, many firestorms took place on the Internet [19], [27], [31], mainly due to the increase of the number of users on social networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' In some cases, firestorms have been formally studied to better understand this phe- nomenon [15], [28], [31].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' In 2007, several researchers debated over firestorms, and one of the main outcomes is that “a natural science model of the research process is suitable for studying the social world but a central issue remaining of whether the social world can, and should be, studied according to the same principles, procedures, and philosophy as the natural sciences” [1].' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' This is relevant because today I are actually able to study and evaluate social dynamics by using 1https://dictionary.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content='cambridge.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content='org the massive amount of data coming from the digital world, with particular emphasis on social networks [32].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Firestorms are not made of a single event with a standard behaviour, instead they are caused by non-linear dynamics leading to complex behaviours.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Due to this, companies must have appropriate procedures to respond to various crisis situa- tions.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Lehtonen’s theory [23] shows that a firestorm develops in five stages: (1) latent stage, where weak signals of the upcoming crisis are received;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' (2) triggering event, where the subject becomes the target of news and social media attention;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' (3) the subject is in the top-news and the media attention spikes;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' (4) the media attention calms down to the level of general philosophical and ethical discussion;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' and (5) there are only minor media hits and attention is guided to other issues [28].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' As firestorms begin when there is a service failure, a social failure or when a company fails to communicate prop- erly [15], this kind of errors can be reduced by following appropriate procedures.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' However, most of the existing quality and security procedures, such as the ones suggested by ISO 9001:2015 [17] and ISO/IEC 27002:2022 [18] are not ade- quate for a multi-domain cyber and social attack.' 
With regard to ISO/IEC 27002:2022, social attacks are outside its scope; ISO 9001:2015, even though it focuses on better business process quality, and thus on a lower firestorm risk from the public, does not mitigate a firestorm started by an attacker. Hence, in this paper I theorise that it is possible for an attacker to intentionally cause a firestorm attack to undermine the reputation of a company, with the side effect of advantaging its competitors. I argue that self-organised firestorm attacks require a high number of bots that are already active on social media: in this case, bots start the firestorm on the target company, spreading fake news (or magnifying a certain event, e.g., a mistake made by the company in the past) that will cause a high volume of real people to react negatively and continue the social attack, unknowingly on behalf of the adversary. Additionally, I argue that Open Source Intelligence (OSINT) could allow an adversary to identify weak spots in the organisation, namely people who most likely cannot react properly or defend themselves from the firestorm, hence being unable to mitigate its impact in a timely manner.
arXiv:2301.01518v1 [cs.CY] 4 Jan 2023

Many workers have a LinkedIn, Facebook, or Twitter account: moving the firestorm to the social media accounts of people who work for the target company can lead to an extremely stressful situation for workers. This could be even worse for people who do not often deal with public relations, and could cause confusion, panic and distress. In fact, when a firestorm arises, even people who work on communication processes and managers can panic, and the fear of losing customers and partners can be very detrimental for any company. When people working in the target firm are in this altered status, I argue it is possible to elaborate a social engineering strategy to capture protected information: in this case, firestorms not only serve the purpose of undermining the corporate image, but are also used as a diversion for a social engineering attack.
In fact, while most important organisations adhere to best practices listed in security standards like ISO/IEC 27002:2022 [18], during a social attack like a firestorm, some best practices and procedures may be distorted or bypassed, whether intentionally or by mistake, due to the pressure applied to the people who are in charge of complying with such procedures [14].

Contributions. The paper makes these contributions: 1) I explain how to make an automated and organised firestorm attack, with only a few manual operations such as the choice of a topic and of a hashtag; 2) I introduce a taxonomy of possible actions that the attacker could perform while conducting the firestorm; 3) I illustrate how the author of a firestorm can evade detection for their attack by targeting single workers instead of the company profiles, while increasing the damage done to the firm; 4) I show possible long- and short-term procedures that a company can implement to mitigate the effect of firestorm attacks.

II. CYBER-ATTACK PLANNING PRELUDE

In this section, I illustrate a novel strategy to artificially cause a firestorm, leveraging a botnet to start agitating real people against a target company. Due to the large number of posts that bots can create within seconds, they can be used to amplify any idea on social networks, influencing political affairs [3] and business company value [33]. For example, due to a cyber-attack on a newspaper's Twitter profile, that newspaper shared fake news about President Obama being injured by a bomb in the White House, causing a flash crash on Wall Street and halting all economic transactions for some minutes. This led to a loss of about 121 billion dollars for the S&P 500 and its related companies [11]. I structure the attack plan in six stages:

1) Finding an event/topic to build the firestorm attack on. This can be a past event or an error that the firm has committed in the past, which will be used as a basis for the upcoming attack. I define this event as the target topic.
2) Using bots to create or amplify the latent state. By leveraging a botnet, an adversary can create a high number of posts on social media, allowing the target topic to reach more people and giving them the opportunity to react negatively. This can eventually lead to a state where real people start to autonomously talk about the subject and begin to spread information about the target topic on their own. To facilitate this, the attacker can reuse an old trending hashtag or create a new one: the hashtag is the keyword to incite social action due to the information symbolised by the word itself.

3) Letting the topic spread among people. The ideal situation for the attacker is that real people begin posting about the target topic, after learning about it from the botnet's posts. This will bring more attention to the topic, possibly making it a trending one.
For example, Twitter allows users to check which topics and hashtags are currently popular. If this happens, there will be a moment in which there are enough people posting about the target topic that the firestorm can sustain itself for days, without any other post coming from the attacker's botnet. I call this moment the fire point.2 Instead, if real people did not react negatively to the topic, or the topic did not reach enough people to allow the firestorm to reach the fire point, the discussion on the topic will slow down and will eventually end. In this case, I say that the firestorm is extinguished. However, the attacker can change the target topic and restart from Stage 1.

4) Identifying human targets. Managers (e.g., Chief Technical Officers, Chief Executive Officers) are the decision makers of a company. The attacker might want to keep a list of these people in order to use their names when the attack moves over from the company's social network profiles to the employees' ones. Identifying the people who are most proud to work for the attacked company can also be helpful in exerting more pressure on the company (since they have more to do with the value of the company).

5) Focusing on workers. During the peak activity of the firestorm, those same bots that built the latent state will move their focus to the public social media profiles owned by employees of the attacked firm. These profiles were identified in the previous step of the attack. This may cause the attention of the firestorm to shift towards the employees, also causing them to experience discomfort.
Because the brand is usually at the centre of the firestorm, focusing on people will have a stronger impact on them, and it can disrupt internal processes.

6) Performing the cyber attack. Because people will pay less attention to following internal procedures, many safety best practices adopted by the company may not be followed properly, or may even be ignored. The attacker can exploit this behaviour to their own advantage.

In order to shift the focus from the company to the worker, it is necessary to optimise the timescale and timing of the transition, as it is not linear for people to attack the worker, but it can happen more easily if the negative event is of high negative impact and value. Shifting the attack onto employees has another side effect, which is beneficial to the attacker: the organisations that are responsible for public cyber security in every country cannot see the firestorm attack on the company page, because the firestorm is focused on workers only. Such organisations will hardly be able to detect all comments and posts focused on workers, allowing the attacker to create a smoky form of the attack, which can bypass conventional security measures, procedures and strategies. Since they have to focus primarily on the company under attack, they will likely not pay much attention to analysing every single interaction against all the operators of the attacked company.

2 In chemistry, the fire point is the lowest temperature at which a certain fuel will continue to burn for a minimum of five seconds, when ignited.

III. BUSINESS SOCIAL MOOD-DISEASE AND NETWORK STRATEGY

The Cambridge Analytica case highlighted the role and the importance of social media for the majority of the population and organisations. A document produced by the American Ministry of Justice, examining possible foreign influence on the US, showed that there actually exist organisations (such as the IRA, the Internet Research Agency) [36] that aim to influence individuals and public and private organisations [29].
A great part of what is needed to successfully influence people lies in understanding the initial conditions of the system, i.e., in the correct profiling of such people through data obtained on social networks. People who are more sensitive to certain issues, and those key people who can most influence the community where they live and work, are the main targets of a social attack, because they have a central role (hubs) in the network. Profiling consists in obtaining (through a process of data collection and subsequent processing) an absolute or almost absolute understanding of a group of individuals or a single person, comprehending their habits and preferences [13]. The information obtained concerns political, musical and social interests, including the identification of their network of friends, colleagues, and much more. This information allows much easier conveying of any content, as it is possible to understand who is most susceptible to and interested in various topics, exploiting their weaknesses, fears and interests.
Furthermore, it is possible to infer who could propagate a certain piece of content through their network, exponentially increasing the chance of success if the subject in question is a person with an important or central role. Cambridge Analytica used the OCEAN model, related to personality traits, to understand the preferences of many people in the US during the national election in 2016 [36]. The OCEAN model makes it possible to send specific messages and contents to people who are sensitive to a certain topic. This method is very different from classic, standard mass communication, because it is possible to send the right content to the right person. Unfortunately, the CA scandal was defined as classic political influence, the old-fashioned way, thus including prostitution, favouritism, etc. In reality, the scandal revealed "a new type of weapon", as Brittany Kaiser (former CA business development director) said during her question time (before the Commons culture committee in 2018) to describe the work done by CA, but also to categorise AI as a real soft-power weapon [13].
However, understanding hot topics for workers is not enough: in order to modify their mood and mount a good social attack, a subject topic needs to be found as well. On social networks, during firestorms, people are usually triggered by three kinds of errors [15]: 1) social failure; 2) communication failure; 3) product or service failure. Although they may seem similar, different types of events can lead to different types of dynamics and reactions. In the case of product or service failures, for example, performance-related crises raise doubts about the brand's ability to deliver basic functional performance [9]. Other research has also identified not only short-term effects on a brand after a firestorm, but has also measured long-term ones, at least two years after the latest firestorm [15]. I hereby give an example for each of the aforementioned triggering factors.

1) Social failure. The firm might be an accomplice in some accident or crime, like Nike with children's shoes [10], [30] or the ING-DiBa case in 2012 [31].
2) Communication failure. The firm might fail to communicate properly, for example making negative comments regarding a certain community or movement [27].

3) Product or service failure. The firm might distribute a product that harms consumers, for example a vaccine that can kill people [19].

These failures and the firestorm stemming from them might cause affected employees to experience discomfort and panic, because coworkers, friends and other people in their network might see the affected employees as the root cause of the firestorm. The social-cyber attack also provokes unexpected passive consequences for companies:

1) The value of the company on the financial market could rapidly decrease [11].

2) People who worked in the company during the firestorm might be subject to discrimination in the future, especially if the firestorm was caused by a (supposedly) unacceptable mistake that could have been avoided [26], [38].
3) Like the people, the offended brand could carry a long-term stigma that would motivate other companies to make job offers to the personnel of the attacked firm. This could put it at an even greater disadvantage, as workers would be incentivised to leave the attacked company and accept the new offer.

The network, as well as the importance and scope of the news, can significantly influence the reaction and dynamics of the company. For example, when a company's workers receive news of high importance, they may behave helplessly in relation to the importance of the news; feeling relieved of responsibility, since the event is bigger than their actions, they tend to pass much of the responsibility on to the company's managers.
Indeed, in times of disorder or chaos, entropy increases with decreasing order, and emergency increases with increasing order: this happens because people within the organisation understand the emergency, and the organisation improves itself to respond to it [39]. When many workers in the company are panicking, the organisation's CCO (Chief Communication Officer) will elaborate and react to the firestorm on the company pages; however, this cannot stop the social attack on the individual profiles of the employees. Hence, even people who are in charge of running communication processes and managers can panic, as the longer the firestorm lasts, the higher the chance of losing clients and reputation. This is a terrible situation for any company, especially after many years of work. However, managers are considered "critical workers" on the organisation chart; hence, they cannot be influenced by social manipulations and social diseases, because of the responsibilities they have in the company.
While during the last century such organisation charts had the form of a pyramid, usually with the CEO at the top, nowadays the AGILE model allows companies to organise their personnel in different ways within their organisation charts. However, the legal and personal responsibility for every error or critical issue will always lie with the top manager of that area; for example, the CISO (Chief Information Security Officer) is usually responsible for cyber security. A network-side strategy can heavily influence workers close to managers and directors, directly contaminating the mood of the team, including the manager. More specifically, the attacker targets the hub of the company network, thereby also defusing other workers in the company. Once the social disease is already widespread in the company, and many people are stressed about the firestorm, the cyber attack can begin.

IV. ASSESSING THE ATTACK SURFACE

In this section, I introduce the possible actions that the adversary (or the real people who contribute to the firestorm) can perform to further disrupt the target company's business processes, to sink its corporate image, or to get hold of classified information. To do so, I introduce a novel classification of these actions and analyse their impact on the fundamental properties of information security, that is, Confidentiality, Integrity and Availability [34]. I show that these actions can be divided into three categories:

1) Controlling Large Scale Entities, that is, thousands or even millions of different actors performing several concurrent actions against a firm. These actors can act both remotely and physically, and can be both robots and humans.

2) Leveraging Internal People, namely, exploiting mistakes made by employees (e.g., because they are stressed due to the firestorm), or having an insider threat who can extract classified information.
3) Asking for Ransoms, that is, the adversary may want to ask for a payment to stop the firestorm. This would cause the bots to be shut down, or even to start defending the company on social media.

I hereby analyse the different actions within each category and their impact. This analysis is summarised in Table I.

A. Controlling Large Scale Entities

a) Denial of Service (DoS) Attacks: The adversary might want to harm the firm's reputation by negating the availability of the services it offers. To this end, the attacker can leverage botnets to send a very high number of requests per second to the target service, overwhelming the server and resulting in the service going down. If possible, the attacker could even reuse the botnet used to create the latent state, and rearm it with a DoS script.
Alternatively, if the adversary is not a single entity but a large group of organised people, a DoS attack can be performed with simple scripts, without leveraging any botnet, as the large number of adversaries could be able to generate the traffic required to overload the server. In this case, however, the adversaries would have to carefully time their attack, and they might want to hide their location, for example by using a VPN. Finally, the adversary could encourage real people to overload the target firm's servers, coordinating the attack through the bot profiles used for the hashtag propaganda.

b) Physical Actions: Business processes can also be interrupted or slowed by legal, yet harmful, physical actions. One example is a demonstration around the firm's premises: employees might not get to their workplace in time because people demonstrating outside the building are blocking or slowing access to the premises, or are creating more traffic than usual on the way to the building. Another example is people calling the organisation's call centers with the only goal of protesting.
B. Leveraging Internal People

a) Human Error: Even though it is widely known that human error is one of the most prominent causes of security incidents [16], [43], most companies still do not adequately invest in training for their personnel, resulting in data breaches or other security-related events [22]. This means that, if the attacker wants to obtain an initial foothold on the target organization's systems, they might be able to do so without needing a firestorm attack, depending on the employees' ability to recognize phishing emails or scam websites. However, workers who are experiencing a firestorm, be it against the company they work for or against their own profile, will be more inclined to break internal policies, hence committing mistakes, due to the perceived crisis [2].

b) Offering Help: During the firestorm's peak activity, the adversary itself contacts the attacked firm, pretending to be a professional (e.g., a consultant) who can help in mitigating the effects of the firestorm, for example as a Social Media Manager who has dealt with firestorms before.
This can happen via emails, social networks or through the corporate website, for example if the firm has some job openings and the adversary pretends to be a candidate. For smaller enterprises, the adversary may even show up in person at the attacked company's premises. If the attacker manages to get hired, they might get access to classified information. I argue the attacker does not want to tamper with documents or attack the firm's infrastructure while being an employee themselves.

c) Insider Threats: Instead of joining the firm themselves, the adversary might establish contact with employees who are still in the attacked company but are not showing support on social media, or have even manifested dissatisfaction towards the company. The attacker might try to persuade them into sharing confidential information, making them insider threats [25]: if they succeed, not only do they acquire classified information, but if the stolen content is also compromising for the firm, it could be published online to damage the firm's reputation even more.
C. Asking for Ransoms

a) Extortion to Stop the Attack: The adversary contacts the attacked firm and proves that the botnet performing the firestorm is under their control. They then ask for an arbitrary amount of money in Bitcoins to shut down the bots, stopping a (hopefully) substantial part of the attack. In fact, if the firestorm has already managed to incite many people to join the social attack, the shutdown of the botnet might not stop or slow down the firestorm. If the adversary plans to attack multiple firms with their firestorms, they want to avoid situations like this, because the odds of a victim paying a ransom are proportional to the reliability of the attacker in stopping the attack once they receive the money. In other words, the attacker must be considered "trusted" to stop the attack if the ransom is paid, so that victims are more incentivized to pay [4].
b) Defence as a Service: The adversary contacts the attacked firm, but instead of showing they are in charge of running the attack and asking for money to stop it, they try to sell a fire(storm)fighter service to the victim, supposedly consisting of bots defending the reputation of the firm. This is basically a reversed firestorm, in which those same bots that built the latent state now defend the company. To avoid drawing excessive attention, the attacker might slowly change the proportion of attacking bots versus defending ones, until they are all defending the company.

V. CASE STUDY: CD PROJEKT RED

On December 10, 2020, CD PROJEKT RED released a long-awaited game called Cyberpunk 2077.
This game was very popular even before its release and generated continuous social hype from the video game community throughout its development, also winning the "Best Game Awaited" prize from the Golden Joystick Awards for two consecutive years [42]. As shown in Figure 1 and Figure 2, hype for the game substantially increased during the 10 days before its release, reaching its apex on December 10, when the hashtag #Cyberpunk2077 was tweeted 193,900 times on Twitter, from users of 53 different nationalities. During this time span, many other hashtags regarding the game were very popular; for example, #Cyberpunk2077Hype was retweeted 10,000 times [41]. However, a few days after the release, the Cyberpunk 2077 topic arose again, this time associated with queries related to patches and refunds.

TABLE I
SOCIAL ATTACK SURFACE ASSESSMENT

                                   Impacts
Category         Action            Confid.  Integ.  Avail.  Rep.
Large Scale      DoS Attack        No       No      Yes     Yes
                 Phys. Actions     No       No      Yes     Yes
Internal People  Human Error       Yes      Yes     Yes     Yes
                 Help Offer        Yes      No      No      No
                 Insider Threat    Yes      No      No      Yes
Ransoms          Extortion         No       No      No      No
                 Defence Service   No       No      No      No

Confid.: The action can affect the Confidentiality property. | Integ.: The action can affect the Integrity property. | Avail.: The action can affect the Availability property. | Rep.: The action can negatively affect the reputation of the company.
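The assessment in Table I can also be encoded directly as data, so that the impact of each action on the CIA properties (plus reputation) can be queried programmatically. The following sketch is my own encoding (the dictionary layout and function name are illustrative, not from the paper); the Yes/No values are transcribed from the table.

```python
# Table I (Social Attack Surface Assessment) as a queryable structure.
# action -> (category, confidentiality, integrity, availability, reputation)
ATTACK_SURFACE = {
    "DoS Attack":      ("Large Scale",     False, False, True,  True),
    "Phys. Actions":   ("Large Scale",     False, False, True,  True),
    "Human Error":     ("Internal People", True,  True,  True,  True),
    "Help Offer":      ("Internal People", True,  False, False, False),
    "Insider Threat":  ("Internal People", True,  False, False, True),
    "Extortion":       ("Ransoms",         False, False, False, False),
    "Defence Service": ("Ransoms",         False, False, False, False),
}

PROPS = ("confidentiality", "integrity", "availability", "reputation")

def actions_affecting(prop):
    """Return the Table I actions that can affect the given property."""
    idx = 1 + PROPS.index(prop)  # skip the category field at position 0
    return sorted(a for a, row in ATTACK_SURFACE.items() if row[idx])
```

For example, `actions_affecting("availability")` returns the DoS and physical actions along with human error, matching the "Avail." column of the table.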
In fact, the game was released too early and many bugs were present: due to this, several people asked CD PROJEKT RED for a refund, often also writing a bad review for the game on online stores. This created an "information-disease" within the company, just like the one described in Section III: in this case, CD PROJEKT RED's employees became stressed and felt pressure related to the quality of Cyberpunk 2077, in which they had invested more than two years of hard work [42]. In early February 2021, only 60 days after the game's release, CD PROJEKT RED was hit by a ransomware attack, and the attackers were able to extract the source code of several games, as well as administrative files [8]. The attackers then threatened to leak or sell the stolen code and files unless the firm paid a large amount of money to the cyber-criminals. In the end, CD PROJEKT RED refused to negotiate with the attackers, stating in a press release that they would "not give in to demands or negotiate with the actor", also confirming that no personal information was obtained in the attack and that they were working with law enforcement to track down the attackers [7], [35].
Later on, security analysts found the stolen source code being auctioned on the dark web for a minimum price of 1 million USD [40]. The auction was closed after the attackers stated they had received an offer that satisfied them [40]. Within a week of these auctions, the code was shared online via social media, and CD PROJEKT RED began using DMCA takedown notices to remove posts containing their code [24]. The social hype that CD PROJEKT RED generated for Cyberpunk 2077 was used by hackers to threaten the company and extort money, but it also had a side effect, i.e., damaging the company's reputation, which may undermine the sales of other long-awaited games. In Table II I show the results of the sentiment analysis, obtained from tweets and comments for the hashtag #CDprojectRED.
Data collected from Twitter respect the timeline of Cyberpunk 2077's release and its development; the data shown in the table can be organised in three categories: before release (October and November), during release (December and January) and after the release of Cyberpunk 2077 (February). It is possible to observe that in October and November the sentiment remained neutral-positive with a few oscillations. In December, when the game was released, I can observe a small increase in the negative sentiment due to the high number of bugs present in the game; however, this increment is quite negligible. In January, when a greater number of players were playing the game, the negative sentiment became stronger than the positive one, causing not only a negative compound (-0.111), but also a neutral-negative sentiment for the game and for the developers. Finally, in February the sentiment returned neutral overall; however, the presence of negative sentiment is still stronger compared to the one in October and November.
These data show how much pressure the CD PROJEKT RED company had to experience during the release of the game. Additionally, in Figure 3, I show the financial value of the company during the whole game release timeline, also marking the two critical events that occurred: the yellow line indicates the release of the game, while the red line indicates the ransomware attack. I can see that, after the release of the game, the financial value of the company suffered a sudden drop, which was likely conditioned by customers losing trust in the company due to the presence of many bugs in the game, bad reviews and criticism. The company regained more than half the value lost over the next two months; however, the ransomware attack caused another drop in the financial value of the company due to customers losing trust in the company again, this time from a security perspective.

TABLE II
VADER SENTIMENT ON #CYBERPUNK2077 FROM TWITTER

Months     Negative  Neutral  Positive  Compound
October    0.085     0.757    0.150     0.163
November   0.079     0.766    0.149     0.163
December   0.087     0.750    0.161     0.153
January    0.143     0.758    0.093     -0.111
February   0.104     0.745    0.145     0.120
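The monthly rows in Table II come from VADER, which scores each tweet with negative/neutral/positive fractions and a normalised compound value in [-1, 1]. The toy sketch below mimics that pipeline with a tiny hand-made lexicon; the lexicon entries, the normalisation constant and the function names are illustrative assumptions of mine, not VADER's actual lexicon or heuristics.

```python
import math

# Hypothetical mini-lexicon: word -> valence on VADER's [-4, 4] scale.
LEXICON = {"great": 3.1, "love": 3.2, "hype": 1.5, "bug": -1.9,
           "refund": -1.3, "broken": -2.2, "disappointed": -2.4}

def polarity_scores(text):
    """Return neg/neu/pos fractions and a normalised compound score."""
    valences = [LEXICON.get(w, 0.0) for w in text.lower().split()]
    total = sum(valences)
    # VADER-style squashing of the summed valence into [-1, 1].
    compound = total / math.sqrt(total * total + 15)
    pos = sum(v for v in valences if v > 0)
    neg = -sum(v for v in valences if v < 0)
    neu = sum(1 for v in valences if v == 0)   # count of neutral words
    norm = (pos + neg + neu) or 1
    return {"neg": neg / norm, "neu": neu / norm,
            "pos": pos / norm, "compound": round(compound, 3)}

def monthly_sentiment(tweets_by_month):
    """Average per-tweet scores into one row per month, as in Table II.

    tweets_by_month: dict mapping month name -> non-empty list of tweets.
    """
    rows = {}
    for month, tweets in tweets_by_month.items():
        scores = [polarity_scores(t) for t in tweets]
        rows[month] = {k: round(sum(s[k] for s in scores) / len(scores), 3)
                       for k in ("neg", "neu", "pos", "compound")}
    return rows
```

A month dominated by tweets about bugs and refunds yields a negative average compound, which is the pattern the January row of Table II shows.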
Fig. 1. Interest Score showing social hype for the release of Cyberpunk 2077
Fig. 2. Queries showing social hype for the release of Cyberpunk 2077

VI. BUSINESS DEFENCE STRATEGY

To avoid dangerous events for companies, the human factor is a crucial element [37]; however, it is also possible to create specific defence strategies. The failures introduced in Section III, i.e., social failures, communication failures and product or service failures, can be analysed to prevent incidents. To most of us, the news that a particular piece of information (e.g., a meme, a hashtag) went "viral", reaching millions of nodes in a short period of time, may seem purely random and hence unpredictable, but Kolli et al. [21] discovered that, at least 20% of the time, the cascade volume changes in a manner that appears to be random, while in the remaining 80% it is possible to predict the cascade's future volume. Hence, it is possible to create short-term strategies to detect firestorm attacks while they are still in the early stages, i.e., while the latent state is being built. However, it is also possible to create long-term defence strategies with a proactive governance. A possible proactive strategy for the long term could be as follows:

1) Organise internal company procedures to help employees protect themselves against various attacks on social media (like LinkedIn);

2) Organise procedures outside the company, such as con-
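As one concrete illustration of such a short-term detection strategy, a company could monitor hashtag mention volume and flag hours that deviate sharply from the recent baseline. The sketch below is a hypothetical illustration of mine, with made-up thresholds; it is not a method from the paper or from Kolli et al. [21].

```python
from statistics import mean, stdev

def firestorm_alert(hourly_mentions, window=24, k=3.0):
    """Flag hours whose mention count exceeds the rolling mean by k sigmas.

    hourly_mentions: list of hashtag mention counts, one per hour.
    Returns the indices of hours that look like an emerging latent state.
    """
    alerts = []
    for i in range(window, len(hourly_mentions)):
        history = hourly_mentions[i - window:i]
        mu, sigma = mean(history), stdev(history)
        # Floor sigma at 1.0 so a flat baseline still has a usable threshold.
        if hourly_mentions[i] > mu + k * max(sigma, 1.0):
            alerts.append(i)
    return alerts
```

With a flat baseline of 100 mentions per hour, a sudden jump to 500 in the next hour is flagged, while normal traffic raises no alert; the point is simply that a latent state being built leaves a measurable volume signature before the firestorm peaks.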
Fig. 3. Financial value of CD PROJEKT RED and critical events
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content='01.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content='2021 25.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content='01.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content='2021 29.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content='01.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content='2021 04.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content='02.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content='2021 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content='02.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content='2021 16.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content='02.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content='2021 22.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content='02.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content='2021 26.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content='02.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content='2021 04.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content='03.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content='2021 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content='03.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content='2021 16.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content='03.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content='2021 22.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content='03.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content='2021 26.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content='03.' 
...contacting allied/partner companies for help with the various attacks on social media; 3) Create in advance supporting bots that will defend the company automatically; 4) Create an international database of accounts that have taken part in firestorms. The database, accessible to all organisations, both public and private, will help to understand whether the firestorm taking place is real or artificially created [12]. These three possible actions can be highlighted by the mass media, which will publicly show that the firestorm is being fought, because other people or organisations have begun defending the attacked company. Hence, these actions allow firestorms to calm down, and eventually to be extinguished, faster than simply doing nothing [15]. If a company has done something enormously wrong in the past, then every time the same company does something wrong again there is a chance that another firestorm can restart, either over the recent event or over the past one.
In fact, the firestorm can come back after an interval of about 2 years [15]. In the case of social failures, there is also an additional side-effect that must be mitigated: the firestorm naturally expands to the employees without any manipulation by the adversary. Example defence strategies against this side-effect could be implemented as follows: 1) Let people from outside and inside the company dialogue about the topic on social networks (as in the carnivores vs vegetarians case at ING-DiBa [31]); this strategy can increase the number of followers; 2) Blame an entity external to the company as a scapegoat, so that the firestorm moves from the company to the designated entity; even if it is not very moral, it is something that usually works; 3) Depending on the strength, length, and breadth of the attack, devise a strategy for the company's possible reaction.
a) Social failure: if the firestorm is linked to a partner company, or only a certain sector of the company is under attack, immediately distance yourself from it. b) Communication failure: the goal here is to safeguard the company's reputation and authority; in this case, try to detach yourself immediately from the communication error and continue with the company's reputation strategy, making it appear that it was just an accident along the road. Furthermore, apologising for the event never hurts. c) Product or service failure: instantly halt the production of the affected product or the provision of the service, and organise a commission that can evaluate the quality of the product/service.
Even if this is complicated given the number of partners, the quality standards, and the need for corporate continuity, this action, if done in time, creates a good defensive shield at the communication level, as people can see that the company itself has also understood the problem, limiting the damage. Timing is essential during firestorms: first, to understand whether the firestorm is real or artificial (you can tell by the creation date of the accounts driving the firestorm; if the initial accounts were created recently, they are probably bots, and the firestorm is therefore artificial); second, to improve the cyber defence and be prepared for a possible cyber attack; third, for the public reaction, because a prompt response shows that the affected company has noticed the failure as fast as, or faster than, the people conducting the firestorm on social networks, and will promptly react to the problem, reassuring customers that it will be solved. This helps in calming down or extinguishing the firestorm. For example, the carnivores vs vegetarians case at ING-DiBa was caused by a communication failure.
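The account-age test for telling a real firestorm from an artificial one can be sketched in a few lines. This is only an illustration of the idea above: the 30-day threshold and the majority rule are assumptions chosen for the sketch, not values stated in the paper.

```python
from datetime import date, timedelta

# Hypothetical heuristic, following the observation above: if most of the
# accounts that started the firestorm were created very recently, the
# firestorm is probably driven by bots, i.e. artificial.
def looks_artificial(account_creation_dates, observed_on, max_age_days=30):
    """Return True when a majority of the initiating accounts are
    younger than `max_age_days` at the time the firestorm is observed."""
    if not account_creation_dates:
        return False
    cutoff = observed_on - timedelta(days=max_age_days)
    recent = sum(1 for d in account_creation_dates if d > cutoff)
    return recent > len(account_creation_dates) / 2

observed = date(2020, 12, 15)
# Two week-old accounts against one from 2018: majority recent -> artificial.
print(looks_artificial([date(2020, 12, 1), date(2020, 12, 10), date(2018, 3, 5)], observed))  # → True
```

In practice the creation dates would come from the social platform's account metadata; the threshold would need tuning per platform.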
The company had never had so much traffic on its Facebook page before, and it saw in this an opportunity to increase the number of its followers. In fact, a few days after the firestorm began, while the attackers were still posting, newly acquired followers jumped into the debate and started defending the company [31]. Obviously, depending on whether the firestorm is real or artificial, the company must adapt its strategies to the type of attack. The prevention part, of course, works in both cases, but understanding who you are fighting against, and why, helps to save the reputation of the company, and sometimes even the company itself.

VII. FUTURE WORK

In future work, I would like to implement different pressure dynamics, i.e., either rapid, massive, and incisive firestorms, or permanent firestorms sustained by a few accounts.
Depending on the firestorm, these dynamics can change the pressure on companies and workers in different ways, perhaps showing that for some companies a permanent firestorm is more effective, while for others a rapid one is. Another aspect I would like to address in future work is how people inside the company are contacted, i.e., with messages that are more likely to provoke an ethical reaction, for example when workers are contacted by bots that point out the disaster they have caused for their company. This case is very interesting, as it is possible, after 'moralising' the worker, to apply social engineering strategies that facilitate the cyber attack. On the other hand, outside the company, i.e., not focusing on employees, strategies can be used to increase the chance of a successful cyber attack, or of extorting information or money.
For instance, during the firestorm it is possible to contact the company under attack while posing as the national cyber security agency, initiating strategies such as: 1) Passing themselves off as the national cyber security agency, the attackers claim that most of the attacking accounts are fake, and obtain information on the company's security; 2) Passing themselves off as the national cyber security agency, they enter the company's computer system; 3) Passing themselves off as the national cyber security agency and claiming to be carrying out a cyber attack to test the company's defences, they carry out a second attack immediately afterwards, exploiting the information from the first attack and bypassing part of the defences; or they tell the company not to defend itself against the first attack so as to obtain the desired data. In any case, these kinds of interactions will be studied by means of computer simulations, since for obvious ethical reasons it is impossible, or at least extremely difficult, to apply these strategies in practice.

VIII. CONCLUSIONS

In this paper, I have shown how some events related to cyber security are linked to certain social dynamics.
When social dynamics are mixed with and linked to cyber purposes, the classic attack types (cyber or social) can no longer be defined separately; they become social-cyber attacks, as the effectiveness of one also increases the probability of success of the other. I introduce a novel model allowing researchers and companies to (1) understand when companies and organisations have a fragile defence against a social-cyber attack, (2) illustrate how companies and organisations can defend themselves from firestorms, (3) argue that a social-cyber attack must be defined as a possible high-risk event spanning multiple domains, and (4) present a new model of cyber attack, with a multidisciplinary sociological approach that increases the potential of a common cyber attack. The data collected from the CD Projekt RED case shows how these types of attacks, although still little known, may become the norm in the future, as a company's assets are not only its human capital or the production of goods and/or services, but also its own reputation.
IX. AUTHORS & PAPER INFORMATION

A. Data gathering

I collect tweets related to the topic #Cyberpunk2077 using Tweepy and the Twitter archive API. Both services use Twitter's permission to obtain and gather data, but any downloaded topic needs revision and a cleaning process to increase the quality of the research. For example, I found many copy-paste tweets (caused by spamming, or by fake accounts/bots), and several tweets contained words that were incomprehensible to the VADER sentiment analysis program, so I deleted them. For every topic I use the same methodology to obtain standard, quality data. In addition, to obtain the correct tweet count (defined as the number of tweets) for each day/hour, I use getdaytrends.com, a site where it is possible to monitor every topic, both in real time and historically.
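The deduplication part of this cleaning process can be sketched as follows. This is a minimal illustration, not the author's actual pipeline: the normalisation rule (case- and whitespace-insensitive exact matching) is an assumption chosen to catch copy-paste reposts from spammers and bots.

```python
# Minimal sketch of the tweet-cleaning step described above.
# Assumption: a "copy-paste tweet" is one whose text matches an earlier
# tweet after collapsing whitespace and ignoring letter case.

def clean_tweets(tweets):
    """Drop empty tweets and exact copy-paste duplicates, preserving
    the order of first occurrence."""
    seen = set()
    cleaned = []
    for text in tweets:
        normalized = " ".join(text.split()).lower()  # collapse whitespace, ignore case
        if not normalized or normalized in seen:
            continue  # empty tweet or copy-paste duplicate
        seen.add(normalized)
        cleaned.append(text)
    return cleaned

raw = [
    "Cyberpunk 2077 refund please!",
    "cyberpunk 2077 refund  please!",   # copy-paste duplicate (case/spacing differ)
    "CD Projekt hit by ransomware",
    "",
]
print(clean_tweets(raw))  # → ['Cyberpunk 2077 refund please!', 'CD Projekt hit by ransomware']
```

A real pipeline would also drop the tweets that the VADER analyser cannot score, as described above.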
In total, our data comprise more than ∼5,000 tweets. I obtained the financial data of CD Projekt RED from https://www.investing.com/equities/cdproject-historical-data.

B. Author Contributions

Investigation and data resources, methodology, data cleaning and software: A.R. All authors have read and agreed to the published version of the manuscript.

C. Funding

The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: this project has received funding from the University of Catania.

D. Author biographies

Andrea Russo is a PhD candidate in Complex Systems at the University of Catania, currently working at the Department of Physics and Astronomy. He has collaborated with CNR IBAM and has worked on projects involving technology and society. His main research interests are the study and development of computational social methods to explain social complexity, in particular in fields such as politics, economics, business, and defence and security applications. ORCID: 0000-0003-3816-0539. Corresponding author. Email: Andrea.russo@phd.unict.it

I would like to thank "Vereos" and "Andrea metal clone", who helped me in conceiving and refining the paper.

REFERENCES

[1] A. Bryman and E. Bell. Business Research Methods, 2nd ed. Oxford: Oxford University Press, 2007.
[2] L. Bakos, D. D. Dumitrascu, and K. Harangus. Human factor preparedness for decentralized crisis management and communication in cyber-physical systems. Sustainability, 11(23):6676, 2019.
[3] G. Carrer and F. Bechis. Così la cina fa propaganda in italia, con i bot. ecco l'analisi su twitter di alkemy per formiche. Formichiere.it, page 1, 2020.
[4] E. Cartwright, J. Hernandez Castro, and A. Cartwright. To pay or not: game theoretic models of ransomware. Journal of Cybersecurity, 5(1):tyz009, 2019.
[5] M. C. Ciccarelli. Rebuilding employee trust after a scandal. Human Resources Executive, 2018.
[6] K. Creighton. How to restore employee trust after a very public company scandal. hrdailyadvisor, page 1, 2019.
[7] C. Criddle. Cyberpunk 2077 makers CD Projekt hit by ransomware hack. bbc.com, 2021.
[8] D. D. CDProjekt hacked, Gwent source code leaked. eip.gg, 2021.
[9] N. Dawar and M. M. Pillutla. Impact of product-harm crises on brand equity: the moderating role of consumer expectations. Journal of Marketing Research, 37(2):215–226, 2000.
[10] J. Day. Nike: 'no guarantee on child labour'. The Guardian, 2001.
[11] M. Farrell.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' High speed trading fueled twitter flash crash.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' CNN Business, 2013.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' [12] FrancescoArruzzoli.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' “il ruolo della cyber threat intelligence nelle organizzazioni” - zoom.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' [13] G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Giovanni and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Russo.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Profilazione sociale e sicurezza nazionale.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' SOCINT Press, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' [14] G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Halkos and D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Bousinakis.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' The effect of stress and satisfaction on productivity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' International Journal of Productivity and Performance Management, 2010.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' [15] N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Hansen, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content='-K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Kupfer, and T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Hennig-Thurau.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Brand crises in the digital age: The short-and long-term effects of social media firestorms on consumers and brands.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' International Journal of Research in Marketing, 35(4):557–574, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' [16] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Hughes-Lartey, M.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Li, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Botchey, and Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Qin.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Human factor, a critical weak point in the information security of an organization’s internet of things.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Heliyon, 7(3):e06522, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' [17] iso.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content='org.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Iso 9001:2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' iso.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content='org, 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' [18] iso.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content='org.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Iso/iec 27002:2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' iso.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content='org, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' [19] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Kate Connolly and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Henley.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Chaos in germany and italy after suspension of oxford vaccine.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' The Guardian, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' [20] R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Knight.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' If your company is going through a public scandal, should you leave?' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Harvard Business review, page 1, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' [21] N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Kolli, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Balakrishnan, and K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Ramakrishnan.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' On quantifying pre- dictability in online social media cascades using entropy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' In Proceedings of the 2017 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining 2017, pages 109–114, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' [22] P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Langlois.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' 2020 data breach investigations report, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' [23] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Lehtonen.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Kriisiviestintä.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Mainostajien liitto, 1999.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' [24] Lorenzo.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Cd projekt red uses dmca to take down tweets sharing stolen game code, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' [25] G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Mazzarolo and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Jurcut.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Insider threats in cyber security: The enemy within the gates, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' [26] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' McLeod.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Workers left destitute after hes scandal say bosses had cash to pay wages but refused.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' dailyrecord, page 1, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' [27] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Monkey.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Twitter users not lovin’ mcdonald’s.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' The Guardian, 2012.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' [28] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Nuortimo, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Karvonen, and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Härkönen.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Establishing social media firestorm scale via large dataset media analytics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Journal of Marketing Analytics, pages 1–10, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' [29] U.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' of Justice.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Report on the investigation into russian interference in the 2016 presidential election.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Department of justice, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' [30] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Oliver.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Learning the lessons of brent spar saga.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Politico, 1995.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' [31] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Pfeffer, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Zorbach, and K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Carley.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Understanding online firestorms: Negative word-of-mouth dynamics in social media networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Journal of Marketing Communications, 20(1-2):117–128, 2014.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' [32] F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Rinaldi, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Giuffrida, and T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Negrete.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Real-time monitoring and evaluation-emerging news as predictive process using big data-based approach, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' [33] R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Riverso.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Barcellona, arrestato l’ex presidente Bartomeu, Mar.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' [34] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Samonas and D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Coss.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' The cia strikes back: Redefining confidentiality, integrity and availability in security.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Journal of Information System Security, 10(3), 2014.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' [35] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Schreier.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Cd projekt ransomware hack severely disrupts work on cyberpunk updates.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' bloomberg.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content='com, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' [36] I.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' senate USA.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Background to “assessing russian activities and inten- tions in recent us elections”: The analytic process and cyber incident attribution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' USA Senate, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' [37] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Simonelli.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Prima educare, poi comprare.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' il fattore umano nella lotta al ransomware.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Formiche.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content='net, page 1, 29 maggio 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' [38] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Strauss.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' How volkswagen rallied its employees after its emissions scandal (at least for now).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Forbes, page 1, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' [39] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Tang and X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Mao.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Information entropy-based metrics for measuring emergences in artificial societies.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Entropy, 16(8):4583–4602, 2014.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' [40] Topic.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Cd projekt red source code reportedly sells for millions in dark web auction [updated] | ars technica, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' [41] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' User.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' #cyberpunk2077hype • united states • twitter trending hashtag, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' [42] Wikipedia contributors.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Cyberpunk 2077.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' https://it.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content='wikipedia.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content='org/w/ index.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content='php?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content='title=Cyberpunk_2077&oldid=130410919.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Accessed: NA- NA-NA.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' [43] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' Wood and W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/INAzT4oBgHgl3EQfjf2_/content/2301.01518v1.pdf'} +page_content=' W.' 
diff --git a/J9FIT4oBgHgl3EQfZytY/content/tmp_files/2301.11254v1.pdf.txt b/J9FIT4oBgHgl3EQfZytY/content/tmp_files/2301.11254v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..c98b27df46ecff428b845516bd45d82634a270db
--- /dev/null
+++ b/J9FIT4oBgHgl3EQfZytY/content/tmp_files/2301.11254v1.pdf.txt
@@ -0,0 +1,1646 @@

Femtosecond Laser Engraved 2D Tunable Optofluidic Liquid Core/Air Cladding Channel Waveguides on PDMS

Sanyogita*, Amar Ghar and P. K. Panigrahi
Centre for Lasers and Photonics, Indian Institute of Technology, Kanpur-208016 (UP).
sanyogita.iitk@gmail.com

We demonstrate the fabrication and characterization of 2D liquid-based multimode optical waveguide structures on a polydimethylsiloxane (PDMS) chip. Two separate microstructures, one with a width of 14 micron and a depth of 27 micron and the other with both width and depth of 110 micron, were fabricated by femtosecond laser micromachining.
The dye solution is passed through the microstructure from one end to the other; the dye solution acts as the core while PDMS and air act as the cladding. The femtosecond laser micromachining parameters are optimized in terms of laser power, pulse width, writing speed, and focused beam size. The quality of the fabricated microstructures is confirmed by microscopic analysis. Liquid core/air cladding waveguiding is confirmed through spectral and modal analysis, performed on the fluorescence light coupled out of waveguide structures filled with different dye solutions. These waveguide structures provide strong light confinement and intense interaction between the dye solution and the pump light, and they are tunable in terms of intensity, wavelength, and beam size. Such microstructures can be used in the design and development of lab-on-chip microlasers and sensing components for multifunctional lab-on-chip devices.

Introduction
Optofluidics is a research platform in which the advantages of optics and microfluidics are combined on a single chip, enabling highly compact, portable, and multifunctional devices [1]. This optofluidic lab-on-a-chip (LOC) approach offers great potential for low-cost optical sources, sensors, liquid-liquid waveguides, liquid-core waveguides, and real-time detection. In photonic science, and especially in the micro- and nano-regime, routing fluid and light along the same path makes it possible to reconfigure a device according to the fluid chosen as the guiding medium, providing a dynamic and practical tuning mechanism that is customizable in real time [2, 3].
+Nonetheless, the fabrication and characterization process are +complicated owing to the miniscule dimensions of such +microstructures and managing the required smoothness at the +edges of microchannel and waveguide wall. High precision handling +of chip is also a must to minimize optical losses and for accurate +control over light and fluid in the micro/nano regime to maintain +good functionality. In the liquid core/air cladding waveguide chip, +the refractive index of core material has to be higher than that of +the cladding so as to enable total internal reflection (TIR) +phenomenon for the refractive index guided mode. Moreover, dye +solutions with different host materials and concentrations have +broad range variation in refractive index to that of water. Such an +enhanced range helps in sustaining the liquid core-air waveguide +over the long flow path for a much higher operational time. This +feature provides for a substantial increase in wider applications of +mode for such type of optofluidic chip. +Optofluidic waveguides can confine light in small dimensions and +generate high intensity optical beam over a long distance, creating +a potential for tremendous applications in the field of +environmental monitoring, bio-sensing, analytical chemistry etc. [4]. + +Various methods have been proposed to fabricate 2D structures; +among them, structure fabrication using soft lithography process is +widely prevalent [5,6]. But the soft lithography process in itself have +a number of disadvantages like involvement of multiple fabrication +steps, high rate of errors while achieving required depth of +microstructures, longer time of fabrication etc. Most noticeable +drawback of soft lithography is that it requires another lithography +method such as photolithography or e-beam lithography to +fabricate the stamp master used in further development process of +microstructure [6]. 
On the other hand, Femtosecond laser based +direct writing has many advantages over other conventional +methods such Excimer laser writing, CO2 laser writing-beam +lithography and soft lithography etc.[6,7] for fabrication of +microstructures. Femtosecond laser interaction with soft materials +has opened up a new field of waveguide fabrication methods for +structures on the surface as well as inside of transparent materials. +A femtosecond laser emits pulsed beams with durations of tens or +hundreds of femtosecond region which, nowadays, are used for +high-quality micro and nanofabrication. As the energy deposition +time of femtosecond laser is shorter than time required to release +the energy in the form of heat using electron-photon coupling +process, heat affected zone is completely suppressed during the +laser pulse interaction even with soft material like PDMS [7]. This +feature enables laser processing on PDMS with high precision and +resolution. Another advantage of femtosecond laser processing +over conventional methods is the capability of sculpturing complex +shapes at micro and nanoscale in transparent materials. With the +help of focused fs-laser beam one can achieve extremely high peak +intensity in the focused region which provides for high precision in +setting up interaction region at the surface or even inside the +volume. This feature not only eliminates a complicated and multiple +patterning processing, involved in the conventional methods like +photolithography for 2D fabrication, but also makes it feasible to + + + + + + + +create complex 2D structures which were not easily achievable by +other conventional methods. The application of femtosecond +micromachining to develop the optofluidic devices improves their +structural and optical qualities to such an extent that it could +provide a major alternate platform to innovate and produce novel +optical devices on mass production level. 
Hence, this unique +technique is going to contribute as a promising tool in the photonics +fields and will help in emergence of new businesses once it reaches +commercialization. +In this paper, we have demonstrated the fabrication of micro +structures by using femtosecond direct writing along with +development of liquid core-based waveguide. Structuring of 2D +micro channels on the surface of PDMS is fabricated by f-s laser. +These microchannels are converted to a super hydrophobic nature +which can provide for an effective wave guiding. For light flow path, +R6G and RH101 dye solutions were selected as liquid core medium. +These dyes are distributed evenly along the length of the two +prototypes that we have fabricated as two microchannels. +Concentration of dye solution is chosen in such a way that +refractive index of liquid medium is slightly higher than that of +PDMS and air so that the PDMS and air ends up acting as a clad. +Cross sections of these waveguide systems were captured by a CCD +camera. Role of incident power, concentration of liquid dye and +photo bleaching have been successfully studied thereof. +Experimental Details +Femtosecond laser micromachining process has been used to +fabricate two distinct dimensioned microstructures, each on a +separate PDMS surfaces with a provision of inlet and outlet at the +terminal ends for flow of liquid across the microchannel. These +microchannel act as two unique liquid core/air clad waveguides. Fig. +1 shows the schematics of experimental set up for femtosecond +laser-based micromachining system. The proposed experiment +consists of regenerative Ti: Sapphire based amplified laser system +(CLRK-MXR, USA) capable of delivering a maximum output power of +800 mW with pulse width of 120 fs having central wavelength of +775 nm and repetition rate of 1 KHz. + + + + + + + + + + + + + + +Fig. 
1: Femtosecond micromachining fabrication setup for 2D +Microstructures/hallow waveguide structure on PDMS +The output beam from fs-laser system is focused on surface of +PDMS sample using 10X objective lens and beam aligning system +(OPTEC Belgium). All the microstructures are created by successive +translator movements of PDMS sample mounted on micro-position +stage without any movement of focused laser beam. The PDMS +substrate is irradiated with focused laser beam. The key steps in the +experiment includes focusing lens and micro-position translation +stage with 1 um resolution as shown in Fig 1. The focusing objective +lenses are used to converge the laser beam providing a greater +depth of field and smaller spot size as per the calculated +requirement which is important for precision laser micro-machining +process. Micro-position stage is used to move the sample as per the +designed program. The computer-controlled laser power and +micromachining system ensures that position errors and beam +distortions are minimized over the entire scan region. + + + + + + + + + + +Fig. 2: Schematics of: a. Waveguide-I cross section; b. Waveguide- +II cross section +For this experimental study, two straight microchannels on +separate surfaces of PDMS have been fabricated successfully with +different focusing lens. Both the microchannels are fabricated with +different lasing power and focusing lenses. First microstructure +(larger microchannel) is fabricated with a width of 110 µm and a +depth of 110 µm and the second microstructure (smaller +microchannel) with a width of 14 um and a depth of 27.937 µm as +shown in Fig. 2. The larger microchannel has been fabricated by +setting the laser power at 25 mW with a spot size of 15 µm (writing +speed was kept 1mm/sec) and using multi-pass laser scan over the +square shaped cross section. Based on multimode waveguide, the +target cross-section is scanned 10 times horizontally and 5 times +vertically with a beam overlap of 10 µm. 
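The "extremely high peak intensity" reached by the focused fs-laser beam can be sanity-checked from the writing parameters quoted above. The sketch below is a back-of-envelope estimate only; it assumes the full 25 mW average power reaches the sample at the 1 kHz repetition rate, a rectangular 120 fs temporal profile, and a uniform circular 15 µm focal spot.

```python
import math

# Nominal writing parameters for the larger channel (assumptions noted above).
avg_power = 25e-3       # W, average power at the sample
rep_rate = 1e3          # Hz, repetition rate
pulse_width = 120e-15   # s, pulse duration
spot_diameter = 15e-6   # m, focused spot size

pulse_energy = avg_power / rep_rate        # J per pulse
peak_power = pulse_energy / pulse_width    # W, rectangular-pulse approximation
spot_area = math.pi * (spot_diameter / 2) ** 2
peak_intensity = peak_power / spot_area    # W/m^2, uniform-spot approximation

print(f"pulse energy   : {pulse_energy * 1e6:.0f} uJ")
print(f"peak power     : {peak_power / 1e9:.2f} GW")
print(f"peak intensity : {peak_intensity / 1e4:.1e} W/cm^2")
```

Intensities on the order of 10^14 W/cm^2 are far above the nonlinear-absorption threshold of transparent polymers, which is why the deposition is confined to the focal region.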
Inlets and outlets were also fabricated by the fs-laser using multi-pass scanning. The smaller microchannel (waveguide I) was likewise fabricated with a multi-pass scan, but with slightly different writing parameters: a laser power of 18 mW, a beam spot size of 8 µm, and only two horizontal scans with a beam overlap of 6 µm (writing speed 1 mm/s). The resulting channel measured 14 µm in width and 27.937 µm in depth. To flow the dye solutions through the fabricated channels, uniform inlets and outlets connected to the central microchannels were also machined with multi-pass, multi-scan fs-laser writing. For the larger microchannel, the inlet and outlet measure 110 µm in width and 40 µm in depth; for the smaller microchannel they measure 110 µm in width and 20 µm in depth. In both cases the inlet and outlet were kept shallower than the central microchannel for easy flow of liquid into it.
The width and depth of the developed microstructures were confirmed by image analysis with a confocal microscope (Olympus LEXT OLS 4000), as shown in Fig. 3. This system is capable of resolution up to 10 nm in the Z direction and 120 nm in the X-Y plane. The superhydrophobic channels are effective in creating an air cladding between the dye-filled liquid core and the solid PDMS walls, thus supporting TIR and waveguiding. Because the guiding is two-dimensional, some scattering and diffraction of visible light at the channel walls still persists, and light also undergoes TIR at the front end of the channel.
Femtosecond structuring also renders the PDMS channel wall hydrophobic, which controls the waveguide losses. The contact angle of the femtosecond direct-written 2D microchannel was measured as shown in Fig. 4: hydrophobicity was checked on a surface modified by femtosecond-laser exposure with the same parameters used to fabricate the microstructures on PDMS. The channel was found to have become hydrophobic. These hydrophobic channels have a low solid fraction and can effectively support the liquid-core/air-cladding waveguide configuration on a lab-on-chip platform. Hence, this structure allows effective control and flow of light from one end to the other.
Fig. 3: (a) 2D waveguide structure-I over PDMS; (b) cross section of waveguide structure-I; (c) 2D microstructure-II over PDMS; (d) cross section of microstructure-II.
Fig. 4: Contact angle measurement for (a) a plain PDMS surface and (b) a PDMS surface exposed to the femtosecond laser.
Implementation of the microstructure as an optical waveguide
The two fabricated microchannels, with 2D square and rectangular cross sections respectively, are filled with a liquid dye medium in order to convert them into liquid-based multimode waveguide microstructures.
The structures act as liquid-core waveguide +platform when the refractive index (n) of cladding material +(PDMS/air) is smaller than that of the flowing dye solution which +acts as the core and enable the total internal reflection for the +configuration of the index-guided mode [8, 9] +The waveguide losses are also sensitive to the roughness of the +surfaces of the waveguide walls. As the waveguide walls are pretty +smooth in case of femtosecond fabrication, the losses are very +much minimized in comparison to other conventional fabrication +methods. Other challenges and issues in these experiments are also +resolved as gas (i.e., air) is used as cladding material [9, 10]. Air has +a much lower refractive index (nair=1.0) than most of the solid and +liquid materials, thus it allows a wider range of incident angles. Air +also has much lower viscosity than that of any liquid so that it can +significantly reduce the hydrodynamic friction and Joule heating at +the interface between the core and the cladding [10]. Higher +refractive index difference between the liquid core and air cladding +(Δn= 0.407) helps to increase the amount of light trapped inside the +core and avoids the diffusional mixing problem normally observed +in liquid to liquid L2 waveguide. + + + + + + + + + + + + + + + + + + + + + + + + + +In presented case, two types of dyes have been used as the gain +material to demonstrate the concept of liquid-air waveguide on a +chip. First dye is Rhodamine-6G dissolved in ethanol and benzyl +alcohol while the second one is Rhodamine-101 dissolved in +mixture of ethanol + benzyl alcohol in a concentration range of +1mM to 5mM for both liquid core solutions. The corresponding +change of refractive index of fluid observed by varying the dye +solution concentration for both dye solutions is measured by the +refractometer (Abbemat 500). 
The refractive index difference between core and cladding lies between 10^-3 and 10^-2 as the concentration of R6G and Rh101 is varied from 1% to 10%. The measurements show that dye solutions of different concentrations can act as distinct liquid-core media with varying characteristics. For example, the refractive index of a 1 mM Rh-6G dye solution in the ethanol + benzyl alcohol mixture (n2 = 1.4030) is higher than that of the cladding materials, i.e., air (n1 = 1) and PDMS (n3 = 1.40). The liquid-filled channel then acts as a core, and light propagates through the liquid-core waveguide by satisfying the condition of total internal reflection. This is demonstrated by the fluorescence emerging at the other end of the waveguide. The characteristics differ markedly between the gain materials as they are confined at the liquid-air interface.
Fig. 5: Ray-tracing simulation using FRED for the two liquid waveguide structures, viewed from the top. In both cases the core (liquid dye solution) is the lightly shaded region embedded in the darker cladding region: (a) multimode propagation at the liquid-air interface, 110 micron width (waveguide II); (b) multimode propagation at the liquid-PDMS interface (waveguide II); (c) multimode propagation at the liquid-air interface, 14 micron width (waveguide I); (d) multimode propagation at the liquid-PDMS interface (waveguide I); (e) mode-field distribution at the liquid-air interface for waveguide I; (f) mode-field distribution at the liquid-air interface for waveguide II.
Characterization
For any waveguide structure, there is a range of ray angles that fulfills the total internal reflection condition, set by the relative refractive index difference between the core and cladding regions. Here, dye solutions of different concentrations act as the core medium and PDMS/air act as the cladding. The number of TIR bounces is inversely proportional to the diameter, or cross section, of the microchannel. The ray-tracing simulation platform FRED is used to understand the propagation of 532 nm fluorescence light through the dye-filled microstructures. The optical losses at the liquid-air and liquid-PDMS interfaces for the multimode and single-mode microstructures, respectively, are obtained as shown in Fig. 5. A Gaussian beam from a coherent laser source is coupled into one end of each waveguide with a 10X objective lens, at normal incidence, with the dye solution filling the microstructure. The simulation takes the liquid dye with refractive index 1.4030 as the core medium, embedded in PDMS (index 1.40) as the substrate with air (index 1) above; both claddings thus form lower-index regions around the core. The results show that light can be coupled into the microstructure filled with the 1 mM dye solution, confirming its waveguide nature.
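The TIR condition can be made concrete with the index values quoted above (dye solution 1.4030, PDMS 1.40, air 1.0). The short sketch below computes the critical angle and the acceptance numerical aperture for each of the two cladding interfaces; it is a plain step-index calculation, not part of the FRED simulation.

```python
import math

def critical_angle_deg(n_core, n_clad):
    """Angle from the interface normal above which total internal reflection occurs."""
    return math.degrees(math.asin(n_clad / n_core))

def numerical_aperture(n_core, n_clad):
    """Acceptance NA of a step-index guide for end-fire coupling."""
    return math.sqrt(n_core**2 - n_clad**2)

# Indices quoted in the text: 1 mM Rh-6G solution, PDMS substrate, air superstrate.
n_dye, n_pdms, n_air = 1.4030, 1.40, 1.0

# Liquid-air interface: large contrast, TIR already at ~45 deg, NA ~ 0.98.
print(critical_angle_deg(n_dye, n_air), numerical_aperture(n_dye, n_air))

# Liquid-PDMS interface: contrast of only 3e-3, TIR only near grazing (~86 deg).
print(critical_angle_deg(n_dye, n_pdms), numerical_aperture(n_dye, n_pdms))
```

The asymmetry between the two interfaces is consistent with the simulation result that losses at the liquid-air interface are much lower than at the liquid-PDMS interface.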
It is also clear from this study that the optical losses at the liquid-air interface are lower than at the liquid-PDMS interface, irrespective of the waveguide dimensions. The dimensions do, however, affect the number of total internal reflections per unit length. The waveguide structure with the smaller diameter is more suitable for liquid mode guiding, increasing the probability that more photons reach the output end. These results confirm that laser light can propagate through the 2D liquid-core waveguide structures by satisfying the total internal reflection condition at the interface between the liquid core and the PDMS/air cladding. These observations also make clear that many complications and challenges are easily overcome for the propagating index-guided mode when air is used as a cladding material.
In this experiment, we filled the dye solutions in the ethanol and benzyl alcohol mixture into the two microchannels (each 15 mm long, with 110 µm and 14 µm widths respectively) on the PDMS chip. The end-fire coupling method is used for optical characterization of the developed liquid waveguide structures; the schematic of the characterization setup is shown in Fig. 6. Light from the Nd:YAG laser is end-coupled into waveguide I and waveguide II through the objective lens and optics assembly also shown in Fig. 6. The roughness of the PDMS walls of both 2D microchannels is limited to approximately 1 µm, owing to the quality of femtosecond direct writing. To characterize the chip, a microsyringe was used to insert the liquid dyes into the microchannels as the core medium. The required liquid-core dyes are obtained using ethanol + benzyl alcohol as the host solution with the two solutes Rh-6G and Rh-101 to form two different dyes.
Respective mixtures of these two solutes in varying concentrations act as liquid cores within the two microstructures.
Fig. 6: Characterization setup for liquid-core/air-cladding waveguiding.
As the absorption spectra of Rh-6G and Rh-101 lie in the visible range, we selected an Nd:YAG laser with 4 mW power, 7 ns pulse duration, and 10 Hz repetition rate as the pump source. This Nd:YAG laser excites the fluorescent dye molecules dissolved in the liquid core. The source is aligned through a beam iris and a 10X objective lens, which reduces the beam spot size to ~100 µm for the waveguide II structure and ~10 µm for the waveguide I structure. As the light and liquid are fed simultaneously into the microchannel, the high refractive index difference between the liquid core and air guides the fluorescence, which is captured at the other end of the microchannel. The outlet end is connected to an optical spectrometer, and fluorescence spectra are measured while varying the laser power and the dye concentration.
Modal cross-sectional analysis of the waveguide structures: of the two structures shown in Figs. 3 and 7, the first (waveguide II) is multimode and allows multimodal tuning of the liquid core, while the other (waveguide I) supports only a few propagating modes. Separating the fluorescence signal from the excitation light requires spectroscopic analysis, and it is quite difficult to separate these two outputs at the output end of the channel. The intensity profile of the fluorescent light generated in and propagated through the liquid waveguide structures was measured using the near-field intensity profile measurement setup shown in Fig. 6. The output profiles of both waveguide structures were captured with a CCD equipped with a band-pass filter for the pump light (λ = 532 nm).
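The qualitative multimode/few-mode distinction between the two channels can be given an order-of-magnitude check with a step-index V-number estimate. This is a rough sketch: it treats each rectangular channel's half-width as an effective core radius (a fiber-style approximation) and uses the weaker liquid-PDMS index contrast quoted in the text as a lower bound on the confinement.

```python
import math

def numerical_aperture(n_core, n_clad):
    return math.sqrt(n_core**2 - n_clad**2)

def v_number(core_radius_m, wavelength_m, n_core, n_clad):
    """V parameter of a step-index guide; V >> 2.405 implies many guided modes."""
    return 2 * math.pi * core_radius_m / wavelength_m * numerical_aperture(n_core, n_clad)

def approx_mode_count(v):
    # Large-V estimate for a step-index multimode guide.
    return v * v / 2

wavelength = 532e-9            # pump wavelength
n_dye, n_pdms = 1.4030, 1.40   # indices quoted in the text

# Liquid-PDMS contrast gives the lower bound; the liquid-air side supports far more modes.
for width_um in (14, 110):
    v = v_number(width_um * 1e-6 / 2, wavelength, n_dye, n_pdms)
    print(f"{width_um:3d} um channel: V ~ {v:.1f}, ~{approx_mode_count(v):.0f} modes")
```

Even at the weak liquid-PDMS interface the 110 µm channel supports of order a thousand modes, while the 14 µm channel supports roughly an order of magnitude fewer, consistent with the strongly multimode behavior of waveguide II relative to waveguide I.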
The intensity at the output end of each liquid waveguide structure and the corresponding intensity profile are shown in Fig. 7. The profile measurements make clear that the fabricated microstructures support index-guided modes and can serve as waveguide-like structures for various applications. The small size of the input beam (~100 µm) relative to that of the liquid core (100 µm) helps reduce the coupling losses of the pump light at the microchannel cross section. The increase in coupling and propagation losses is due to growing scattering and diffraction of visible light at the PDMS channel walls (i.e., the air/dye-solution/PDMS interfaces at the front and the end) at normal incidence.
Fig. 7: Intensity distribution for light propagating through (a) waveguide I and (b) the multimode waveguide II liquid-core/air-clad cross section.
Fig. 8: Comparative emission spectra for the waveguide I structure, the waveguide II structure, and a cuvette, for (a) Rh-6G and (b) Rh-101 dye solutions.
Results and discussion
To confirm the waveguide nature of the dye-filled 2D microstructures, we studied the fluorescence spectroscopy of a 3 mM dye (Rh-6G) liquid medium in three configurations: (a) a quartz cuvette, (b) the waveguide II structure, and (c) the waveguide I structure. The fluorescence emission spectra were collected for all three in order to determine the effect of the microstructure dimensions on the emission output. The emission spectral peak wavelength shifts by 15 nm between the microstructures and a cuvette filled with the same Rh-101 dye solution, all pumped by the same Nd:YAG laser at 4 mW, as shown in Fig. 8b.
A similar shift is observed for Rh-6G, as shown in Fig. 8a. The increase in output photon density confirms the coupling of the fluorescence inside the waveguide structure. It is also clear from the figure that the FWHM of the fluorescence spectra narrows from the cuvette to waveguide structure I. This spectral narrowing arises from the Fabry-Perot resonator formed by the dye-filled liquid waveguide and its solvent-air interfaces. The result confirms that fluorescence generated by the dye solutions is coupled through the microchannel and forms Fabry-Perot-type oscillations, leading to the conclusion that the 2D structure fabricated on the PDMS surface functions as a liquid-core/air-cladding waveguide. In addition, comparing the two waveguides and the quartz cuvette shows that the dynamics of the fluorescence spectra also change: the intensity, peak wavelength, and linewidth vary with the dimensions of each structure. The same behavior is observed for the Rh-101 dye solution: the FWHM of the fluorescence signal from the quartz cuvette is 48.8 nm with a peak wavelength of 637.59 nm; in the multimode waveguide II the linewidth is 13.53 nm with a peak at 624.10 nm; and in the waveguide I structure the linewidth is 6.94 nm with a peak at 623.75 nm. For the Rh-6G dye solution, the cuvette FWHM is 42.89 nm with a peak at 580.90 nm; the multimode waveguide II structure gives 14.52 nm with a peak at 573 nm; and in the waveguide I structure the linewidth reduces to 5.34 nm with the peak shifted to 573.70 nm. This comparison shows that the peak wavelengths in waveguide structures I and II are blue-shifted relative to the cuvette output. The quartz cuvette produces a broad fluorescence spectrum, while, owing to the small dimensions of the microchannels, the linewidth of waveguide structure II is smaller than that of the cuvette, and waveguide structure I is narrower still.
Effect of power in the higher concentration regime
To characterize these fs-written microchannels as multimode waveguide microstructures I and II, we studied the effect of pump power for the Rh-6G and Rh-101 dye solutions. Varying the pump power produces significant tunability of the fluorescence spectra. All measurements were performed at room temperature. Fig. 9 shows the emission spectra measured with 10 mM Rh-6G in both liquid-core/air waveguide structures I and II. The input power was varied from 4 to 12 mW in both cases. For lower concentrations, the fluorescence peak wavelength changed negligibly with incident laser power, but for 10 mM a peak wavelength shift was observed as the power was varied, with the fluorescence peak emerging more strongly as the optical pumping power density increased.
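The Fabry-Perot interpretation of the spectral narrowing discussed earlier in this section can be put in rough numbers. The sketch below estimates the longitudinal mode spacing (free spectral range), assuming the cavity is the full 15 mm dye-filled channel bounded by its solvent-air end faces, with core index 1.403 and emission near 580 nm; these are assumed values taken from the text, not a fitted model.

```python
def fabry_perot_fsr_nm(wavelength_nm, n_core, cavity_length_mm):
    """Free spectral range (in nm) of a Fabry-Perot cavity of optical length n*L."""
    length_nm = cavity_length_mm * 1e6
    return wavelength_nm**2 / (2 * n_core * length_nm)

# Assumed cavity: 15 mm channel, core index 1.403, emission near 580 nm.
fsr = fabry_perot_fsr_nm(580.0, 1.403, 15.0)
print(f"mode spacing ~ {fsr * 1e3:.1f} pm")
```

A mode spacing of a few picometres is far below the nm-scale linewidths reported here, so individual cavity modes would not be resolved by the spectrometer; only the narrowing of the overall fluorescence envelope is observed, which is consistent with the measurements.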
Absorption of the incident laser beam changes the refractive index gradient of the dye solution by 10^-3 to 10^-4 through an optically heated thermal-lensing effect [11]. The incident high-power pulsed beam also generates acoustic pressure waves inside the dye-filled liquid waveguide structure, which induce further variation in the refractive index of the medium [11, 12]. In this way, the incident laser power plays a significant role in the shift of the fluorescence peak wavelength and the output spectrum, as reflected in the experimental results of Fig. 9. In the low concentration regime the dye molecules are isolated, but as the concentration increases, the spacing between molecules decreases and aggregates form; peak wavelength variation is therefore seen only in the very high concentration regime. The other phenomenon contributing to the modified output spectra at higher concentrations is self-absorption. The formation of molecular dimers at high concentration explains the appearance of a second shift in the measured fluorescence spectra, such that a red shift is observed for the 10 mM dye concentration as the power is varied from 4 mW to 12 mW. From Fig. 9, the peak wavelength of the Rh6G solution in multimode waveguide structure II is 579.8 nm at 4 mW pump power; at 6 mW it shifts to 581.42 nm, and at higher powers the red-shifted peak reaches 583.25 nm. The same experiment was repeated for the Rh-101 dye solution: with a 10 mM solution in multimode waveguide structure II at 2 mW, the peak wavelength is 626.48 nm.
The amount of light guided inside the multimode waveguides I and II depends strongly on the refractive index difference between n_core and n_clad:
Δn = n_core − n_clad
Rh-6G and Rh-101 are dissolved in a mixture of ethanol and benzyl alcohol as the host solution.
Fig. 9: Pump-power-dependent fluorescence emission spectra for (a) Rh6G in waveguide structure I, (b) Rh6G in multimode waveguide structure II, (c) Rh101 in waveguide structure I, and (d) Rh101 in waveguide structure II (A = 4 mW, B = 6 mW, C = 8 mW, D = 10 mW, E = 12 mW).
The experiment is repeated for waveguide structure I with the same solution, with light collected from the output end of the waveguide. Light guided inside waveguide structure I is observed at its cross section, and the measurements show that the peak wavelength and linewidth change for the same power and concentration: the peak wavelength changes slightly, but the linewidth changes drastically in waveguide I compared with waveguide II. For waveguide structure I, a red shift of the fluorescence emission peak is observed for both dyes as the pump power is varied from 4 mW to 12 mW in steps of 2 mW.
The corresponding tunability +achieved is in the range of 579.87-583.25 nm and average line +width is 6.8 nm in case of Rh-6G. For the waveguide structure I, in +case of Rh-101 based active solution, tunability achieved is 7 nm +and observed average line width is 6 nm. For Rh-101 dye in +waveguide structure I tunability achieved is in the range of 620.49- +628.44 nm. In the case of waveguide structure II, for Rh-6G, red +shift in peak wavelength has been observed. Tunability of peak +wavelength being 4 nm and average line width being 10 nm. In the +case of Rh-101, the tunability of 6 nm is achieved. For multimode +waveguide structure II in case of Rh-101, spectral tunability is +achieved in the range of 626.48 - 632.50 nm and average line width +is 10 nm. In case of Rh-6G, for same multimode waveguide II, +spectral tunability is achieved in the range of 579.87-583.25 nm and +average FWHM line width is 9.5 nm. + +Effect of concentration +The tunability in output band of liquid filled microstructures is +mainly determined by selection of dye solution and its solubility +limit to highly dilute systems of Rh-6G and Rh-101. In case of lower +concentration regime (concentrations of 0.1 mM), component of +self-absorption is quite significant which decrease the intensity of +signal. In addition, at higher concentrations regime (concentrations +of 10 mM), intermolecular self-quenching rapidly decreased the +output intensity [11]. Particularly, in high concentration regime, the +Rh-6G and Rh-101 molecules arrange themselves into H type and J +type dimmers [14, 15 & 16]. This dimmer formation changes the +electronic structure and as a result, the output emission spectrum is +also changed. In this way, the variation in the concentration of +liquid medium provides an optical flexibility for liquid waveguide +structures. 
+The experimental observation for spectral dependency of liquid +waveguide structures for varying concentration of Rh-6G and Rh- +101 dye solution at fixed pump power is as shown in below Fig. +(10). The detailed analysis of output spectra for Rh-6G +concentrations ranging from 1 mM to 4 mM and Rh-101 +concentrations ranging from 1 mM to 5 mM have been done. + +It was observed that the spectral position of the propagating mode +through the liquid waveguide structure shifts toward longer +wavelengths by increasing the concentration of dye solution. In +case of waveguide I filled Rh-6G solution, the peak wavelength shift +observed from 573.16 nm to 580.67nm for 1mM to 4 mM +concentration change. Along with peak wavelength, average line +width shift is also observed from to 5 -6.01 nm for the same. For Rh- +101 filled waveguide I , 5nm shift in peak wavelength and ± 2 nm +sift in line width is observed when concentration changes from 1 +mM to 5 mM respectively. As Fig. 10 shows, the wavelength of the +peak maximum is red-shifted with varying concentration. The same +experiments were carried out for multimode waveguide structure II +for both dye solutions. Similarly, spectral study for different +concentrations in multimode structure II for Rh-6G dye, 8 nm red +shift in peak wavelength and 1.5 nm shift in linewidth have been +observed while 5 nm peak wavelength red shift with ± 2 nm +linewidth shift has been observed for Rh-101 respectively. Here, the +peaks occurred at different wavelengths as per the changing +concentration of liquid medium. Red shift in the output spectra is +observed when concentration is increased from 1 mM to 4 mM. The +apparent red shift in the emitted intensity signal is due to the small +Stokes shift of Rh-6G and the large spectral overlap in absorption +and emission [13, 14]. Same observations have been seen for Rh- +101 solution. 
The optimum optical absorption of the pump beam inside the dye-filled microchannel is achieved at a concentration of 1 mM.

Photo bleaching effect in microstructure
The rate of photo bleaching primarily depends upon the type of dye, the host material and their optical properties. Additionally, the illumination intensity of the source, its wavelength, the exposure time and the temperature also affect the extent of photo bleaching [16, 17]. Photo bleaching is not a desirable phenomenon for lab-on-chip based optofluidic waveguides and optofluidic lasers: it disrupts the continuous output of a miniaturized device and limits its usage to short time periods only. Here, we have studied the photo bleaching effect in waveguide structures I and II for both Rh-6G and Rh-101 dye mediums. This study helps us design and improve the functionalities of optofluidic chips.

As a consequence of photo bleaching due to long exposure of the liquid active medium to the pump intensity, the fluorophores lose the ability to emit fluorescence at the same intensity. The line width and intensity of the fluorescence output change significantly due to the photo bleaching effect in the liquid waveguide. Owing to diffusion dynamics in the presence of on-chip reservoirs, micro dye lasers do not require a supply of unbleached dye solution on a faster time scale. In the studied case, the length of the microchannel is 15 mm and the width is 110 micron (W/L = 0.0073) for waveguide structure II, while for waveguide I, W/L = 0.00093. In both cases, longitudinal coupling of light has been done in the slit area. The usable operation time of the waveguides, limited by photo bleaching, extends to a few minutes without using any costly liquid handling devices or dye replacement. Here, we have used the static operation of the liquid waveguides, without external fluidic handling systems such as syringe pumps.
The experimentally observed fluorescence dynamics is in qualitative agreement with the bleaching-diffusion dynamics [17-19].

In microsystems, photo bleaching creates unwanted intensity changes in the output. The quantum yield of photo bleaching and the molar extinction coefficient are inherent properties of Rhodamine-6G and Rhodamine-101. For static measurements inside microstructures, the dominant factors for photo bleaching inside waveguide structures filled with dilute solutions can be estimated by applying Beer's law as [14]:

Aout = Ain exp(-I0 Qph te)

where Aout is the amount of emitting molecules remaining after photo bleaching, Ain is the original concentration of absorbed dye molecules, I0 is the incident light irradiance, Qph is the quantum yield of photo bleaching and te is the exposure time. From the above equation, it is clear that the quantity of photo-bleached molecules inside the solution depends exponentially on the exposure time and pump intensity; therefore, even a small increase in time or light intensity results in a substantial increase in the amount of photo bleaching. Our experimental results reveal that these optofluidic waveguides can be operated over a few minutes without needing a flow of fresh dye solution, as shown in Fig. 11. In the case of Rh-6G solution, the photo bleaching time is observed to be 70 s for waveguide I and 180 s for multimode waveguide structure II, while for Rh-101 the photo bleaching time is observed to be 90 s and 225 s for waveguide structures I and II respectively.

Fig. 10: Studies of concentration variation-based fluorescence emission spectra for multimode waveguide structures I and II for Rh6G and Rh101: (a) Rh6G filled structure I (b) Rh6G filled multimode structure II (c) Rh101 filled structure I (d) Rh101 filled multimode structure II. (Panels plot photon counts (A.U.) versus wavelength (nm) for varying concentration.)

Fig. 11: Photobleaching studies for (a) Rh6G in multimode structure II (b) Rh6G in structure I (c) Rh101 in multimode structure II and (d) Rh101 in waveguide structure I. (Panels plot photon counts (A.U.) versus wavelength (nm) for increasing exposure times.)
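The exponential bleaching law can be sketched numerically. This is a minimal illustration of the Beer's-law expression quoted above; the irradiance and quantum-yield values below are arbitrary placeholders for demonstration, not measured parameters of these waveguides:

```python
import math

def remaining_dye(a_in, irradiance, q_ph, t_exp):
    """Unbleached dye remaining after exposure: Aout = Ain * exp(-I0 * Qph * te)."""
    return a_in * math.exp(-irradiance * q_ph * t_exp)

# Placeholder values (illustrative only): with I0 * Qph = 0.01 per second,
# doubling the exposure time squares the surviving fraction.
a0 = 1.0
after_70s = remaining_dye(a0, irradiance=1.0, q_ph=0.01, t_exp=70)
after_140s = remaining_dye(a0, irradiance=1.0, q_ph=0.01, t_exp=140)
```

Because te and I0 enter the exponent as a product, halving the pump irradiance buys the same bleaching lifetime as halving the exposure time.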
This experiment confirms that the decay time of Rh-101 is slightly greater than that of Rh-6G; this observed behavior is consistent with previous publications [17, 20]. The photo bleaching time can further be improved by a factor of 3 to 4 by adding reservoirs on chip. Also, by converting the fabricated 2D structures into a 3D chip and using a different pumping scheme, the developed liquid waveguide structure can be used in established optofluidic devices with output more than sufficient for lab-on-chip experiments.

Conclusion:
In conclusion, we have demonstrated novel femtosecond-laser-fabricated liquid-core/air-clad waveguide microstructures on a PDMS microchip. We have studied in detail the roles of concentration, photo bleaching and incident power on the output of the waveguides. This work gives a good understanding of the interaction of light and fluid at micro dimensions. Tunability in terms of intensity, wavelength and line width has been successfully obtained. The characteristics of these waveguide sources can be easily controlled and modulated by adjusting the fluid properties of the core medium. After converting these 2D chips into 3D chips and adding some optical components, the liquid waveguide source can be made into a tunable optofluidic laser, a coherent light source that can be integrated with multifunctional lab-on-chip systems. In this way, fluorescence measurement and detection by optofluidic devices can provide a powerful platform for the analysis of biological systems and aid significantly in medical diagnostics and chemical detection. This research gives a brief idea of the development and maintenance of highly functional lab-on-chip waveguides, which can also be used outside the laboratory for many applications.

Acknowledgement:
We acknowledge the support provided by CMTI Bangalore, India for the femtosecond micromachining fabrication facility.
References:
1. B. Helbo, A. Kristensen, and A. Menon, "A micro-cavity fluidic dye laser," J. Micromech. Microeng., 2003, 13(2), 307-311.
2. D. Psaltis, S. R. Quake, and C. Yang, "Developing optofluidic technology through the fusion of microfluidics and optics," Nature, 2006, 442(7101), 381-386.
3. Z. Li and D. Psaltis, "Optofluidic dye lasers," Microfluid. Nanofluidics, 2008, 4(1-2), 145-158.
4. L. Pang, H. M. Chen et al., "Optofluidic devices and applications in photonics, sensing and imaging," Lab on a Chip, 2012, 12, 3543-3551.
5. D. A. Chang-Yen, R. K. Eich, and B. K. Gale, "A monolithic PDMS waveguide system fabricated using soft-lithography techniques," J. Lightwave Technol., 2005, 23(6), 2088-2093.
6. P. R. Konari et al., "Experimental Analysis of Laser Micromachining of Microchannels in Common Microfluidic Substrates," Micromachines, 2021, 12, 138.
7. F. Sima, K. Sugioka et al., "Three-dimensional femtosecond laser processing for lab-on-a-chip application," Nanophotonics, 2018, 7(3), 613-634.
8. Y. Yan et al., "A tunable 3D optofluidic waveguide dye laser via two centrifugal Dean flow streams," Lab on a Chip, 2011, 11, 3182.
9. S. Vandewiele et al., "Single-mode air-clad liquid-core waveguides on a surface energy patterned substrate," Optics Letters, 2014, 39(16).
10. P. Fei et al., "A compact optofluidic cytometer with integrated liquid-core/PDMS-cladding waveguides," Lab Chip, 2012, 12, 3700-3706.
11. S. K. Mishra et al., "Measurement of Thermo Optical Coefficient for Commonly used Dye Solvents," International Journal of Photonics and Optical Technology, 2018, 4(2), 12-16.
12. S. M. Eaton, C. De Marco, R. Martinez-Vazquez, R. Ramponi, S. Turri, G. Cerullo, R. Osellame, "Femtosecond laser microstructuring for polymeric lab-on-chips," Journal of Biophotonics, 2012, 5(8-9).
13. A. Penzkofer and W. Leupacher, "Fluorescence behaviour of highly concentrated rhodamine 6G solutions," Journal of Luminescence, 1987, 37, 61-72.
14. F. M. Zehentbauer et al., "Fluorescence spectroscopy of Rhodamine 6G: Concentration and solvent effects," Spectrochimica Acta Part A: Molecular and Biomolecular Spectroscopy, 2014, 121, 147-151.
15. K. Noack, J. Kiefer, et al., "Concentration dependent hydrogen bonding effects on the dimethyl sulfoxide vibrational structure in the presence of water, methanol and ethanol," ChemPhysChem, 2010, 11, 630-637.
16. V. I. Gavrilenko, M. A. Noginov, "Ab initio study of optical properties of Rhodamine 6G molecular dimers," Journal of Chemical Physics, 2006, 124, 044301.
17. M. Gersborg-Hansen et al., "Bleaching and diffusion dynamics in optofluidic dye lasers," Applied Physics Letters, 2007, 90, 143501.
18. J. Widengren et al., "Mechanisms of photobleaching investigated by fluorescence correlation spectroscopy," Bioimaging, 1996, 4, 149-15.
19. M. Chapman et al., "Rhodamine 6G Structural Changes in Water/Ethanol Mixed Solvent," Journal of Fluorescence, 2018, 28, 1431-1437.
20. J. Laverdant et al., "Experimental Determination of the Fluorescence Quantum Yield of Semiconductor Nanocrystals," Materials, 2011, 4, 1182-1193.

Femtosecond Laser Engraved 2D Tunable Optofluidic Liquid Core/Air Cladding Channel Waveguides on PDMS
Sanyogita*, Amar Ghar and P. K. Panigrahi
Centre for Lasers and Photonics, Indian Institute of Technology, Kanpur-208016 (UP).
sanyogita.iitk@gmail.com

We have demonstrated the fabrication and characterization of 2D liquid-based multimode optical waveguide structures on a polydimethylsiloxane (PDMS) chip. Fabrication of two separate microstructures, one with a width of 14 micron and a depth of 27 micron and the other with a width as well as a depth of 110 micron, was achieved by the femtosecond laser micromachining process. The dye solution is passed through the microstructure from one end to the other, wherein the dye solution acts as the core while PDMS and air act as the cladding medium.
The femtosecond laser micromachining parameters are optimized in terms of laser power, pulse width, writing speed, focused beam size etc. The quality of the fabricated microstructures is confirmed by microscopic analysis. Confirmation of the liquid core/air cladding based waveguide is obtained through spectral and modal analysis. The optical analysis has been done using fluorescence light coupled out from the waveguide structures filled with different dye solutions. These waveguide structures give strong light confinement and intense interaction between the dye solution and the pump light. The developed microstructures are tunable in terms of intensity, wavelength and beam size. Such microstructures can be implemented in the design and development of lab-on-chip micro lasers and sensing applications in any multifunctional lab-on-chip device.
Introduction
Optofluidics is a great research platform where the advantages of both optics and microfluidics can be combined in a single chip to move towards highly compact, portable and multifunctional devices [1]. This optofluidic lab-on-a-chip (LOC) approach provides huge potential in terms of low-cost optical sources, sensors, liquid-liquid waveguides, liquid core waveguides and real-time detection. Particularly in photonic science, and more specifically in the micro and nano regime, the integration of fluid and light in the same path offers the capacity to reconfigure the device in accordance with the choice of fluid medium, thus providing a dynamic and powerful practical tuning mechanism and making it customizable in real time [2, 3]. Nonetheless, the fabrication and characterization processes are complicated owing to the minuscule dimensions of such microstructures and the smoothness required at the edges of the microchannel and waveguide walls. High-precision handling of the chip is also a must to minimize optical losses and to accurately control light and fluid in the micro/nano regime to maintain good functionality.
In the liquid core/air cladding waveguide chip, the refractive index of the core material has to be higher than that of the cladding so as to enable the total internal reflection (TIR) phenomenon for the refractive-index-guided mode. Moreover, dye solutions with different host materials and concentrations offer a broad range of refractive index variation relative to water. Such an enhanced range helps in sustaining the liquid core-air waveguide over a long flow path for a much higher operational time. This feature provides for a substantial increase in the range of applications of this type of optofluidic chip. Optofluidic waveguides can confine light in small dimensions and generate a high-intensity optical beam over a long distance, creating potential for tremendous applications in the fields of environmental monitoring, bio-sensing, analytical chemistry etc. [4].

Various methods have been proposed to fabricate 2D structures; among them, fabrication using the soft lithography process is widely prevalent [5, 6]. But the soft lithography process itself has a number of disadvantages, such as multiple fabrication steps, a high rate of errors in achieving the required depth of microstructures, and long fabrication times. The most noticeable drawback of soft lithography is that it requires another lithography method, such as photolithography or e-beam lithography, to fabricate the stamp master used in further development of the microstructure [6]. On the other hand, femtosecond-laser-based direct writing has many advantages over other conventional methods for fabricating microstructures, such as excimer laser writing, CO2 laser writing, e-beam lithography and soft lithography [6, 7]. Femtosecond laser interaction with soft materials has opened up a new field of waveguide fabrication methods for structures on the surface as well as inside transparent materials.
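The TIR condition above can be made concrete with a quick numerical-aperture estimate for a step-index guide, NA = sqrt(n_core^2 - n_clad^2). The indices below are assumed illustrative values (a dye solution tuned just above PDMS, with air on the open side), not values reported in this work:

```python
import math

def numerical_aperture(n_core, n_clad):
    """Step-index NA; TIR (and hence guiding) requires n_core > n_clad."""
    if n_core <= n_clad:
        raise ValueError("no TIR: core index must exceed cladding index")
    return math.sqrt(n_core ** 2 - n_clad ** 2)

# Assumed indices: dye core slightly above PDMS, air as the other cladding.
n_dye, n_pdms, n_air = 1.43, 1.41, 1.00
na_pdms_wall = numerical_aperture(n_dye, n_pdms)  # weak confinement, ~0.24
na_air_wall = numerical_aperture(n_dye, n_air)    # > 1: air wall reflects all guided rays
```

The asymmetry is the point of the air cladding: the tiny index contrast at a PDMS wall accepts only near-axial rays, whereas the core-air interface traps everything that is already guided.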
A femtosecond laser emits pulsed beams with durations of tens or hundreds of femtoseconds, which nowadays are used for high-quality micro- and nanofabrication. As the energy deposition time of a femtosecond pulse is shorter than the time required to release the energy as heat through the electron-phonon coupling process, the heat-affected zone is completely suppressed during the laser pulse interaction, even with a soft material like PDMS [7]. This feature enables laser processing on PDMS with high precision and resolution. Another advantage of femtosecond laser processing over conventional methods is the capability of sculpting complex shapes at the micro- and nanoscale in transparent materials. With the help of a focused fs-laser beam, one can achieve extremely high peak intensity in the focal region, which provides high precision in setting up the interaction region at the surface or even inside the volume.
This feature not only eliminates the complicated, multi-step patterning processes involved in conventional methods like photolithography for 2D fabrication, but also makes it feasible to create complex 2D structures that were not easily achievable by other conventional methods. The application of femtosecond micromachining to develop optofluidic devices improves their structural and optical qualities to such an extent that it could provide a major alternative platform to innovate and produce novel optical devices at mass-production level. Hence, this unique technique is going to be a promising tool in the photonics field and will help in the emergence of new businesses once it reaches commercialization. In this paper, we have demonstrated the fabrication of microstructures by femtosecond direct writing along with the development of a liquid-core-based waveguide. 2D microchannels are structured on the surface of PDMS by the fs-laser. These microchannels are rendered superhydrophobic, which can provide effective wave guiding.
For the light flow path, R6G and RH101 dye solutions were selected as the liquid core medium. These dyes are distributed evenly along the length of the two prototypes that we have fabricated as two microchannels. The concentration of the dye solution is chosen in such a way that the refractive index of the liquid medium is slightly higher than that of PDMS and air, so that the PDMS and air act as the cladding. Cross sections of these waveguide systems were captured by a CCD camera. The roles of incident power, concentration of the liquid dye and photo bleaching have been successfully studied.

Experimental Details
The femtosecond laser micromachining process has been used to fabricate two microstructures of distinct dimensions, each on a separate PDMS surface, with inlet and outlet provisions at the terminal ends for the flow of liquid across the microchannel. These microchannels act as two unique liquid core/air clad waveguides. Fig. 1 shows the schematics of the experimental setup for the femtosecond-laser-based micromachining system. The setup consists of a regenerative Ti:Sapphire amplified laser system (Clark-MXR, USA) capable of delivering a maximum output power of 800 mW with a pulse width of 120 fs, a central wavelength of 775 nm and a repetition rate of 1 kHz.

Fig. 1: Femtosecond micromachining fabrication setup for 2D microstructures/hollow waveguide structures on PDMS.

The output beam from the fs-laser system is focused on the surface of the PDMS sample using a 10X objective lens and a beam-aligning system (OPTEC, Belgium). All the microstructures are created by successive translation movements of the PDMS sample mounted on a micro-positioning stage, without any movement of the focused laser beam. The PDMS substrate is irradiated with the focused laser beam. The key elements of the experiment are the focusing lens and a micro-positioning translation stage with 1 µm resolution, as shown in Fig. 1.
The focusing objective lenses are used to converge the laser beam, providing a greater depth of field and a smaller spot size as per the calculated requirement, which is important for the precision laser micromachining process. The micro-positioning stage is used to move the sample as per the designed program. The computer-controlled laser power and micromachining system ensures that position errors and beam distortions are minimized over the entire scan region.

Fig. 2: Schematics of: (a) Waveguide-I cross section; (b) Waveguide-II cross section.

For this experimental study, two straight microchannels on separate PDMS surfaces have been fabricated successfully, using different lasing powers and focusing lenses for the two channels.
The first microstructure (larger microchannel) is fabricated with a width of 110 µm and a depth of 110 µm, and the second microstructure (smaller microchannel) with a width of 14 µm and a depth of 27.937 µm, as shown in Fig. 2. The larger microchannel has been fabricated by setting the laser power at 25 mW with a spot size of 15 µm (the writing speed was kept at 1 mm/s) and using a multi-pass laser scan over the square-shaped cross section. For the multimode waveguide, the target cross section is scanned 10 times horizontally and 5 times vertically with a beam overlap of 10 µm. Fabrication of the inlet and outlet has also been done by fs-laser using a multi-pass laser scan. The smaller microchannel (waveguide I) has likewise been fabricated with a multi-pass laser scan, but with slightly different writing parameters.
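A rough mode count for the two channel sizes follows from the normalized frequency V = (2*pi*a/lambda)*NA, with a the channel half-width. The NA and wavelength below are assumed illustrative numbers (NA of about 0.24 at a PDMS wall, emission near 570 nm), not values measured in this work; they merely show why both channels, and especially the 110 µm one, guide many modes:

```python
import math

def v_number(width_m, wavelength_m, na):
    """Normalized frequency of a step-index channel of full width `width_m`."""
    return 2 * math.pi * (width_m / 2) / wavelength_m * na

NA, WL = 0.24, 570e-9                # assumed: PDMS-wall NA, Rh-6G emission wavelength
v_small = v_number(14e-6, WL, NA)    # ~18: already strongly multimode
v_large = v_number(110e-6, WL, NA)   # ~145: hundreds of guided modes
```

Since V scales linearly with the channel width, the 110 µm channel supports roughly eight times as many modes as the 14 µm one under these assumptions.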
Here the laser power was 18 mW with a beam spot size of 8 µm, and horizontal scanning was done only twice with a beam overlap of 6 µm (writing speed 1 mm/s). After measurement, the width of the channel was found to be 14 µm and the depth 27.937 µm. In order to flow the dye solutions through the fabricated channels, uniform inlets and outlets connected to the central microchannels have also been fabricated with multi-pass, multi-scan fs-laser writing. The inlet and outlet of the bigger microchannel measure 110 µm in width and 40 µm in depth; for the smaller microchannel the width was 110 µm and the depth 20 µm. In both cases we have kept the depth of the inlet and outlet less than that of the central microchannel, for easy flow of liquid into it.

The corresponding width and depth of the developed microstructures have been confirmed by image analysis with a confocal microscope (Olympus LEXT OLS 4000), as shown in Fig. 3. This system is capable of resolution up to 10 nm in the Z direction and 120 nm in the X-Y plane. The superhydrophobic channels are effective in creating air cladding between the dye-filled liquid core and the solid walls of PDMS, thus providing good conditions for TIR and wave guiding. Here, owing to 2D wave guiding, some scattering and diffraction of visible light at the channel walls still persists. Light undergoes TIR at the front end of the channel too. Due to the femtosecond structuring of the PDMS material, the PDMS channel wall is also made hydrophobic, which controls the losses of the waveguide.
After measuring the contact angle for the femtosecond direct-written 2D microchannel, as shown in Fig. 4, the hydrophobicity was checked for a contact surface modified by exposure to the femtosecond laser with parameters similar to those used to fabricate the microstructures on PDMS. It was found that the channel had been converted into a hydrophobic channel. These hydrophobic channels have a low solid fraction that can effectively support the liquid-core/air-cladding waveguide configuration on a lab-on-chip platform. Hence, this unique structure allows effective control and flow of light from one end to the other.
Fig. 3: (a) 2D waveguide structure-I over PDMS, (b) cross section of waveguide structure-I, (c) 2D microstructure-II over PDMS, (d) cross section of microstructure-II.
Fig. 4: Contact angle measurement for (a) plane PDMS surface and (b) PDMS surface exposed to the femtosecond laser.
Implementation of microstructure as an optical waveguide
The two fabricated microchannels, with 2D square and rectangular cross sections respectively, are filled with a liquid dye medium in order to convert them into liquid-based multimode waveguide microstructures. The structures act as a liquid-core waveguide platform when the refractive index (n) of the cladding material (PDMS/air) is smaller than that of the flowing dye solution, which acts as the core and enables total internal reflection for the configuration of the index-guided mode [8, 9]. The waveguide losses are also sensitive to the roughness of the waveguide walls. As the waveguide walls are quite smooth in the case of femtosecond fabrication, the losses are greatly reduced in comparison to other conventional fabrication methods. Other challenges and issues in these experiments are also resolved when gas (i.e., air) is used as the cladding material [9, 10]. Air has a much lower refractive index (n_air = 1.0) than most solid and liquid materials, and thus allows a wider range of incident angles. Air also has much lower viscosity than any liquid, so it can significantly reduce the hydrodynamic friction and Joule heating at the interface between the core and the cladding [10]. The higher refractive index difference between the liquid core and the air cladding (Δn = 0.407) helps to increase the amount of light trapped inside the core and avoids the diffusional mixing problem normally observed in liquid-to-liquid L2 waveguides. In the presented case, two types of dyes have been used as the gain material to demonstrate the concept of a liquid-air waveguide on a chip.
The first dye is Rhodamine-6G dissolved in ethanol and benzyl alcohol, while the second is Rhodamine-101 dissolved in a mixture of ethanol and benzyl alcohol, in a concentration range of 1 mM to 5 mM for both liquid-core solutions. The corresponding change in the refractive index of the fluid with dye concentration was measured for both dye solutions with a refractometer (Abbemat 500). The refractive index difference between core and cladding was selected between 10^-3 and 10^-2 by varying the concentration of R6G and Rh101 from 1% to 10%. From the measurements it is evident that dye solutions with different concentrations can act as two different liquid-core media with varying characteristics. For example, the refractive index of a 1 mM Rh-6G dye solution (n2 = 1.4030) in the mixed solvent (ethanol + benzyl alcohol) is higher than that of the cladding materials, i.e., air (n1 = 1) and PDMS (n3 = 1.40). The liquid-filled channel acts as the core in this case, wherein light propagates through the liquid-core waveguide by satisfying the condition of total internal reflection. This has been demonstrated through the resulting fluorescence emerging at the other end of the waveguide. The characteristics are found to be drastically different between the gain materials as they are confined to the liquid-air interface.
Fig. 5: Ray-tracing simulation using FRED for the two liquid waveguide structures, viewed from the top down. In both cases the core (liquid dye solution) is indicated by the lightly shaded region, embedded in the darker cladding region. (a) Multimode at the liquid-air interface, 110 µm width (waveguide II); (b) multimode at the liquid-PDMS interface (waveguide II); (c) multimode at the liquid-air interface, 14 µm width (waveguide II); (d) multimode at the liquid-PDMS interface (waveguide II); (e) mode field distribution at the liquid-air interface for waveguide I; (f) mode field distribution at the liquid-air interface for waveguide II.
Characterization
For any waveguide structure, there is a range of ray angles that fulfill the total internal reflection condition, based on the relative refractive index difference between the core and cladding regions. In this case, dye solutions with different concentrations act as the core medium and PDMS/air act as the cladding. The number of TIRs for light is inversely proportional to the diameter, or cross section, of the microchannel.
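The acceptance condition described above can be made concrete with a short numerical sketch. Using the refractive indices quoted in the text (dye core n ≈ 1.4030, PDMS n ≈ 1.40, air n = 1.0), the critical angle and numerical aperture at each interface follow from Snell's law, and a simple bounce-rate formula illustrates why the TIR count scales inversely with channel diameter. The values below are illustrative estimates derived here, not measurements from the paper.

```python
import math

def critical_angle_deg(n_core, n_clad):
    # TIR occurs for incidence angles (measured from the wall normal)
    # larger than arcsin(n_clad / n_core)
    return math.degrees(math.asin(n_clad / n_core))

def numerical_aperture(n_core, n_clad):
    # NA = sqrt(n_core^2 - n_clad^2) sets the acceptance cone of the guide
    return math.sqrt(n_core**2 - n_clad**2)

n_dye, n_pdms, n_air = 1.4030, 1.40, 1.0  # indices quoted in the text

theta_c_air = critical_angle_deg(n_dye, n_air)    # ~45.5 deg
theta_c_pdms = critical_angle_deg(n_dye, n_pdms)  # ~86.3 deg
na_air = numerical_aperture(n_dye, n_air)         # ~0.98
na_pdms = numerical_aperture(n_dye, n_pdms)       # ~0.09

# A meridional ray propagating at angle phi from the channel axis bounces
# once every d / tan(phi), so reflections per unit length scale as 1/d.
def reflections_per_mm(phi_deg, diameter_um):
    return math.tan(math.radians(phi_deg)) / (diameter_um * 1e-3)

print(theta_c_air, theta_c_pdms, na_air, na_pdms)
print(reflections_per_mm(5.0, 14), reflections_per_mm(5.0, 110))
```

The much larger NA at the liquid-air interface, compared with the liquid-PDMS interface, is consistent with the text's observation that air cladding admits a wider range of incident angles.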
A ray-tracing simulation platform (FRED) is used to understand the propagation of 532 nm fluorescence light through the dye-filled microstructure. The optical losses at the liquid-air interface and at the liquid-PDMS interface, for the multimode and single-mode microstructures respectively, are obtained as shown in Fig. 5. To illustrate this, Fig. 5 shows a ray-trace simulation of the liquid-core waveguides. A Gaussian beam from a coherent laser source is coupled into one end of the waveguide with the help of a 10X objective lens for both structures. The laser light is incident normally on the waveguide, and the dye solution is filled inside the microstructure. The simulation considers the liquid dye with a refractive index of 1.4030 as the core medium, embedded inside PDMS with a refractive index of 1.40, and air with a refractive index of 1 as the substrate. Outside the core, with the lower cladding being PDMS (1.40) and the upper cladding being air (1), a lower-index region is formed. The results obtained for the different cases show that light can be coupled inside the microstructure filled with a 1 mM concentrated dye solution, confirming its waveguide nature. It is also clear from this study that the optical losses at the liquid-air interface are comparatively lower than those at the liquid-PDMS interface, irrespective of the dimensions of the waveguide. However, the dimensions of the waveguide affect the number of total internal reflections per unit length. It is observed that the waveguide structure with the smaller diameter is more suitable to act as a liquid mode-guiding structure, leading to an increased probability of guiding more photons to the output end. These results confirm that laser light can propagate through the 2D liquid-core waveguide structure by satisfying the condition of total internal reflection at the interface between the liquid core and the PDMS/air cladding. From the above observations it becomes clear that many complications and challenges of propagating an index-guided mode can be easily overcome when air is used as the cladding material. In this experiment, we have filled the dye solution in a mix of ethanol and benzyl alcohol into the two microchannels (15 mm length each), with 110 µm and 14 µm widths respectively, on the PDMS chip.
The end-fire coupling method is used for the optical characterization of the developed liquid waveguide structures. The schematic of the characterization setup is shown in Fig. 6. Here, the light from an Nd:YAG laser is end-coupled into waveguide I and waveguide II using an objective lens; the assembly of optics is also shown in Fig. 6. The roughness of the PDMS walls of the 2D microchannels, for both waveguide I and waveguide II, was limited to approximately 1 µm owing to the quality of femtosecond-laser direct writing. To characterize the chip, a micro syringe was used to insert the liquid dyes into the microchannels as the core medium. The liquid dyes for the core medium were obtained by using ethanol + benzyl alcohol as the host solution with two different solutes, Rh-6G and Rh-101, to form two different dyes. The respective mixtures of these two solutes, in varying concentrations, act as liquid cores within the two microstructures.
Fig. 6: Characterization setup for liquid-core/air-cladding waveguiding.
As the absorption spectra of Rh-6G and Rh-101 lie in the visible wavelength range, we selected an Nd:YAG laser with 4 mW power and 7 ns pulse duration at a 10 Hz repetition rate as the pump source. This Nd:YAG laser is used to excite the fluorescent dye molecules dissolved in the liquid core. The source is aligned to a beam iris and a 10X objective lens. After the objective lens, the beam spot size is reduced to ~100 µm for the waveguide II structure and to 10 µm for the waveguide I structure. As the light and liquid are pumped simultaneously into the microchannel, due to the high refractive index difference between the liquid core and air, the fluorescence light is guided and captured at the other end of the microchannel. The outlet end is connected to an optical spectrometer. Fluorescence spectra are measured while changing the laser power and the concentration of the dyes.
Modal cross-sectional analysis for waveguide structures: Of the two structures shown in Figs. 3 and 7, the first, the multimode waveguide II structure, allows multimodal tuning of waveguides from the liquid core, while the other, waveguide I, supports the propagation of only a few modes. To separate the fluorescence signal from the excitation light, spectroscopic analysis is needed, and it is quite a difficult job to separate these two outputs at the output end of the channel. The intensity profiles of the fluorescent light generated and propagated through the developed liquid waveguide structures have been measured using the near-field intensity profile measurement setup shown in Fig. 6. The output profiles for both waveguide structures have been captured using a CCD equipped with a band-pass filter for the pump light (λ = 532 nm). The intensity at the output end of the liquid waveguide structure and the corresponding intensity profile are shown in Fig. 7. The profile measurements make it clear that the fabricated microstructures support index-guided modes of propagation and can be used as waveguide-like structures for various applications. The small spot size (~100 µm) of the input beam, relative to that of the liquid core (100 µm), helps in reducing the coupling losses of the pump light at the cross section of the microchannel. The increase in coupling and propagation losses is due to the increasing scattering and diffraction of visible light at the PDMS channel walls (i.e., the air/dye-solution/PDMS interfaces at the front and end faces) at normal incidence.
Fig. 6 (schematic): laser, mirrors M1 and M2, 10X objective lenses, beam pellicle, inlet and outlet on the PDMS chip, and an optical spectrum analyzer (OSA).
Fig. 7: Intensity distribution for light propagating through (a) waveguide I and (b) multimode waveguide II, liquid-core/air-cladding cross section.
Fig. 8: Comparative studies of emission spectra for the waveguide I structure, the waveguide II structure, and a cuvette for (a) Rh-6G and (b) Rh-101 dye solutions.
Results and discussion
In order to confirm the waveguide nature of the dye-filled 2D microstructures, we studied the fluorescence spectroscopy of a 3 mM dye (Rh-6G) solution as the liquid medium in three different configurations: (a) a quartz cuvette, (b) the waveguide II structure, and (c) the waveguide I structure. The fluorescence emission spectra were collected for the three configurations in order to obtain the effect of the microstructure dimensions on the emission output. It is observed that the emission spectral peak wavelength shifts by 15 nm between the microstructures and a cuvette filled with the same Rh-101 dye solution, pumped by the same Nd:YAG laser at 4 mW power, as shown in Fig. 8b. A similar shift is observed in the case of Rh-6G, as shown in Fig. 8a. The increase in output photon density confirms the coupling of the fluorescence inside the waveguide structure. It is also clear from the figure that the FWHM of the fluorescence spectra gets narrower from the cuvette to waveguide structure I. The spectral narrowing effect is attributed to the Fabry-Perot resonator formed by the dye-filled liquid waveguide and the solvent-air interfaces. This result confirms that the fluorescence light generated by the dye solutions is coupled through the microchannel and forms Fabry-Perot-type oscillations, leading to the conclusion that the 2D structure fabricated on the surface of PDMS functions as a liquid-core/air-cladding waveguide structure.
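As a rough orientation for the Fabry-Perot picture invoked above, the longitudinal mode spacing of a cavity of optical length nL can be estimated as follows. This is a back-of-the-envelope sketch assuming the full 15 mm channel, with n ≈ 1.403 and λ ≈ 624 nm, acts as the resonator; the numbers are derived here and are not from the paper:

```latex
\Delta\nu_{\mathrm{FSR}} = \frac{c}{2nL}, \qquad
\Delta\lambda_{\mathrm{FSR}} = \frac{\lambda^{2}}{2nL}
\approx \frac{(624\,\mathrm{nm})^{2}}{2 \times 1.403 \times 15\,\mathrm{mm}}
\approx 9 \times 10^{-3}\,\mathrm{nm}.
```

Under this assumption the individual cavity modes would be spaced far below the reported linewidths of a few nanometres, so any such modes would not be individually resolved in the measured spectra.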
In addition, comparison of these two waveguides and the quartz cuvette confirms that the dynamics of the fluorescence spectra also change: the intensity, lasing peak, and linewidth vary with the dimensions of each structure. The same behavior is observed for the Rh-101 dye solution. The FWHM of the fluorescence signal from the quartz cuvette is 48.8 nm, with a peak wavelength of 637.59 nm. In multimode waveguide II, the linewidth achieved for Rh-101 is 13.53 nm with a peak wavelength of 624.10 nm, while for waveguide structure I the linewidth is 6.94 nm and the peak wavelength is 623.75 nm. In the case of the Rh-6G dye solution, the FWHM for the cuvette is 42.89 nm with a peak wavelength of 580.90 nm; for multimode waveguide structure II it is 14.52 nm with a peak wavelength of 573 nm, and for waveguide structure I the linewidth reduces to 5.34 nm with the peak wavelength shifted to 573.70 nm.
[Figure: fluorescence emission at the waveguide cross-section under pumping, photon counts versus wavelength, and normalized intensity versus waveguide depth for (a) Rh-6G and (b) Rh-101 in the cuvette, multimode, and single-mode structures.]
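The linewidth reduction from cuvette to waveguide can be summarized as a narrowing factor computed from the FWHM values reported above (the script is only an illustrative sketch of that arithmetic):

```python
# FWHM values (nm) reported for the cuvette and the two waveguide structures.
fwhm = {
    "Rh-101": {"cuvette": 48.8, "waveguide II": 13.53, "waveguide I": 6.94},
    "Rh-6G":  {"cuvette": 42.89, "waveguide II": 14.52, "waveguide I": 5.34},
}

for dye, vals in fwhm.items():
    for wg in ("waveguide II", "waveguide I"):
        factor = vals["cuvette"] / vals[wg]  # spectral narrowing factor
        print(f"{dye}, {wg}: {factor:.1f}x narrower than cuvette")
```

For both dyes the narrowing is strongest in waveguide structure I (roughly 7-8x relative to the cuvette), consistent with the Fabry-Perot resonator interpretation.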
Through a comparative study, it has been observed that the peak wavelength in multimode waveguide structures I and II is blue-shifted relative to the cuvette output. In the present study, the output fluorescence spectrum of the quartz cuvette has a large bandwidth. Owing to the small dimensions of the microchannels, the data clearly indicate that the linewidth of waveguide structure II is smaller than that of the cuvette, and waveguide structure I has an even lower linewidth than structure II.
Effect of power for higher concentration regime
To characterize these FS-written microchannels as multimode waveguide microstructures I and II, we have studied the effect of pump power for the Rh-6G and Rh-101 dye solutions. With variation in pump power, significant tunability is observed in the fluorescence spectra. All measurements were performed at room temperature. Fig. 9 illustrates the measured emission spectra of Rh-6G at 10 mM in both liquid-core/air-clad waveguide structures I and II. We varied the input power in the range of 4-12 mW in both cases and observed that, for lower concentrations, the fluorescence peak wavelength changed insignificantly with incident laser power, but for 10 mM a peak wavelength shift is observed as the power is varied. A fluorescence peak emerges as the optical pumping power density is increased. Absorption of the incident laser beam changes the refractive index gradient of the dye solution by the order of 10^-3 to 10^-4 through an optically heated thermal lensing effect [11]. In addition, the incident high-power pulsed laser beam generates acoustic pressure waves inside the dye-filled liquid waveguide structure, which induce variations in the refractive index of the medium [11, 12]. In this way, the incident laser power plays a significant role in the shift of the fluorescence peak wavelength and output spectrum, as reflected in the experimental results shown in Fig. 9. In the low concentration regime, isolated dye molecules are present, but as the dye concentration increases, the spacing between dye molecules decreases and aggregates are formed. Thus, peak wavelength variation can be seen in the very high concentration regime. The other phenomenon contributing to the modified output spectra of the dye is self-absorption at higher concentrations. As molecular dimers are formed at high concentration, a second shift appears in the measured fluorescence spectra: a red shift is observed for the 10 mM dye concentration as the power is varied from 4 mW to 12 mW. From Fig. 9, the peak wavelength for multimode waveguide structure II with the Rh-6G solution is 579.8 nm at 4 mW pump power.
As the power increases to 6 mW, the peak wavelength shifts to 581.42 nm. At higher powers, the red-shifted peak wavelength reaches 583.25 nm. The same experiment has been repeated for the Rh-101 dye solution. We took a 10 mM solution and measured the fluorescence spectra for multimode waveguide structure II: at 2 mW, the peak wavelength is captured at 626.48 nm. The amount of light guided inside multimode waveguides I and II depends strongly on the refractive index difference between n_core and n_clad:
Δn = n_core - n_clad
The Rh-6G and Rh-101 are dissolved in a mixture of ethanol and benzyl alcohol as the host solution.
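The index contrast Δn also sets the numerical aperture of a liquid-core/air-clad guide. The sketch below is illustrative only: the paper gives no index values, so the core index assumed here for the dye in the ethanol/benzyl alcohol host is hypothetical.

```python
import math

# Assumed (not from the paper) effective indices:
n_core = 1.40   # hypothetical index of the dye/solvent core
n_clad = 1.00   # air cladding

delta_n = n_core - n_clad               # index contrast, Δn = n_core - n_clad
na = math.sqrt(n_core**2 - n_clad**2)   # numerical aperture of the liquid core

print(f"delta_n = {delta_n:.2f}, NA = {na:.2f}")
```

A larger Δn (liquid core against air) gives stronger confinement than typical solid-clad guides, consistent with the observed guiding through the microchannel.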
Fig. 9: Study of pump-power-based fluorescence emission spectra for (a) Rh-6G in waveguide structure I, (b) Rh-6G in multimode waveguide structure II, (c) Rh-101 in waveguide structure I, and (d) Rh-101 in waveguide structure II (A = 4 mW, B = 6 mW, C = 8 mW, D = 10 mW, E = 12 mW).
The experiment is repeated for waveguide structure I with the same solution. Light is coupled from the output end of the waveguide.
After being guided inside waveguide structure I, light is observed at the cross-section of the waveguide, and the graphs show that the peak wavelength and linewidth change for the same power and concentration. There is a slight change in peak wavelength, but the linewidth changes drastically in waveguide I compared to waveguide II. For waveguide structure I, a red shift in the fluorescence emission peak for both dyes is caused by varying the pump power from 4 mW to 12 mW in steps of 2 mW. The corresponding tunability achieved is in the range of 579.87-583.25 nm, and the average linewidth is 6.8 nm in the case of Rh-6G. For waveguide structure I with the Rh-101-based active solution, the tunability achieved is 7 nm and the observed average linewidth is 6 nm.
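The average power-tuning rate follows directly from the reported endpoints. A back-of-the-envelope sketch using the values for Rh-6G in waveguide structure I (579.87 nm at 4 mW to 583.25 nm at 12 mW):

```python
# Reported endpoints for Rh-6G in waveguide structure I:
p_low, p_high = 4.0, 12.0         # pump power, mW
wl_low, wl_high = 579.87, 583.25  # fluorescence peak wavelength, nm

shift_nm = wl_high - wl_low            # total red shift over the power range
rate = shift_nm / (p_high - p_low)     # average tuning rate, nm per mW

print(f"total shift = {shift_nm:.2f} nm, rate = {rate:.3f} nm/mW")
```

This corresponds to roughly 0.4 nm of red shift per milliwatt of pump power over the measured range.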
For the Rh-101 dye in waveguide structure I, the tunability achieved is in the range of 620.49-628.44 nm. In the case of waveguide structure II, a red shift in peak wavelength has been observed for Rh-6G, with a peak wavelength tunability of 4 nm and an average linewidth of 10 nm. In the case of Rh-101, a tunability of 6 nm is achieved. For multimode waveguide structure II with Rh-101, spectral tunability is achieved in the range of 626.48-632.50 nm and the average linewidth is 10 nm. For Rh-6G in the same multimode waveguide II, spectral tunability is achieved in the range of 579.87-583.25 nm and the average FWHM linewidth is 9.5 nm.
Effect of concentration
The tunability of the output band of the liquid-filled microstructures is mainly determined by the choice of dye solution and its solubility limit, from highly dilute systems of Rh-6G and Rh-101 upward. In the lower concentration regime (0.1 mM), the self-absorption component is quite significant, which decreases the signal intensity. In the higher concentration regime (10 mM), intermolecular self-quenching rapidly decreases the output intensity [11]. In particular, at high concentration the Rh-6G and Rh-101 molecules arrange themselves into H-type and J-type dimers [14, 15, 16].
This dimer formation changes the electronic structure, and as a result the output emission spectrum also changes. In this way, variation in the concentration of the liquid medium provides optical flexibility for the liquid waveguide structures. The experimentally observed spectral dependence of the liquid waveguide structures on the concentration of the Rh-6G and Rh-101 dye solutions at fixed pump power is shown in Fig. 10. A detailed analysis of the output spectra has been carried out for Rh-6G concentrations ranging from 1 mM to 4 mM and Rh-101 concentrations ranging from 1 mM to 5 mM. It was observed that the spectral position of the mode propagating through the liquid waveguide structure shifts toward longer wavelengths as the dye concentration increases. In the case of waveguide I filled with Rh-6G solution, the peak wavelength shifts from 573.16 nm to 580.67 nm as the concentration changes from 1 mM to 4 mM. Along with the peak wavelength, the average linewidth also shifts, from 5 nm to 6.01 nm over the same range. For Rh-101-filled waveguide I, a 5 nm shift in peak wavelength and a ±2 nm shift in linewidth are observed when the concentration changes from 1 mM to 5 mM. As Fig. 10 shows, the wavelength of the peak maximum is red-shifted with varying concentration. The same experiments were carried out in multimode waveguide structure II for both dye solutions. Similarly, in the spectral study for different concentrations in multimode structure II, an 8 nm red shift in peak wavelength and a 1.5 nm shift in linewidth are observed for Rh-6G, while a 5 nm red shift in peak wavelength with a ±2 nm linewidth shift is observed for Rh-101.
Here, the peaks occur at different wavelengths according to the changing concentration of the liquid medium. A red shift in the output spectra is observed when the concentration is increased from 1 mM to 4 mM. This apparent red shift in the emitted intensity signal is due to the small Stokes shift of Rh-6G and the large spectral overlap between absorption and emission [13, 14]. The same observations are made for the Rh-101 solution. The optimum optical absorption of the pump beam inside the dye-filled microchannel is achieved at a concentration of 1 mM.
Photo bleaching effect in microstructure
The rate of photo bleaching primarily depends on the type of dye, the host material, and their optical properties. Additionally, the illumination intensity of the source, the source wavelength, the exposure time, and the temperature also affect the extent of photo bleaching [16, 17].
Photo bleaching is not a desirable phenomenon for lab-on-chip optofluidic waveguides and optofluidic lasers: it disrupts the continuous output of a miniaturized device and limits its usage to short time periods. Here, we have studied the photo bleaching effect in waveguide structures I and II for both the Rh-6G and Rh-101 dye media. This study helps us to design and improve the functionalities of optofluidic chips. As a consequence of photo bleaching under long exposure of the liquid active medium to the pump intensity, the fluorophores lose the ability to emit fluorescence at the same intensity. The linewidth and intensity of the fluorescence output change significantly due to the photo bleaching effect in the liquid waveguide. Owing to diffusion dynamics in the presence of on-chip reservoirs, micro dye lasers do not require a supply of unbleached dye solution on a fast time scale.
In the studied case, the length of the microchannel is 15 mm and the width is 110 microns (W/L = 0.0073) for waveguide structure II, and W/L = 0.00093 for waveguide I. In both cases, light is coupled longitudinally in the slit area. The photo bleaching time for the waveguides can be extended to a few minutes without using any costly liquid-handling devices or solution replacement. Here, we have exploited the static operation of the liquid waveguides, without external fluid-handling systems such as syringe pumps. The experimentally observed fluorescence dynamics is in qualitative agreement with bleaching-diffusion dynamics [17, 18, 19]. In microsystems, photo bleaching creates unwanted intensity changes in the output.
The quantum yield of photo bleaching and the molar extinction coefficient are inherent properties of Rhodamine-6G and Rhodamine-101. For static measurements inside the microstructures, the most significant factors for photo bleaching inside waveguide structures filled with dilute solutions can be determined by applying Beer's law as [14]:
Aout = Ain exp(-ε I0 Qph te)
where Aout is the amount of emitting molecules remaining after photo bleaching, Ain is the original concentration of absorbed dye molecules, ε is the molar extinction coefficient, I0 is the incident light irradiance, Qph is the quantum yield of photo bleaching, and te is the exposure time.
Fig. 10: Studies of concentration-variation-based fluorescence emission spectra for waveguide structures I and II for Rh-6G and Rh-101: (a) Rh-6G-filled structure I, (b) Rh-6G-filled multimode structure II, (c) Rh-101-filled structure I, and (d) Rh-101-filled multimode structure II.
Fig. 11: Photobleaching studies for (a) Rh-6G in multimode structure II, (b) Rh-6G in structure I, (c) Rh-101 in multimode structure II, and (d) Rh-101 in waveguide structure I.
From the above equation, it is clear that the quantity of photo-bleached molecules inside the solution depends exponentially on the exposure time and pump intensity. Therefore, even a small increase in time or light intensity results in a substantial increase in the amount of photo bleaching.
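The exponential dependence in Beer's law above can be sketched numerically. The parameter values below are purely illustrative assumptions (the paper does not tabulate ε, I0, or Qph); the point is only the functional form:

```python
import math

def remaining_fraction(epsilon, i0, q_ph, t_e):
    """Surviving fraction of fluorophores after exposure,
    per the Beer's-law form Aout = Ain * exp(-epsilon * I0 * Qph * te)."""
    return math.exp(-epsilon * i0 * q_ph * t_e)

# Assumed, illustrative parameter values:
eps, i0, q_ph = 1e-2, 5.0, 1e-3  # extinction coeff., irradiance, bleaching yield

f1 = remaining_fraction(eps, i0, q_ph, t_e=100.0)
f2 = remaining_fraction(eps, i0, q_ph, t_e=200.0)
print(f1, f2)  # doubling the exposure time squares the surviving fraction
```

Because the exponent is linear in both te and I0, doubling either one squares the surviving fraction, which is why even modest increases in exposure time or pump intensity bleach the dye substantially faster.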
Our experimental results reveal that these optofluidic waveguides can be operated over a few minutes without needing a flow of fresh dye solution, as shown in Fig. 11. In the case of the Rh6G solution, the photobleaching time is observed to be 70 s for waveguide I and 180 s for multimode waveguide structure II, while in the case of Rh101, the photobleaching time is observed to be 90 s and 225 s for multimode waveguide structures I and II respectively. This experiment confirms that the decay time of Rh101 is slightly greater than that of Rh6G. This observed behavior is consistent with previous publications [17, 20]. The photobleaching time can be further improved by a factor of 3 to 4 by adding reservoirs on chip. Also, by converting the fabricated 2D structures into a 3D chip and using a different pumping scheme, the developed liquid waveguide structure can be used in established optofluidic devices with an output sufficient, and even more than is required, for lab-on-chip experiments.
Conclusion: In conclusion, we have demonstrated novel femtosecond-fabricated liquid-core/air-clad waveguide microstructures on a PDMS microchip. We have studied in detail the role of concentration, photobleaching and incident power on the output of the waveguides. This work gives a good understanding of the interaction of light and fluid at the micro scale. Tunability in the form of intensity, wavelength and linewidth has been successfully obtained. The characteristics of these waveguide sources can be easily controlled and modulated by adjusting the fluid properties of the core medium. After converting these 2D chips into 3D chips and adding some optical components, the liquid waveguide source can be made into a tunable optofluidic laser, a coherent light source that can be integrated with multifunctional lab-on-chip systems.
In this way, fluorescence measurement and detection by optofluidic devices can provide a powerful platform for the analysis of biological systems and aid significantly in medical diagnostics and chemical detection. This research gives a brief idea about the development and maintenance of highly functional lab-on-chip waveguides which can also be used outside the laboratory for many applications.
Acknowledgement: We acknowledge the support provided by CMTI Bangalore, India for the femtosecond micromachining fabrication facility.
References:
1. B. Helbo, A. Kristensen, and A. Menon, "A micro-cavity fluidic dye laser," J. Micromech. Microeng., 2003, 13(2), 307-311.
2. D. Psaltis, S. R. Quake, and C. Yang, "Developing optofluidic technology through the fusion of microfluidics and optics," Nature, 2006, 442(7101), 381-386.
3. Z. Li and D. Psaltis, "Optofluidic dye lasers," Microfluid. Nanofluidics, 2008, 4(1-2), 145-158.
4. Lin Pang, H. Matthew Chen et al., "Optofluidic devices and applications in photonics, sensing and imaging," Lab on a Chip, 2012, 12, 3543-3551.
5. D. A. Chang-Yen, R. K. Eich, and B. K. Gale, "A monolithic PDMS waveguide system fabricated using soft-lithography techniques," J. Lightwave Technol., 2005, 23(6), 2088-2093.
6. Prashanth Reddy Konari et al., "Experimental analysis of laser micromachining of microchannels in common microfluidic substrates," Micromachines, 2021, 12, 138.
7. Felix Sima, Koji Sugioka et al., "Three-dimensional femtosecond laser processing for lab-on-a-chip applications," Nanophotonics, 2018, 7(3), 613-634.
8. Y. Yan et al., "A tunable 3D optofluidic waveguide dye laser via two centrifugal Dean flow streams," Lab on a Chip, 2011, 11, 3182.
9. Stijn Vandewiele et al., "Single-mode air-clad liquid-core waveguides on a surface energy patterned substrate," Optics Letters, 2014, 39(16).
10. Peng Fei et al., "A compact optofluidic cytometer with integrated liquid-core/PDMS-cladding waveguides," Lab Chip, 2012, 12, 3700-3706.
11. S. K. Mishra et al., "Measurement of thermo-optical coefficient for commonly used dye solvents," International Journal of Photonics and Optical Technology, 2018, 4(2), 12-16.
12. Shane M. Eaton, Carmela De Marco, Rebeca Martinez-Vazquez, Roberta Ramponi, Stefano Turri, Giulio Cerullo, and Roberto Osellame, "Femtosecond laser microstructuring for polymeric lab-on-chips," Journal of Biophotonics, 2012, 5(8-9).
13. Penzkofer, W. Leupacher et al., "Fluorescence behaviour of highly concentrated rhodamine 6G solutions," Journal of Luminescence, 1987, 37, 61-72.
14. Florian M. Zehentbauer et al., "Fluorescence spectroscopy of Rhodamine 6G: concentration and solvent effects," Spectrochimica Acta Part A: Molecular and Biomolecular Spectroscopy, 2014, 121, 147-151.
15. K. Noack, J. Kiefer, A. Leipertz, et al., "Concentration dependent hydrogen bonding effects on the dimethyl sulfoxide vibrational structure in the presence of water, methanol and ethanol," ChemPhysChem, 2010, 11, 630-637.
16. V. J. Gavrilenko, M. A. Noginov, et al., "Ab initio study of optical properties of Rhodamine 6G molecular dimers," Journal of Chemical Physics, 2006, 124, 044301.
17. Morten Gersborg-Hansen et al., "Bleaching and diffusion dynamics in optofluidic dye lasers," Applied Physics Letters, 2007, 90, 143501.
18. Jerker Widengren et al., "Mechanisms of photobleaching investigated by fluorescence correlation spectroscopy," Bioimaging, 1996, 4, 149-15.
19. Mingyu Chapma et al., "Rhodamine 6G structural changes in water/ethanol mixed solvent," Journal of Fluorescence, 2018, 28, 1431-1437.
20. Julien Laverdant et al., "Experimental determination of the fluorescence quantum yield of semiconductor nanocrystals," Materials, 2011, 4, 1182-1193.
diff --git a/JdA0T4oBgHgl3EQfCf9p/vector_store/index.faiss b/JdA0T4oBgHgl3EQfCf9p/vector_store/index.faiss new file mode 100644 index 0000000000000000000000000000000000000000..de9c52ee4235fbce6ad4c2020a4048f4ccf06696 --- /dev/null +++ b/JdA0T4oBgHgl3EQfCf9p/vector_store/index.faiss @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d180a1f49a684197a37a2bc57788ffec5ee88820d39044028daed2f1140edad8 +size 6225965 diff --git a/JtFJT4oBgHgl3EQfwi0E/vector_store/index.faiss b/JtFJT4oBgHgl3EQfwi0E/vector_store/index.faiss new file mode 100644 index 0000000000000000000000000000000000000000..dca9faeef605f3c5fa3e8bb8891b2ad945b6e15f --- /dev/null +++ b/JtFJT4oBgHgl3EQfwi0E/vector_store/index.faiss @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5f4e0826d705b484005e7a0d313f0b13d4e2b062554806193aa4c011d28d8525 +size 5046317 diff --git a/KNA0T4oBgHgl3EQfCv9N/vector_store/index.faiss b/KNA0T4oBgHgl3EQfCv9N/vector_store/index.faiss new file mode 100644 index 0000000000000000000000000000000000000000..491cdcaf2c4c6f1340d6e50f0495c1c3385fdbce --- /dev/null +++ b/KNA0T4oBgHgl3EQfCv9N/vector_store/index.faiss @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:43d9046d19b452c65829b6fd922e9bcfe740dcc714a437a1695a1b8eee2a30e8 +size 7340077 diff --git a/L9E1T4oBgHgl3EQfHAM0/content/2301.02920v1.pdf b/L9E1T4oBgHgl3EQfHAM0/content/2301.02920v1.pdf new file mode 100644 index 0000000000000000000000000000000000000000..640767dff249094f1a0a0618fdb76a02f725b643 --- /dev/null +++ b/L9E1T4oBgHgl3EQfHAM0/content/2301.02920v1.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4397fc0f169b7d9fd6522e521963142e0319fba5f1a101814c4a98f2abe5a8bf +size 260261 diff --git a/L9E1T4oBgHgl3EQfHAM0/vector_store/index.faiss b/L9E1T4oBgHgl3EQfHAM0/vector_store/index.faiss new file mode 100644 index
0000000000000000000000000000000000000000..ad6b52dfa9b362c51eb2a229b77a4f052cf2e8fb --- /dev/null +++ b/L9E1T4oBgHgl3EQfHAM0/vector_store/index.faiss @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:946dc563b6f0d8ff283fd7346d44b5dfd98b76f52ca827f489674649054de826 +size 5832749 diff --git a/L9E1T4oBgHgl3EQfHAM0/vector_store/index.pkl b/L9E1T4oBgHgl3EQfHAM0/vector_store/index.pkl new file mode 100644 index 0000000000000000000000000000000000000000..942c937da2a8900532a1fbdc8eddc0cbc064fd96 --- /dev/null +++ b/L9E1T4oBgHgl3EQfHAM0/vector_store/index.pkl @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:609f918d85686261924ef9d817a49e728c7ef71963b523765a5ffa5eb84ef7a7 +size 186435 diff --git a/MNE1T4oBgHgl3EQfHAOD/content/tmp_files/2301.02921v1.pdf.txt b/MNE1T4oBgHgl3EQfHAOD/content/tmp_files/2301.02921v1.pdf.txt new file mode 100644 index 0000000000000000000000000000000000000000..4416afdf92f1aca88d0b10324e4092393145cbd1 --- /dev/null +++ b/MNE1T4oBgHgl3EQfHAOD/content/tmp_files/2301.02921v1.pdf.txt @@ -0,0 +1,1306 @@
arXiv:2301.02921v1 [math.AP] 7 Jan 2023

Non-local optimized Schwarz method with physical boundaries

X. Claeys¹
¹Sorbonne Université, Laboratoire Jacques-Louis Lions

Abstract
We extend the theoretical framework of non-local optimized Schwarz methods as introduced in [Claeys, 2021], considering a Helmholtz equation posed in a bounded cavity supplemented with a variety of conditions modeling material boundaries. The problem is reformulated equivalently as an equation posed on the skeleton of a non-overlapping partition of the computational domain, involving an operator of the form "identity + contraction". The analysis covers the possibility of resonance phenomena where the Helmholtz problem is not uniquely solvable.
In case of unique solvability, the skeleton formulation is proved coercive, and an explicit bound for the coercivity constant is provided in terms of the inf-sup constant of the primary Helmholtz boundary value problem.

Introduction
Large scale simulation of harmonic wave propagation phenomena remains a challenge, in the context of which one of the most effective substructuring domain decomposition methods (DDM) was introduced by Després [10]. Commonly referred to as the Optimized Schwarz Method (OSM), it consists of local solves of the wave equation, maintaining a coupling between subdomains through a reformulation of transmission conditions in terms of ingoing and outgoing Robin traces. The new transmission conditions involve an exchange operator that swaps traces from both sides of each interface between neighboring subdomains. This approach was put in a general theoretical framework in [9], and we point to [14] for an overview of this type of strategy.
In a discrete setting, the appropriate definition of the exchange operator raises issues at cross-points, where at least three degrees of freedom have to communicate, because it is then unclear what the discrete counterpart of swapping should be. Although several heuristics have been proposed in the literature for dealing with this situation [12, 13, 19, 11, 1], most strategies based on this local swapping operator experience deteriorated performance in the presence of cross-points.
In a series of articles [5, 6, 7, 8], we proposed a variant of OSM where the usual local swapping exchange operator is replaced by an alternative, a priori non-local, operator that naturally accommodates the presence of cross-points. This new approach can cope with arbitrary subdomain partitions, with a possibly very complicated wire basket. In [5], we analyzed this new approach at the continuous level considering a transmission problem posed on the full space
In [5], we analyzed this new +approach at the continuous level considering a transmission problem posed on the full space +1 + +Rd, and the formulation associated to this new DDM strategy was proved strongly coercive, +which paved the way to convergence estimates for linear solvers (e.g. Richardson, GMRes). +This novel approach was adapted to a finite element discretised setting and a full conver- +gence theory was developed in [8, 6]. In passing, this new theoretical framework covered the +case of the original Després algorithm hence offering a genuine generalization. The whole the- +ory was confirmed by numerical results both in 2D and 3D. While the previous developments +were concerned with scalar harmonic wave propagation, the case of Maxwell’s equations was +considered in [7, 20]. +In the present contribution we extend the theory of [5] in several directions. First of all, while +[5] considered only the case of a transmission problem posed on the whole of Rd, we consider +here the case of a cavity problem posed in a bounded domain Ω ⊂ Rd. This boundary value +problem takes the form +div(µ−1∇u) + κ2u = −f in Ω ++ boundary condition on ∂Ω. +(1) +Here again we reformulate it as an equation in terms of traces posed on the skeleton of the +subdomain partition, which we call skeleton formulation. While in previous contributions the +problem had been assumed uniquely solvable (see e.g. [8, §1] or [6, §1.2]), the analysis is +here extended so as to cover the case where (1) is not necessarily uniquely solvable which +covers the case of non-trivial resonance phenomenon. The skeleton formulation is then proved +uniquely solvable if and only if this holds for (1) and, if this condition is fulfilled, the skeleton +formulation is proved to be strongly coercive. Although coercivity was already established +in [5], we provide in addition an explicit estimate of the coercivity constant in terms of the +inf-sup condition of the primary variational formulation. 
Our whole analysis rests on an interpretation of the properties of (1) in terms of a pair of two closed linear manifolds: one that models transmission conditions, and another one that models local wave equations. Studying properties of operators by means of pairs of closed linear manifolds follows the spirit of [16, §IV.4 & IV.5].
Like [5], the present contribution is purely theoretical. It aims at laying solid analytical foundations for a better understanding of the spectral properties of the skeleton formulation, which is important in the perspective of devising both computationally efficient eigensolvers and domain decomposition preconditioners. We do not provide any numerical experiments. Such results shall be presented in a forthcoming contribution that will develop a discrete variant of the present analysis, in the spirit of [8, 6].
The outline of this article is as follows. In the first two sections we introduce general notation for both Hilbert space analysis and Sobolev spaces, including trace operators, Dirichlet-to-Neumann maps and harmonic liftings. Next we describe the problem under study, specifying precisely the assumptions underlying our analysis, which allows in particular to deal with a variety of boundary conditions; how to apply this framework to common boundary conditions is illustrated with examples. Further notation is introduced for dealing with multi-domain configurations. This leads in particular to a characterization of transmission conditions based on a non-local exchange operator (see Proposition 4.3), which had been an important innovation of [5]. We use this multi-domain formalism to re-express the boundary value problem under study. The kernel and the range of this operator are then re-interpreted in terms of a pair of closed linear manifolds. One manifold models wave equations local to each subdomain, and the other one models transmission conditions.
Wave equations local to each subdomain are then re-expressed by means of a so-called scattering operator, which we use to finally provide a formulation involving tuples of Robin traces on the skeleton of the subdomain partition. This skeleton formulation is proved to systematically admit a closed range, and its kernel is put in correspondence with the kernel of the original formulation. Finally we prove strong coercivity for the skeleton formulation and derive an estimate for the coercivity constant that is explicit with respect to the inf-sup constant of the original variational formulation.

1 General notation conventions

We first set a few general notation conventions regarding analysis in Banach spaces. All vector spaces that we are going to consider have C as scalar field. Assuming that H is a Banach space equipped with the norm ∥·∥_H, its topological dual, denoted H*, will systematically be equipped with the norm

∥ϕ∥_{H*} = sup_{v ∈ H\{0}} |ϕ(v)| / ∥v∥_H.    (2)

The canonical duality pairing will be systematically denoted ⟨·, ·⟩ : H* × H → C and defined by ⟨ϕ, v⟩ := ϕ(v). Although the space H does not appear explicitly in the notation "⟨ϕ, v⟩", when such pairing angle brackets are used, it shall be clear from the context which pair of spaces (H, H*) is under consideration. We emphasize that the duality pairings we consider do not involve any complex conjugation. We shall write ⟨v, ϕ⟩ = ⟨ϕ, v⟩ for all v ∈ H, ϕ ∈ H*, indifferently. For any subset X ⊂ H, we denote its polar set by

X° := {ϕ ∈ H*, ⟨ϕ, v⟩ = 0 ∀v ∈ X}.    (3)

Assuming that V is another Banach space equipped with the norm ∥·∥_V, and L : H → V is a bounded linear map, we shall refer to its inf-sup constant, denoted and defined as follows:

infsup_{H→V}(L) := inf_{u ∈ H\{0}} ∥L(u)∥_V / ∥u∥_H.    (4)

In the case where L is invertible, this inf-sup constant equals the inverse of the continuity modulus of L⁻¹. The inf-sup constant is well defined even if L is not invertible, though. The
The adjoint of the map L : H → V shall be defined as the unique bounded linear map L∗ : V∗ → H∗ satisfying
⟨L∗(p), u⟩ := ⟨p, L(u)⟩   (5)
for all p ∈ V∗ and all u ∈ H. Once again, we insist that no complex conjugation comes into play in (5). The bounded linear map L also induces another bounded linear map L̄ : H → V, defined by taking L̄(u) to be the complex conjugate of L(ū) for all u ∈ H, conjugation being meaningful in all the concrete function spaces considered below.
A bounded linear operator T : H → H∗ is called self-adjoint if T = T∗ and, in this case, we have ⟨T(u), u⟩ ∈ R for all u ∈ H. It is called positive definite if ⟨T(u), u⟩ ∈ (0, +∞) for all u ∈ H\{0}. If T is both self-adjoint and positive definite, the sesquilinear form u, v ↦ ⟨T(u), v⟩ induces a scalar product over H and the associated norm is denoted
∥u∥T := √⟨T(u), u⟩.   (6)
We shall also consider cartesian products H1 × · · · × HJ where each Hj is a Banach space equipped with the norm ∥ · ∥Hj. Then the cartesian product shall be equipped with the following canonical norm and duality pairing
∥v∥²H1×···×HJ := ∥v1∥²H1 + · · · + ∥vJ∥²HJ,   ⟨v, q⟩ := ⟨v1, q1⟩ + · · · + ⟨vJ, qJ⟩,   (7)
for v = (v1, . . . , vJ), vj ∈ Hj, and q = (q1, . . . , qJ), qj ∈ H∗j. If Vj, j = 1, . . . , J, is another collection of Banach spaces and Lj : Hj → Vj are bounded linear maps, we shall also consider the block-diagonal operator diag(L1, . . . , LJ), mapping H1 × · · · × HJ into V1 × · · · × VJ and defined, for v = (v1, . . . , vJ) and q = (q1, . . . , qJ), by
⟨q, diag(L1, . . . , LJ) v⟩ := ⟨q1, L1(v1)⟩ + · · · + ⟨qJ, LJ(vJ)⟩.
2 Single domain functional setting
Now we need to introduce classical function spaces. For any Lipschitz open set ω ⊂ Rd, we consider L2(ω) := {v : ω → C measurable, ∥v∥²L2(ω) := ∫ω |v(x)|² dx < +∞} and define the Sobolev space
H1(ω) := {v ∈ L2(ω), ∇v ∈ L2(ω)d},   ∥v∥²H1(ω) := ∥∇v∥²L2(ω) + γ−2∥v∥²L2(ω),   (8)
where γ > 0 is a real positive parameter. Incorporating the γ-dependency in the norm will allow us to establish γ-uniform estimates in the sequel.
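The emphasis that the pairing in (5) involves no complex conjugation can be made concrete in a finite-dimensional model (an added sketch; the spaces C^n and the random matrix L below are stand-ins, not objects from the paper): with the bilinear pairing ⟨p, u⟩ := Σ_k p_k u_k, the adjoint of a matrix is its plain transpose rather than its conjugate transpose.

```python
import numpy as np

# Finite-dimensional model of (5): H = V = C^n with the BILINEAR pairing
# <p, u> = sum_k p_k u_k (no complex conjugation anywhere).
rng = np.random.default_rng(0)
n = 5
L = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
p = rng.standard_normal(n) + 1j * rng.standard_normal(n)

def pair(q, v):
    """Bilinear duality pairing; note that np.dot does NOT conjugate."""
    return np.dot(q, v)

# Adjoint with respect to this pairing is the plain transpose:
#   <L^T p, u> = <p, L u>.
lhs = pair(L.T @ p, u)
rhs = pair(p, L @ u)
```

With the usual sesquilinear convention one would obtain the conjugate transpose instead; the bilinear convention is what the text relies on throughout.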
The space H1_0(ω) will refer to the closure of D(ω) := {ϕ ∈ C∞(Rd), supp(ϕ) ⊂ ω, supp(ϕ) bounded} for ∥ · ∥H1(ω).
Next we introduce the space of Dirichlet traces H1/2(∂ω) := {v|∂ω, v ∈ H1(Rd)} equipped with the quotient norm ∥v∥H1/2(∂ω) := min{∥ϕ∥H1(Rd), ϕ ∈ H1(Rd) and ϕ|∂ω = v}. The topological dual of H1/2(∂ω) will be denoted H−1/2(∂ω) = H1/2(∂ω)∗. As detailed for example in [17, Thm.3.38], the trace map gives rise to a bounded linear operator
Bω : H1(ω) → H1/2(∂ω),   Bω(v) := v|∂ω   ∀v ∈ D(Rd).   (9)
We underline that Bω refers to the trace taken from the interior of ω. The norm (8) gives rise to a natural right-inverse of this Dirichlet boundary trace operator. We define the harmonic lifting operator B†ω : H1/2(∂ω) → H1(ω), see [21, §1.2.2.4], through norm minimization
Bω · B†ω(v) = v ∀v ∈ H1/2(∂ω)  and  ∥B†ω(v)∥H1(ω) := min{∥φ∥H1(ω), Bω(φ) = v, φ ∈ H1(ω)}.   (10)
Denote H1(∆, ω) := {v ∈ H1(ω), ∆v ∈ L2(ω)} and let nω refer to the unit normal vector field to the boundary ∂ω directed toward the exterior of ω. The Dirichlet trace operator ϕ ↦ ϕ|∂ω, resp. the Neumann trace operator ϕ ↦ nω · ∇ϕ|∂ω, can be extended by density as a bounded linear map H1(ω) → H1/2(∂ω), resp. H1(∆, ω) → H−1/2(∂ω), see e.g. [17, Lem.4.3].
The Dirichlet-to-Neumann (DtN) map Tω : H1/2(∂ω) → H−1/2(∂ω) is defined as the unique bounded linear operator satisfying
Tω(φ|∂ω) := nω · ∇φ|∂ω   ∀φ ∈ H1(∆, ω) satisfying −∆φ + γ−2φ = 0 in ω.   (11)
This is a real-valued (T̄ω = Tω) and self-adjoint (T∗ω = Tω) operator, which induces a scalar product over H+1/2(∂ω), and the Neumann-to-Dirichlet map T−1ω : H−1/2(∂ω) → H+1/2(∂ω) induces a scalar product over H−1/2(∂ω). We set
∥v∥²Tω := ⟨Tω(v), v⟩,   ∥p∥²T−1ω := ⟨T−1ω(p), p⟩.   (12)
It is a well established fact (see e.g. [21, Def.1.41] or [23, §6.6.3]) that ∥ · ∥H1/2(∂ω) and ∥ · ∥H−1/2(∂ω) are equivalent to the norms (12).
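As an illustration of (9)–(12) (added here; the half-space is only a formal model case, not one of the bounded Lipschitz sets considered in the text), the DtN map can be computed explicitly for ω = {x ∈ Rd : xd > 0} by Fourier transform in the tangential variable ξ ∈ Rd−1, up to the chosen normalization of the Fourier transform:

```latex
% Half-space model: the harmonic lifting of v solves -\Delta u + \gamma^{-2} u = 0 in
% \{x_d > 0\}, which after tangential Fourier transform reads
\[
\widehat{B^{\dagger}_{\omega}(v)}(\xi, x_d)
= e^{-x_d \sqrt{|\xi|^{2} + \gamma^{-2}}}\, \hat{v}(\xi) .
\]
% Since n_\omega = -e_d on \partial\omega = \{x_d = 0\}, definition (11) gives the
% Fourier multiplier
\[
\widehat{T_{\omega}(v)}(\xi) = \sqrt{|\xi|^{2} + \gamma^{-2}}\; \hat{v}(\xi),
\qquad
\|v\|_{T_\omega}^{2}
= \int_{\mathbb{R}^{d-1}} \sqrt{|\xi|^{2} + \gamma^{-2}}\, |\hat{v}(\xi)|^{2}\, d\xi ,
\]
% which makes the positivity of T_\omega and the equivalence of \|\cdot\|_{T_\omega}
% with the H^{1/2} norm directly visible in this model case.
```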
Applying the Euler equation characterizing the +harmonic lifting B† +ω(v) as unique solution to the minimization (10), see e.g. [4, Thm.7.2-1], +we have −∆B† +ω(v) + γ−2B† +ω(v) = 0 in ω, so that Tω(v) = nω · ∇B†(v)|∂ω. We also deduce +that ∥φ|∂ω∥Tω = ∥B† +ω(φ|∂ω)∥H1(ω) ≤ ∥φ∥H1(ω) for all φ ∈ H1(ω) and, in particular, we have +the inequalities +∥B† +ω(v)∥H1(ω) = ∥v∥Tω +∀v ∈ H1/2(∂ω), +∥Bω(u)∥Tω ≤ ∥u∥H1(ω) +∀u ∈ H1(ω). +(13) +3 +Single domain variational formulation +The next step in our analysis will consist in writing Problem (1) in a variational form able to +cope with a variety of boundary conditions. This is why we treat the boundary condition by +means of an additional Lagrange parameter. Let Ω ⊂ Rd, Γ := ∂Ω refer to an open bounded +Lipschitz set and its boundary and denote +H(Ω × Γ) := H1(Ω) × H−1/2(Γ) +Our analysis will start from a variational formulation of (1), later referred to as the primary +formulation, that we write: find u ∈ H(Ω × Γ) such that +AΩ×Γ(u) = ℓΩ×Γ +(14) +where the bilinear map underlying the variational problem is written as a bounded linear +operator AΩ×Γ : H(Ω × Γ) → H(Ω × Γ)∗ assumed to systematically take the following form: +for any u, v ∈ H1(Ω) and p, q ∈ H−1/2(Γ), +Assumption: +⟨AΩ×Γ(u, p), (v, q)⟩ := ⟨AΩ(u), v⟩ + ⟨AΓ(u|Γ, p), (v|Γ, q)⟩ +(A1) +The map AΩ×Γ involves a volume part AΩ : H1(Ω) → H1(Ω)∗ that accounts for the Helmholtz +equation in the interior of the domain Ω. For µ ∈ C and κ : Ω → C an essentially bounded +5 + +measurable function, it is assumed of the following form +Assumptions: +⟨AΩ(u), v⟩ := +� +Ω µ−1∇u · ∇v − κ2uv dx, +with ℑm{κ(x)2} ≥ 0, ∀x ∈ Ω +supx∈Ω|κ(x)| < ∞ +ℜe{µ} > 0, ℑm{µ} ≥ 0. +(A2) +The assumptions above imply in particular that ℑm{⟨AΩ(u), u⟩} ≤ 0 ∀u ∈ H1(Ω). +The +operator AΩ×Γ also involves a pure boundary part AΓ that models boundary conditions, +AΓ : Hb(Γ) → Hb(Γ)∗ +where Hb(Γ) := H1/2(Γ) × H−1/2(Γ). 
+(15) +The boundary operator AΓ involves traces on Γ and is chosen in accordance with the boundary +conditions of our primary boundary value problem (1). We will need to rely on the following +additional assumptions +Assumptions: +i) ℑm{⟨AΓ(u), u⟩} ≤ 0 +∀u ∈ Hb(Γ) +ii) range(AΩ×Γ) is closed in H(Ω × Γ)∗. +(A3) +In the remaining of this contribution we will almost systematically take (A1)-(A2)-(A3) as +assumptions. We do not require that AΩ×Γ = A∗ +Ω×Γ. Let us underline that the assumptions +above are fulfilled by AΩ, AΓ, AΩ×Γ if and only if they are fulfilled by A∗ +Ω, A∗ +Γ, A∗ +Ω×Γ (recall +that adjunction does not involve any complex conjugation here). The last hypothesis in (A3) +implies (see e.g. [2, Thm.2.19]) +range(AΩ×Γ) = ker(A∗ +Ω×Γ)◦. +(16) +hence codim(range(AΩ×Γ)) = dim(ker(A∗ +Ω×Γ)). The source functional in (14) is assumed to +take the similar form ⟨ℓΩ×Γ, (v, q)⟩ := ⟨ℓΩ, v⟩+⟨ℓΓ, (v|Γ, q)⟩ where ⟨ℓΩ, v⟩ := +� +Ω fv dx for some +f ∈ L2(Ω) and ℓΓ ∈ Hb(Γ)∗ = H−1/2(Γ)×H+1/2(Γ) is chosen in accordance with the boundary +condition. +Now we consider concrete boundary conditions, exhibit corresponding appropriate choices of +AΓ and point how these situations fit the previous assumptions (A1)-(A2)-(A3). Here and in +the following, for the sake of conciseness, we shall take the notational convention (see (11)), +TΓ := TRd\Ω. +Example 3.1 (Dirichlet boundary condition). In the case of a Dirichlet boundary condi- +tion, we set AΓ(α, p) := (p, α) and ℓΓ := (0, g) for some g ∈ H1/2(Γ). We have ℑm{⟨AΓ(u), u⟩} = +0 for all u, which fits i) of (A3). Formulation (14) reduces to a variational formulation of a +Helmholtz problem with a Dirichlet condition imposed by means of a Lagrange parameter at +the boundary +u ∈ H1(Ω), p ∈ H−1/2(Γ) such that +� +Ω µ−1∇u · ∇v − κ2uv dx + +� +Γ pv dσ = +� +Ω fvdx +∀v ∈ H1(Ω), +� +Γ uq dσ = +� +Γ gq dσ +∀q ∈ H−1/2(Γ). +6 + +Whenever there is existence and uniqueness of the solution pair (u, p) then p = −nΩ · ∇u|Γ. 
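The identification p = −nΩ · ∇u|Γ can be checked by Green's formula (an added sketch, assuming u regular enough and taking µ ≡ 1 for simplicity):

```latex
% With \mu \equiv 1, the first equation of the system reads
%   \int_\Omega \nabla u\cdot\nabla v - \kappa^2 u v \,dx + \int_\Gamma p\,v\,d\sigma
%   = \int_\Omega f v\,dx , and Green's formula gives
\[
\int_{\Omega} \nabla u \cdot \nabla v - \kappa^{2} u v \, dx
= \int_{\Omega} \bigl(-\Delta u - \kappa^{2} u\bigr)\, v \, dx
+ \int_{\Gamma} \bigl(n_{\Omega} \cdot \nabla u\bigr)\, v \, d\sigma .
\]
% Testing with v \in H_0^1(\Omega) yields -\Delta u - \kappa^2 u = f in \Omega, and what
% then remains of the variational equation is
\[
\int_{\Gamma} \bigl( p + n_{\Omega} \cdot \nabla u \bigr)\, v \, d\sigma = 0
\qquad \forall v \in H^{1}(\Omega),
\]
% so that p = - n_{\Omega} \cdot \nabla u |_{\Gamma} by surjectivity of the trace map.
```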
+Conditions in (A2) guarantee that the volume part of this equation is coercive modulo the +compact term attached to κ. Hence the operator associated to this system is of Fredholm type +with index 0. In particular it has closed range, which fits ii) of (A3). +Example 3.2 (Neumann boundary condition). In the case of Neumann conditions, the +boundary data is g ∈ H−1/2(Γ) and we choose AΓ(α, p) := (0, T−1 +Γ p) and ℓΓ := (g, 0). Again +we have ℑm{⟨AΓ(u), u⟩} = 0 for all u, so this choice also matches i) of (A3). The primary +formulation (14) writes +u ∈ H1(Ω), p ∈ H−1/2(Γ) such that +� +Ω µ−1∇u · ∇v − κ2uv dx = +� +Ω fvdx + +� +Γ gvdσ +∀v ∈ H1(Ω), +� +Γ qT−1 +Γ p dσ = 0 +∀q ∈ H−1/2(Γ), +(17) +where u is decoupled from p. Actually we have in particular p = 0 and this variable is not +supposed to receive any particular interpretation. +Since T−1 +Γ +: H−1/2(Γ) → H1/2(Γ) is an +isomorphism, the operator AΩ×Γ associated to (17) is of Fredholm type with index 0. +Example 3.3 (Robin boundary condition). Consider a bounded linear map Λ : H1/2(Γ) → +H−1/2(Γ) that satisfies ℜe{⟨Λ(v), v⟩} > 0 ∀v ∈ H1/2(Γ)\{0} (as a typical example: Λ(v) = λv +with λ > 0). In this case again the boundary data is g ∈ H−1/2(Γ) and we choose AΓ(α, p) := +(−iΛα, T−1 +Γ p) and ℓΓ := (g, 0). +This choice of AΓ corresponds to the boundary condition +nΩ · ∇u|Γ − iΛ(u) = 0 on Γ. Formulation (14) writes +u ∈ H1(Ω), p ∈ H−1/2(Γ) such that +� +Ω µ−1∇u · ∇v − κ2uv dx − i +� +Γ vΛ(u)dσ = +� +Ω fvdx + +� +Γ gvdσ +∀v ∈ H1(Ω) +� +Γ qT−1 +Γ p dσ = 0 +∀q ∈ H−1/2(Γ) +which is a variant of (17) involving i +� +Γ vΛ(u)dσ as an additional term. Again p is decoupled +from the rest of the system and p = 0. Again the operator AΩ×Γ associated to this system is +of Fredholm type with index 0. +4 +Multi-domain functional setting +The boundary value problem (1) has been reformulated as an equivalent global variational +problem with (14). 
As we aim at extending an analytical framework for domain decomposition +by substructuration though, we are going to reshape Formulation (14), adapting it to a multi- +domain geometrical configuration. For this, we need to introduce notations adapted to domain +decomposition. Consider a decomposition into a collection of non-overlapping Lipschitz open +sets Ωj ⊂ Rd, j = 1, . . . , J that satisfy +Ω = Ω1 ∪ · · · ∪ ΩJ, +with Ωj ∩ Ωk = ∅ for j ̸= k. +(18) +Such a decomposition may very well admit a non-trivial wire-basket i.e. +the set of cross +points is non-empty, and we wish to underline that this situation is covered by the subsequent +analysis. We shall refer to the skeleton of the decomposition by +Σ := ∂Ω1 ∪ · · · ∪ ∂ΩJ. +(19) +7 + +Note that Γ = ∂Ω ⊂ Σ. We need to introduce notations for function spaces adapted to this +multi-domain setting. In this context, cartesian product spaces are probably the most natural, +so we set +Hb(Γ) := H +1 +2 (Γ) × H− 1 +2(Γ) +H(Ω) := Hb(Γ) × H1(Ω1) × · · · × H1(ΩJ) +H(Σ) := H +1 +2 (Γ) × H +1 +2(∂Ω1) × · · · × H +1 +2(∂ΩJ) +(20) +As cartesian products, these spaces are equipped with norms and duality pairings given by +(7). Apart from the boundary terms attached to Hb(Γ), the space H(Ω) should be understood +as functions defined over Ω, admitting potential jumps through interfaces. The space H(Σ) +consists in tuples of Dirichlet traces. Its dual is +H(Σ)∗ = H− 1 +2 (Γ) × H− 1 +2(∂Ω1) × · · · × H− 1 +2(∂ΩJ). +We need to introduce several operators acting in these spaces. First we shall consider the +operator T : H(Σ) → H(Σ)∗ defined as the block diagonal operator acting locally in each +subdomain +T := diag(TΓ, TΩ1, . . . , TΩJ) +where TΓ := TR\Ω +(21) +and each TΩj is defined with (11). The norms ∥ · ∥T and ∥ · ∥T−1 defined by (6) and (21) are +equivalent to ∥ · ∥H(Σ) and ∥ · ∥H(Σ)∗, which stems from the analogous property being satisfied +locally by each TΩj. These norms will play an important role in the subsequent analysis. 
Next we introduce a boundary trace operator B : H(Ω) → H(Σ) defined by
B := diag(BΓ, BΩ1, . . . , BΩJ)   where BΓ(α, p) := α   (22)
and each BΩj is the Dirichlet trace operator interior to the subdomain Ωj as defined in (9). By definition of T we have ∥B(u)∥T ≤ ∥u∥H(Ω) for all u ∈ H(Ω), since a similar inequality is satisfied in each subdomain locally according to (13). We can also form a multi-domain harmonic lifting map B† : H(Σ) → H(Ω) defined as the block-diagonal operator
B† = diag(B†Γ, B†Ω1, . . . , B†ΩJ)   where B†Γ(α) := (α, 0)   (23)
and each B†Ωj as defined in (10). With this definition we have BB† = Id and B†B is an orthogonal projector in H(Ω). Finally we also need to consider a restriction operator R : H(Ω×Γ) → H(Ω) that embeds pairs (u, p) ∈ H(Ω×Γ) = H1(Ω)×H−1/2(Γ) into the cartesian product H(Ω) by restricting locally to each subdomain
R(u, p) := ((u|Γ, p), u|Ω1, . . . , u|ΩJ)   for u ∈ H1(Ω), p ∈ H−1/2(Γ).   (24)
The image of this operator, range(R) = R(H(Ω×Γ)), is a particular subspace of H(Ω) spanned by tuples of functions that match through interfaces. This matching property is precisely what characterizes Dirichlet transmission conditions through interfaces of the decomposition (18). This is why we dedicate notations to it.
X(Ω) := {R(u, p), u ∈ H1(Ω), p ∈ H−1/2(Γ)}
X(Σ) := {B(u), u ∈ X(Ω)}
X(Σ)◦ := {p ∈ H(Σ)∗, ⟨p, v⟩ = 0 ∀v ∈ X(Σ)}.   (25)
A rapid inspection of the previous definitions shows that X(Σ) = {(u|Γ, u|∂Ω1, . . . , u|∂ΩJ), u ∈ H1(Ω)}, i.e. these are the tuples of Dirichlet traces that match through interfaces. The space X(Σ) (resp. X(Ω)) is a closed subspace of H(Σ) (resp. H(Ω)) that encodes the Dirichlet transmission conditions through interfaces, while X(Σ)◦ is a closed subspace of H(Σ)∗ that encodes the Neumann transmission conditions. Indeed, considering restriction to interfaces in the sense of distributions,
(v0, . . . , vJ) ∈ X(Σ) =⇒ vj = vk on Γj ∩ Γk,
(p0, . . .
, pJ) ∈ X(Σ)◦ =⇒ pj = −pk on Γj ∩ Γk. +(26) +It is clear from these definitions that X(Ω) = {u ∈ H(Ω), B(u) ∈ X(Σ)}. +In particular +ker(B) ⊂ X(Ω). Recall the definition of polar sets given by (3). The following lemma is a +continuous counterpart to [6, Lem.2.1]. +Lemma 4.1. +i) ker(B)◦ = range(B∗) +ii) ker(B∗) = {0} +iii) X(Ω) = B−1(X(Σ)) +iv) X(Ω)◦ = B∗(X(Σ)◦) +Proof: +The first and second results are direct consequences of the surjectivity of the trace map +B : H(Ω) → H(Σ) combined with Theorem 4.7, 4.12 and 4.15 of [22]. The third result is a +rephrasing of X(Ω) = {u ∈ H(Ω), B(u) ∈ X(Σ)} in condensed form. To prove the last result, +first observe that B∗(X(Σ)◦) ⊂ X(Ω)◦ by routine verifications. +Now pick an arbitrary p ∈ X(Ω)◦. Since ker(B) ⊂ X(Ω) ⇒ X(Ω)◦ ⊂ ker(B)◦ = range(B∗), +there exists q ∈ H(Σ)∗ such that p = B∗q. For any v ∈ X(Σ), there exists u ∈ X(Ω) such +that v = B(u), which implies that ⟨q, v⟩ = ⟨p, u⟩ = 0. From this we conclude that q ∈ X(Σ)◦ +hence p ∈ B∗(X(Σ)◦), which proves X(Ω)◦ ⊂ B∗(X(Σ)◦). +□ +In Item iii) of the lemma above, B−1(X(Σ)) = {u ∈ H(Ω), B(u) ∈ X(Σ)} refers to a pre-image +(the operator B is obviously non-invertible i.e. +ker(B) ̸= {0}). +The following orthogonal +decomposition was established in [17, Prop.4.2]. +Proposition 4.2. +We have H(Σ)∗ = X(Σ)◦ ⊕ T(X(Σ)) and this decomposition is T−1-orthogonal. +The orthogonal decomposition of the previous result can be used to elaborate a characteriza- +tion of transmission conditions. The following result was established in [17, Prop.5.4]. +9 + +Proposition 4.3. +Let Q : H(Σ)∗ → H(Σ)∗ refer to the T−1-orthogonal projection onto T(X(Σ)). +Then the +operator Π := 2Q − Id is a T−1-isometric involution i.e. Π2 = Id, ∥Π(q)∥T−1 = ∥q∥T−1 for +all q ∈ H(Σ)∗. Moreover, for any pair (u, p) ∈ H(Σ) × H(Σ)∗, we have +(u, p) ∈ X(Σ) × X(Σ)◦ +⇐⇒ +−p + iT(u) = Π(p + iT(u)). 
(27)
The characterization above relies on an exchange operator Π which is characteristic of Optimized Schwarz Methods (OSM, see e.g. [1, Eq.37]) and ultra-weak variational formulations (UWVF, see e.g. [3, Eq.1.19]). An explicit expression of this operator in terms of double layer potentials attached to the operator −∆ + γ−2 was provided in [5, §5.2].
5 Multi-domain variational formulation
Using the notations introduced in the previous sections, we now rewrite the primary formulation (14), decomposing it according to the subdomain partition (18). Pick u, v arbitrarily in H1(Ω) and expand the integral coming into play in the definition (A2) of AΩ. This leads to
⟨AΩu, v⟩ = ⟨AΩ1(u|Ω1), v|Ω1⟩ + · · · + ⟨AΩJ(u|ΩJ), v|ΩJ⟩
with ⟨AΩju, v⟩ := ∫Ωj µ−1∇u · ∇v − κ²uv dx.   (28)
In the expression above only u|Ωj, v|Ωj ∈ H1(Ωj) come into play in the term attached to Ωj. The source term in (14) can be decomposed in a similar manner, ℓΩ(v) = ℓΩ1(v|Ω1) + · · · + ℓΩJ(v|ΩJ). The above decompositions lead to introducing a block-diagonal operator A : H(Ω) → H(Ω)∗ associated to these local bilinear forms, i.e. defined by
A := diag(AΓ, AΩ1, . . . , AΩJ)   so that AΩ×Γ = R∗AR.   (29)
We have factorized the operator of our primary boundary value problem AΩ×Γ, and this factorization is interesting from the perspective of domain decomposition because local subproblems are disconnected from one another in A. The following property is inherited from the assumptions we made in §3 about AΩ×Γ, µ, κ and AΓ,
ℑm{⟨A(u), u⟩} ≤ 0   ∀u ∈ H(Ω).   (30)
We also need a unique solvability property for local problems with impedance boundary condition. Because we do not make many specific assumptions regarding the boundary operator AΓ, we take this further property as an assumption:
Assumption:
A − iB∗TB : H(Ω) → H(Ω)∗ is an isomorphism.   (A4)
A notable consequence of (A2), (A3) and (A4) is that ker(A) ∩ ker(B) = {0}.
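Returning to the exchange operator of Proposition 4.3: its algebra can be verified in a finite-dimensional model (an added sketch; R^n, the SPD matrix T and the random subspace X below are hypothetical stand-ins for H(Σ), the DtN operator and X(Σ)):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 8, 3                        # dim of the model trace space and of X
M = rng.standard_normal((n, n))
T = M @ M.T + n * np.eye(n)        # SPD stand-in for the DtN operator T
X = rng.standard_normal((n, k))    # columns span the "Dirichlet trace" space X

Tinv = np.linalg.inv(T)
# T^{-1}-orthogonal projection Q onto T(X): with A = T X, the T^{-1}-Gram
# matrix is A^T T^{-1} A = X^T T X, hence Q = T X (X^T T X)^{-1} X^T.
Q = T @ X @ np.linalg.inv(X.T @ T @ X) @ X.T
Pi = 2 * Q - np.eye(n)

# Pi is an involution ...
involution_err = np.linalg.norm(Pi @ Pi - np.eye(n))
# ... and a T^{-1}-isometry: (Pi q)^T T^{-1} (Pi q) = q^T T^{-1} q.
q = rng.standard_normal(n)
isometry_err = abs((Pi @ q) @ Tinv @ (Pi @ q) - q @ Tinv @ q)

# Characterization (27): for u in X and p in the polar set X° (X^T p = 0),
#   -p + i T u = Pi (p + i T u),  since Pi(T u) = T u and Pi(p) = -p.
u = X @ rng.standard_normal(k)                                     # u in X
p = (np.eye(n) - X @ np.linalg.pinv(X)) @ rng.standard_normal(n)   # X^T p = 0
char_err = np.linalg.norm((-p + 1j * T @ u) - Pi @ (p + 1j * T @ u))
```

The projector formula simplifies because T cancels against T^{-1} in the Gram matrix; the same cancellation is what makes Q computable without ever applying T^{-1} in this model.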
Since A, T and +B are subdomain-wise block-diagonal, the assumption above is actually equivalent to imposing +that each AΩj − iB∗ +ΩjTΩjBΩj : H(Ωj) → H(Ωj)∗ and AΓ − iB∗ +ΓTΓBΓ : Hb(Γ) → Hb(Γ)∗ are +10 + +isomorphisms. +These conditions are fulfilled in many concrete circumstances. +As regards +interior contributions, for example, we have the following simple consequence of the unique +continuation principle. +Lemma 5.1. +Assume (A1)-(A2) and that µ, κ are constants (i.e. +do not depend on x). +Then for any +j = 1, . . . , J the operator AΩj − iB∗ +ΩjTΩjBΩj : H(Ωj) → H(Ωj)∗ is an isomorphism. +Proof: +Let us denote ω = Ωj for the sake of conciseness. According to (A2), there exists α > 0 +such that +α∥u∥2 +H1(ω) ≤ ℜe{⟨˜Aω(u), u⟩} +∀u ∈ H1(ω), +⟨˜Aω(u), v⟩ := ⟨(Aω − iB∗ +ωTωBω)u, v⟩ + +� +ω(1 + κ2)uvdx. +Applying Lax-Milgram’s lemma, we see that the operator ˜Aω : H(ω) → H(ω)∗ is an isomor- +phism hence, since it differs by a compact perturbation, that Aω−iB∗ +ωTωBω is of Fredholm type +with index 0, see e.g. [17, Chap.2]. There only remains to prove that ker(Aω − iB∗ +ωTωBω) = +{0}. Pick any u ∈ H1(ω) such that (Aω − iB∗ +ωTωBω)u = 0. Then we have +∥Bω(u)∥2 +Tω ≤ −ℑm{⟨(Aω − iB∗ +ωTωBω)u, u⟩} = 0. +From this we conclude that u|∂ω = Bω(u) = 0 hence Aω(u) = 0. On the other hand Aω(u) = +0 ⇒ nω · ∇u|∂ω = 0. There only remains to apply the unique continuation principle, see e.g. +Lemma 2.2 in [24], to conclude that u = 0 in ω. +□ +Regarding classical boundary conditions and the associated choice of AΓ, we can also examine +the invertibility of AΓ − iB∗ +ΓTΓBΓ. +Example 5.2 (Dirichlet condition). Taking the same notations as in Example 3.1, in this +situation we have the following expression (AΓ−iB∗ +ΓTΓBΓ)(α, p) = (p−iTΓα, α). We conclude +that AΓ − iB∗ +ΓTΓBΓ is continuously invertible with +(AΓ − iB∗ +ΓTΓBΓ)−1(p, α) = (α, p + iTΓα). +Example 5.3 (Neumann condition). Taking the same notations as in Example 3.2, we have +(AΓ − iB∗ +ΓTΓBΓ)(α, p) = (−iTΓα, T−1 +Γ p). 
We conclude that AΓ − iB∗ΓTΓBΓ is continuously invertible with
(AΓ − iB∗ΓTΓBΓ)−1(p, α) = (iT−1Γ p, TΓα).
Example 5.4 (Robin condition). Taking the same notations as in Example 3.3, we have (AΓ − iB∗ΓTΓBΓ)(α, p) = (−i(Λ + TΓ)α, T−1Γ p). Because ℜe{⟨Λ(α), α⟩} > 0 for all α ∈ H1/2(Γ), we see that Λ + TΓ is coercive hence invertible, and AΓ − iB∗ΓTΓBΓ is then continuously invertible with
(AΓ − iB∗ΓTΓBΓ)−1(p, α) = (i(Λ + TΓ)−1p, TΓα).
Similarly to what precedes, define ℓ ∈ H(Ω)∗ by ⟨ℓ, v⟩ = ℓΓ(v0, q) + ℓΩ1(v1) + · · · + ℓΩJ(vJ), and we have ℓΩ×Γ = R∗ℓ. The primary variational problem (14) can then be rewritten by means of A as follows: find u ∈ H(Ω × Γ) such that ⟨AR(u), R(v)⟩ = ⟨ℓ, R(v)⟩ for all v ∈ H(Ω × Γ). Making use of the definition of X(Ω) as the image of R, see (25), this also rewrites
u ∈ X(Ω) and ⟨A(u), v⟩ = ⟨ℓ, v⟩ ∀v ∈ X(Ω).   (31)
6 Closed linear manifolds interpretation
Formulation (14), which is the starting point of this study, is not assumed to be a priori uniquely solvable. The kernel of AΩ×Γ might be non-trivial. In many relevant applications though, it is of Fredholm type, and this is why we are interested in studying how this Fredholmness carries over to the multi-domain context. For this we are going to consider the skew-symmetric bilinear form [·, ·] : ( H(Σ) × H(Σ)∗)² → C defined by
[(u, p), (v, q)] := ⟨u, q⟩ − ⟨v, p⟩,   u, v ∈ H(Σ), p, q ∈ H(Σ)∗.   (32)
This form is obviously non-degenerate and can be used as a duality pairing over the space of tuples of Dirichlet-Neumann pairs of traces. Indeed denote
H (Σ) := H(Σ) × H(Σ)∗   with norm ∥(v, q)∥²T×T−1 := ∥v∥²T + ∥q∥²T−1;
then for any ϕ ∈ H (Σ)∗, there exists a unique u ∈ H (Σ) such that [u, v] = ϕ(v) ∀v ∈ H (Σ). In other words, the pairing (32) puts H (Σ) in self-duality.
We now introduce the subspace +of so-called Cauchy data that directly relates to the boundary value problem under study, +C (A) := {(B(u), p) | (u, p) ∈ H(Ω) × H(Σ)∗, Au = B∗p} +(33) +It must be understood as the space of tuples of Dirichlet-Neumann trace pairs stemming from +solutions to the problems local to each subdomain. If A : H(Ω) → H(Ω)∗ is an isomorphism, +we can define the associated Neumann-to-Dirichlet operator NtDA := BA−1B∗ and then +C (A) := {(NtDA(p), p) | p ∈ H(Σ)∗} appears to be the graph of it. On the other hand C (A) +is properly defined even if A fails to be invertible. +Lemma 6.1. +Assume (A1)-(A2)-(A3)-(A4). The application (v, p) → p − iT(v) continuously and isomor- +phically maps C (A) into H(Σ)∗ and, for all (v, p) ∈ C (A), satisfies the estimates +∥v∥2 +T + ∥p∥2 +T−1 ≤ ∥p − iTv∥2 +T−1 +1 +2∥p − iTv∥2 +T−1 ≤ ∥v∥2 +T + ∥p∥2 +T−1. +Proof: +It suffices to prove surjectivity and the estimates. To prove surjectivity, pick an arbitrary +q ∈ H(Σ)∗ and define u = (A − iB∗TB)−1B∗q. The pair (v, p) = (B(u), q + iTB(u)) satisfies +Au = B∗p so that (v, p) ∈ C (A) and, by construction, we have p − iTv = q. +To prove the estimates, pick an arbitrary pair (v, p) ∈ C (A). According to (33) there exists +u ∈ H(Ω) such that B(u) = v and A(u) = B∗(p), hence ⟨p, v⟩ = ⟨p, B(u)⟩ = ⟨B∗(p), u⟩ = +⟨A(u), u⟩. Taking account of (30), we deduce 0 ≤ ℜe{i⟨p, v⟩} ≤ ∥v∥2 +T +∥p∥2 +T−1 and conclude +0 ≤ ∥p − iTv∥2 +T−1 − (∥v∥2 +T + ∥p∥2 +T−1) ≤ ∥v∥2 +T + ∥p∥2 +T−1. +□ +In the previous lemma, the space of Cauchy data has been proven boundedly isomorphic to a +Hilbert space and, as such, is closed. +12 + +Corollary 6.2. +Assume (A1)-(A2)-(A3)-(A4). The subspace C (A) is closed in H (Σ). +The space of Cauchy data can be complemented in various ways. The next proposition exhibits +one possibility. +Proposition 6.3. +Assume (A1)-(A2)-(A3)-(A4). Define G (iT) := {(v, iT(v)), v ∈ H(Σ)}. Then +H (Σ) = C (A) ⊕ G (iT). +Proof: +First of all, assume that (u, p) ∈ C (A) ∩ G (iT). 
This means that there exists v ∈ H(Ω) such that Av = B∗p and Bv = u, and that p = iTu. Combining these equations yields (A − iB∗TB)v = 0, hence v = 0 according to (A4), and finally (u, p) = 0. We have proved that C (A) ∩ G (iT) = {0}.
Now take an arbitrary (u, p) ∈ H(Σ) × H(Σ)∗. Since B : H(Ω) → H(Σ) is surjective, there exists w ∈ H(Ω) such that B(w) = u. Define v ∈ H(Ω) by v = (A − iB∗TB)−1(Aw − B∗p), which is a valid definition since A − iB∗TB : H(Ω) → H(Ω)∗ is an isomorphism according to (A4). We have in particular A(w − v) = B∗(p − iTBv). Set
u1 = B(v),   p1 = iTu1 = iTB(v),
u2 = B(w − v) = u − u1,   p2 = p − iTBv = p − p1.   (34)
By construction we have (u1, p1) ∈ G (iT). Moreover B(w − v) = u2 and A(w − v) = B∗p2 so that (u2, p2) ∈ C (A). Finally, the second line in (34) indicates that (u, p) = (u1, p1) + (u2, p2), which thus proves (u, p) ∈ C (A) + G (iT). We have just established that C (A) + G (iT) = H(Σ) ⊕ H(Σ)∗, which ends the proof.
□
The space G (iT) is simply the graph of the (bounded) operator iT : H(Σ) → H(Σ)∗. In the present analysis, it plays a secondary role and shall be used only to prove results about C (A). We have the following immediate result.
Lemma 6.4.
Define G (iT)♯ := {u ∈ H (Σ), [u, v] = 0 ∀v ∈ G (iT)}. Then G (iT)♯ = G (iT).
The proof is straightforward. This result means that G (iT) is its own polar set under the pairing [·, ·]. As we see now, the space C (A) fulfills a similar property.
Proposition 6.5.
Assume (A1)-(A2)-(A3)-(A4). Define C (A)♯ := {u ∈ H (Σ), [u, v] = 0 ∀v ∈ C (A)}. Then C (A)♯ = C (A∗).
Proof:
First of all we have C (A∗) ⊂ C (A)♯. Indeed take any (u, p) ∈ C (A). By definition, there exists w ∈ H(Ω) such that B(w) = u and Aw = B∗p. Then for any (u′, p′) ∈ C (A∗), since B(w′) = u′ and A∗w′ = B∗p′ for some w′ ∈ H(Ω), we have
[(u, p), (u′, p′)] = ⟨u, p′⟩ − ⟨u′, p⟩ = ⟨B(w), p′⟩ − ⟨B(w′), p⟩
= ⟨w, B∗(p′)⟩ − ⟨w′, B∗(p)⟩
= ⟨w, A∗(w′)⟩ − ⟨w′, A(w)⟩ = 0.
+13 + +Hence, to finish the proof, we need to show that C (A)♯ ⊂ C (A∗). For that, pick an arbitrary +u = (u, p) ∈ C (A)♯. The hypothesis of Section 3 hold for A∗ +Ω×Γ instead of AΩ×Γ, hence we +can apply Proposition 6.3 to A∗. This yields a decomposition u = u1 +u2 for some u1 ∈ C (A∗) +and some u2 ∈ G (iT). We have to prove that u2 = 0. By assumption we have +0 = [u, v] = [u1, v] + [u2, v] = [u2, v] +∀v ∈ C (A), +since C (A) ⊂ C (A∗)♯. Next Lemma 6.4 implies that 0 = [u2, v] = [u2, v + v′] for all v ∈ C (A) +and all v′ ∈ G (iT). Since C (A) ⊕ G (iT) = H (Σ) according to Proposition 6.3, we conclude +that 0 = [u2, w] ∀w ∈ H (Σ) hence finally u2 = 0. This shows that u = u1 ∈ C (A∗). We have +just established that C (A)♯ ⊂ C (A∗). +□ +We point that, because C (A) is closed, the previous result also implies that C (A) = C (A∗)♯. +Self-polarity appears to be a property of the following subspace (see Proposition 4.3) that is +pivotal in characterizing transmission conditions +X (Σ) := X(Σ) × X(Σ)◦. +Indeed we have X (Σ) = X (Σ)♯ := {u ∈ H (Σ), [u, v] = 0 ∀v ∈ X (Σ)} by the very definition +of X (Σ), as X(Σ)◦◦ = X(Σ) since X(Σ) is a closed subspace of H(Σ) (see e.g. [22, Thm.4.7] +or [2, Prop.1.9]). The next result establishes an important connection between the two spaces +C (A), X (Σ) and our primary boundary value problem (14). +Proposition 6.6. +Assume (A1)-(A2)-(A3)-(A4). +The operator u �→ (BR(u), (B†)∗AR(u)) continuously and +isomorphically maps ker(AΩ×Γ) onto C (A) ∩ X (Σ). As a consequence +dim(ker(AΩ×Γ) ) = dim(C (A) ∩ X (Σ)). +Proof: +Let u ∈ H(Ω×Γ) satisfy AΩ×Γ(u) = 0. In particular R(u) ∈ X(Ω) and AR(u) ∈ X(Ω)◦, see +(24) and (31). According to iv) of Lemma 4.1, there exists p ∈ X(Σ)◦ such that AR(u) = B∗p +and it is unique since B∗ : H(Σ)∗ → H(Ω)∗ is injective. We have +(B†)∗AR(u) = (B†)∗B∗p = (BB†)∗p = p. +Setting v := B · R(u), by construction (v, p) ∈ C (A). +We also have v ∈ X(Σ) since +R(u) ∈ X(Ω), so that (v, p) ∈ X(Σ) × X(Σ)◦ = X (Σ). 
In addition, the formula (v, p) = +(BRu, (B†)∗ARu) establishes the continuous dependency of (v, p) on u. +Reciprocally, consider an arbitrary pair (v, p) ∈ C (A) ∩ X (Σ). Since (v, p) ∈ C (A), +there exists w ∈ H(Ω) such that Aw = B∗p and B(w) = v, and such a w is unique since +ker(A)∩ker(B) = {0}, according to Lemma 5.1. As v ∈ X(Σ), we have w ∈ X(Ω) = B−1(X(Σ)) +according to iii) of Lemma 4.1, so there exists u ∈ H(Ω × Γ) such that R(u) = w and such +a u is unique due to the injectivity of R : H(Ω × Γ) → H(Ω). This leads to AR(u) = B∗p +and p ∈ X(Σ)◦ ⇒ B∗p ∈ X(Ω)◦ = ker(R∗). Since X(Ω) = R(H(Ω × Γ)), we conclude that +0 = R∗AR(u) = AΩ×Γ(u). +□ +Lemma 6.7. +Assume (A1)-(A2)-(A3)-(A4). The operator (u, p) �→ R∗(B∗p − AB†u) continuously maps +(C (A∗) ∩ X (Σ))♯ into range(AΩ×Γ). +14 + +Proof: +Take an arbitrary (u, p) ∈ (C (A∗) ∩ X (Σ))♯ and set f = R∗(B∗p − AB†u). +Ap- +plying Proposition 6.6 to A∗ +Ω×Γ instead of AΩ×Γ shows that ϕ ∈ ker(A∗ +Ω×Γ) ⇒ (v, q) = +(BR(ϕ), (B†)∗A∗R(ϕ)) ∈ C (A∗) ∩ X (Σ). Hence ⟨f, ϕ⟩ = ⟨R∗(B∗p − AB†u), ϕ⟩ = ⟨p, BRϕ⟩ − +⟨u, (B†)∗A∗Rϕ⟩ = [(v, q), (u, p)] = 0. This proves f ∈ ker(A∗ +Ω×Γ)◦ = range(AΩ×Γ) according +to (16). +□ +Proposition 6.8. +Assume (A1)-(A2)-(A3)-(A4). Then C (A) + X (Σ) = (C (A∗) ∩ X (Σ))♯. In particular the +subspace C (A) + X (Σ) is closed in H (Σ). +Proof: +Clearly we have C (A) + X (Σ) ⊂ (C (A∗) ∩ X (Σ))♯, so we only need to establish that +(C (A∗) ∩ X (Σ))♯ ⊂ C (A) + X (Σ). Pick any pair (pd, pn) ∈ (C (A∗) ∩ X (Σ))♯. According to +Lemma 6.7 we have R∗(B∗pn − AB†pd) ∈ range(AΩ×Γ). Applying the definition of A given +by (29), there exists ϕ ∈ X(Ω) satisfying ⟨Aϕ, w⟩ = ⟨B∗pn − AB†pd, w⟩ for all ∀w ∈ X(Ω). +Set φ = ϕ + B†(pd) and ud = B(φ) = B(ϕ) + pd. +By construction, ⟨A(φ), w⟩ = +⟨pn, B(w)⟩ = 0 ∀w ∈ ker(B) ⊂ X(Ω), which rewrites A(φ) ∈ ker(B)◦. Applying i) of Lemma +4.1 we have Aφ = B∗un for some un ∈ H(Σ)∗. This implies in particular un = (BB†)∗un = +(B†)∗B∗un = (B†)∗Aφ. 
+We have Aφ = B∗un and Bφ = ud hence (ud, un) ∈ C (A). On the other hand pd − +ud = −Bϕ ∈ X(Σ) since ϕ ∈ X(Ω) and, for any w ∈ X(Σ) we have B†(w) ∈ X(Ω) hence +⟨pn−un, w⟩ = ⟨Aφ, B†w⟩−⟨Aφ, B†w⟩ = 0, which implies pn−un ∈ X(Σ)◦. Finally (ud, un) ∈ +C (A) and (pd, pn) − (ud, un) ∈ X (Σ) imply that (pd, pn) ∈ C (A) + X (Σ). +□ +Corollary 6.9. +Assume (A1)-(A2)-(A3)-(A4). Then +codim(C (A) + X (Σ) ) = codim(range(AΩ×Γ) ). +Proof: +We have (C (A) + X (Σ))♯ = C (A)♯ ∩ X (Σ)♯ see e.g. +[2, Prop.2.14]. +According to +Proposition 6.5 applied to A∗, and since X (Σ)♯ = X (Σ) by construction, we conclude +that (C (A) + X (Σ))♯ = C (A∗) ∩ X (Σ). As the bilinear pairing [·, ·] is non-degenerate and +C (A) + X (Σ) is closed according to Proposition 6.8, we conclude codim(C (A) + X (Σ)) = +dim((C (A) + X (Σ))♯) = dim(C (A∗) ∩ X (Σ)). There only remains to apply Proposition 6.6 +to A∗ +Ω×Γ combined with (16). +□ +7 +Scattering operator +Proposition 6.6 and 6.8 and Corollary 6.9 above show that the kernel and the range of AΩ×Γ +are closely related to the pair of subspaces C (A), X (Σ). This can be exploited to study other +formulations of the same boundary value problem. +Proposition 7.1. +Assume (A1)-(A2)-(A3)-(A4). If u ∈ X(Ω) satisfies (31), then there exists a unique p ∈ H(Σ)∗ +such that the pair (u, p) satisfies +u ∈ H(Ω), p ∈ H(Σ)∗, +Au − B∗p = ℓ, +− p + iTBu = Π(p + iTBu). +(35) +15 + +Reciprocally if the pair (u, p) ∈ H(Ω) × H(Σ)∗ satisfies (35), then u satisfies (31). +Proof: +Assume first that u ∈ X(Ω) satisfies (31). This formulation rewrites equivalently as Au − +ℓ ∈ X(Ω)◦. Since X(Ω)◦ = B∗(X(Σ)◦) according to iv) Lemma 4.1, and as B∗ : H(Σ)∗ → H(Ω)∗ +is injective (B is surjective), there exists a unique p ∈ X(Σ)◦ such that Au − ℓ = B∗p. On +the other hand, u ∈ X(Ω) ⇒ B(u) ∈ X(Σ) according to iii) of Lemma 4.1. Finally applying +Proposition 4.3, we obtain −p + iTBu = Π(p + iTBu). +Reciprocally, assume that (35) holds. 
Then, according to Proposition 4.3, we have p ∈ +X(Σ)◦ and B(u) ∈ X(Σ). Moreover we have B(u) ∈ X(Σ) ⇒ u ∈ X(Ω) according to iii) +of Lemma 4.1. Since p ∈ X(Σ)◦, we have B∗p ∈ X(Ω)◦ so that, for any v ∈ X(Ω) we have +0 = ⟨B∗p, v⟩ = ⟨Au − ℓ, v⟩. To sum up, we have proved that u ∈ X(Ω) and ⟨Au, v⟩ = +⟨ℓ, v⟩ ∀v ∈ X(Ω). +□ +In a domain decomposition context, a substructuring strategy applied to Problem (14) nat- +urally leads to eliminating the volume unknowns in (35). This is performed by means of a +scattering map that takes ingoing traces as input and returns outgoing traces as output. +Proposition 7.2. +Assume (A1)-(A2)-(A3)-(A4). There exists a unique bounded linear map S : H(Σ)∗ → H(Σ)∗, +later referred to as scattering operator, satisfying +p + iTv = S(p − iTv) +∀(v, p) ∈ C (A). +(36) +It is also given by the formula S = Id+ 2iTB(A − iB∗TB)−1B∗. It is T−1-contractive and, for +any q ∈ H(Σ)∗, satisfies +∥S(q)∥2 +T−1 + 4|ℑm{⟨A(u), u⟩}| = ∥q∥2 +T−1 +where u = (A − iB∗TB)−1B∗q. +Proof: +We follow the proof pattern presented e.g. in [6, Lem.5.2]. First of all, Identity (36) clearly +and unambiguously defines the operator S as a linear map according to Lemma 6.1. Next, +pick an arbitrary q ∈ H(Σ)∗ and set u = (A − iB∗TB)−1B∗q and p = q + iTB(u). We have +Au − B∗p = 0 and q = p − iTB(u) and S(q) = p + iTB(u) = q + 2iTB(u), which leads +to S(q) = (Id + 2iTB(A − iB∗TB)−1B∗)q. Finally developing the squared norm, and taking +account of (30), we have +∥S(q)∥2 +T−1 = ∥p + iTB(u)∥2 +T−1 += ∥p − iTB(u)∥2 +T−1 + 4ℑm{⟨q, B(u)⟩} + 4∥B(u)∥2 +T += ∥q∥2 +T−1 + 4ℑm{⟨B∗(q), u⟩} + 4∥B(u)∥2 +T += ∥q∥2 +T−1 + 4ℑm{⟨A(u), u⟩} − 4ℑm{i⟨B∗TB(u), u⟩} + 4∥B(u)∥2 +T += ∥q∥2 +T−1 − 4|ℑm{⟨A(u), u⟩}| +□ +The space of Cauchy data was used to characterize the scattering operator. Reciprocally, the +scattering operator provides a characterization of the space of Cauchy data. The following +result should be compared with (27). +16 + +Lemma 7.3. +Assume (A1)-(A2)-(A3)-(A4). 
For any (v, p) ∈ 𝓗(Σ) we have

    (v, p) ∈ 𝒞(A)  ⟺  p + iTv = S(p − iTv).

Proof:
From the very definition of the scattering operator in Proposition 7.2, it is clear that (v, p) ∈ 𝒞(A) ⇒ p + iTv = S(p − iTv). Reciprocally, pick arbitrarily some (v, p) ∈ 𝓗(Σ) such that p + iTv = S(p − iTv). We know from Proposition 6.3 that there exists v′ ∈ H(Σ) such that (v − v′, p − iTv′) ∈ 𝒞(A) so, applying Proposition 7.2, we obtain

    (p − iTv′) + iT(v − v′) = S( (p − iTv′) − iT(v − v′) )
    ⟺  p + iTv − 2iTv′ = S(p − iTv)
    ⟺  2iTv′ = 0  ⟹  v′ = 0.    □

The scattering operator has a subdomain-wise block diagonal structure. This is clearly visible from the formula S = Id + 2iTB(A − iB∗TB)⁻¹B∗, where each term in the right-hand side is block diagonal. This yields

    S = diag(SΓ, SΩ1, …, SΩJ)
    where SΩj = Id + 2iTΩjBΩj(AΩj − iB∗ΩjTΩjBΩj)⁻¹B∗Ωj
    and   SΓ  = Id + 2iTΓBΓ(AΓ − iB∗ΓTΓBΓ)⁻¹B∗Γ.

Let us discuss the particular form that the boundary scattering operator SΓ takes for Dirichlet, Neumann and Robin conditions. Recall that BΓ : Hb(Γ) := H¹ᐟ²(Γ) × H⁻¹ᐟ²(Γ) → H¹ᐟ²(Γ) is defined by BΓ(α, p) = α, hence B∗Γ(p) = (p, 0).

Example 7.4 (Dirichlet condition). Taking the same notations as in Examples 3.1 and 5.2, since B∗Γp = (p, 0) for all p ∈ H⁻¹ᐟ²(Γ), we conclude that BΓ(AΓ − iB∗ΓTΓBΓ)⁻¹B∗Γ = 0 and finally SΓ = Id.

Example 7.5 (Neumann condition). Taking the same notations as in Examples 3.2 and 5.3, in this situation we have BΓ(AΓ − iB∗ΓTΓBΓ)⁻¹B∗Γ = iTΓ⁻¹. This yields the expression SΓ = −Id.

Example 7.6 (Robin condition). Taking the same notations as in Examples 3.3 and 5.4, in this situation we have BΓ(AΓ − iB∗ΓTΓBΓ)⁻¹B∗Γ = i(Λ + TΓ)⁻¹, which yields

    SΓ = (Λ − TΓ)(Λ + TΓ)⁻¹.

8 Skeleton formulation

Now we shall use the scattering operator of the previous section to transform further the boundary value problem (35).
Once the volume unknowns have been eliminated, this reduces to an equation involving only traces on the skeleton of the subdomain partition.

Proposition 8.1.
Assume (A1)-(A2)-(A3)-(A4). Define f ∈ H(Σ)∗ by f = −2iΠTB(A − iB∗TB)⁻¹ℓ. If (u, p) ∈ H(Ω) × H(Σ)∗ solves (35), then q = p − iTB(u) satisfies the skeleton problem

    q ∈ H(Σ)∗  and  (Id + ΠS)q = f.                            (37)

Reciprocally, if q satisfies the above equation, then the pair (u, p) ∈ H(Ω) × H(Σ)∗, given by u = (A − iB∗TB)⁻¹(B∗q + ℓ) and p = q + iTB(u), solves (35).

Proof:
If (u, p) ∈ H(Ω) × H(Σ)∗ solves (35) and q = p − iTB(u), then (A − iB∗TB)u = B∗(p − iTBu) + ℓ. Left multiplying this equality by 2iTB(A − iB∗TB)⁻¹ yields an expression for 2iTB(u) that can be used in p + iTB(u) = q + 2iTB(u) in the last line of (35). This eventually leads to (37).

Reciprocally, if q solves (37) and u = (A − iB∗TB)⁻¹(B∗q + ℓ) and p = q + iTB(u), then we have Au = B∗(q + iTBu) + ℓ = B∗p + ℓ. On the other hand, using the expressions of f and S, the skeleton equation in (37) writes

    q + Π(q + 2iTB(A − iB∗TB)⁻¹(B∗q + ℓ)) = 0
    ⟺  q + Π(q + 2iTB(u)) = 0
    ⟺  p − iTB(u) + Π(p + iTB(u)) = 0.

This finally proves that the pair (u, p) satisfies (35). □

Next we investigate whether or not the skeleton formulation (37) is uniquely solvable. We will show that this is directly correlated to the unique solvability of (14).

Proposition 8.2.
Assume (A1)-(A2)-(A3)-(A4). The application (v, p) ↦ p − iT(v) induces a continuous isomorphism from 𝒞(A) ∩ 𝒳(Σ) onto ker(Id + ΠS). As a consequence,

    dim( ker(Id + ΠS) ) = dim( ker(AΩ×Γ) ).

Proof:
First of all, if (v, p) ∈ 𝒞(A) ∩ 𝒳(Σ), then p + iTv = S(p − iTv) according to Lemma 7.3, and p − iTv = −Π(p + iTv) according to (27). Combining these two identities leads to p − iTv ∈ ker(Id + ΠS). Next, if (v, p) ∈ 𝒞(A) ∩ 𝒳(Σ) and p − iTv = 0, then (v, p) = (0, 0) according to Lemma 6.1, hence the injectivity.
Finally, if q ∈ ker(Id + ΠS), then there exists a unique (v, p) ∈ 𝒞(A) such that p − iTv = q according to Lemma 6.1 and, applying (36), we obtain S(q) = S(p − iTv) = p + iTv. This latter identity combined with (Id + ΠS)q = 0 leads to −p + iTv = Π(p + iTv), which implies (v, p) ∈ 𝒳(Σ) according to Proposition 4.3. Hence we conclude that (v, p) ∈ 𝒞(A) ∩ 𝒳(Σ). □

Proposition 8.3.
Assume (A1)-(A2)-(A3)-(A4). The subspace range(Id + ΠS) is closed in H(Σ)∗.

Proof:
Define Θ : H(Σ)∗ → 𝓗(Σ) by Θ(q) := (iT⁻¹(q), q), which satisfies 2∥q∥²_{T⁻¹} = ∥Θ(q)∥²_{T×T⁻¹} for all q ∈ H(Σ)∗. Taking account that 𝒞(A) + 𝒳(Σ) is closed, see Proposition 6.8, we are going to prove that

    range(Id + ΠS) = Θ⁻¹( 𝒞(A) + 𝒳(Σ) ).

Take any p ∈ range(Id + ΠS). Applying Lemma 6.1, there exists a unique (v, q) ∈ 𝒞(A) such that 2p = (Id + ΠS)(q − iTv). Since S(q − iTv) = q + iTv according to Proposition 7.2, and writing 2p = (Id + Π)p + (Id − Π)p, we obtain

    (Id + Π)p + (Id − Π)p = q − iTv + Π(q + iTv)
    ⟺  (Id + Π)p + (Id − Π)p = (Id + Π)q − (Id − Π)(iTv)
    ⟺  (Id + Π)(p − q) = −(Id − Π)(p + iTv).

As (Id ± Π)/2 are two mutually orthogonal projectors, see Proposition 4.3, we deduce that (Id + Π)(p − q) = 0 and (Id − Π)(p + iTv) = 0. This eventually leads to p − q ∈ X(Σ)◦ and p + iTv ∈ T(X(Σ)) ⟺ iT⁻¹p − v ∈ X(Σ). We conclude that Θ(p) − (v, q) ∈ 𝒳(Σ), hence Θ(p) ∈ 𝒞(A) + 𝒳(Σ).

Reciprocally, pick an arbitrary p ∈ Θ⁻¹(𝒞(A) + 𝒳(Σ)). This means that Θ(p) − (v, q) ∈ 𝒳(Σ) for some (v, q) ∈ 𝒞(A). As a consequence, (Id − Π)(p + iTv) = 0 and (Id + Π)(p − q) = 0. Adding these two equations, and taking account that q + iTv = S(q − iTv) according to (36), leads to

    (Id + Π)(p − q) = −(Id − Π)(p + iTv)
    ⟺  (Id + Π)p + (Id − Π)p = q − iTv + Π(q + iTv)
    ⟺  2p = (Id + ΠS)(q − iTv),

hence p ∈ range(Id + ΠS). □

Proposition 8.4.
Assume (A1)-(A2)-(A3)-(A4). Then

    codim( range(Id + ΠS) ) = codim( range(AΩ×Γ) ).
Proof:
Since range(Id + ΠS) is closed according to Proposition 8.3, we deduce that codim( range(Id + ΠS) ) = dim( ker((Id + ΠS)∗) ). Proposition 4.3, in particular the characterization of Q = (Id + Π)/2 as a T⁻¹-orthogonal projection, shows that Π² = Id and Π∗ = T⁻¹ΠT, so we have

    (Id + ΠS)∗ = (TΠ∗)⁻¹(Id + ΠTS∗T⁻¹)TΠ∗.

Setting ˜S := TS∗T⁻¹, and noting that TΠ∗ : H(Σ) → H(Σ)∗ is an isomorphism, we have dim( ker((Id + ΠS)∗) ) = dim( ker(Id + Π˜S) ). Let us have a close look at ˜S, taking account of the formulas given by Proposition 7.2. Since T∗ = T, we obtain

    ˜S = Id + 2iTB(A∗ − iB∗TB)⁻¹B∗.

We see that ˜S differs from S only in that A is replaced by A∗. As a consequence, we can apply Proposition 8.2, replacing AΩ×Γ with A∗Ω×Γ. Using (16), this yields dim( ker(Id + Π˜S) ) = dim( ker(A∗Ω×Γ) ) = codim( range(AΩ×Γ) ). □

If V₁, V₂ are Banach spaces, a bounded linear map L : V₁ → V₂ is of Fredholm type if and only if range(L) is closed in V₂, dim( ker(L) ) < ∞ and codim( range(L) ) < ∞. In this case, the index of L is the number index(L) := dim( ker(L) ) − codim( range(L) ). The results of the present paragraph (in particular Propositions 8.2, 8.3 and 8.4) lead to the following corollary.

Corollary 8.5.
Assume (A1)-(A2)-(A3)-(A4). The operator AΩ×Γ : H(Ω × Γ) → H(Ω × Γ)∗ is of Fredholm type if and only if Id + ΠS : H(Σ)∗ → H(Σ)∗ is of Fredholm type and, in this case, both operators have the same index.

9 Coercivity estimate

Now we study quantitatively how the inf-sup constant of Id + ΠS relates to the inf-sup constant of the operator AΩ×Γ. Taking the cue from [6, §8], we first establish an intermediate result. Recall that inf-sup constants are defined according to (4).

Proposition 9.1.
Assume (A1)-(A2)-(A3)-(A4). Then

    infsup_{H(Ω×Γ)→H(Ω×Γ)∗}(AΩ×Γ) ≤ (1 + ∥A∥) · inf_{u∈𝒞(A)\{0}, v∈𝒳(Σ)\{0}} ∥u + v∥_{T×T⁻¹} / ∥u∥_{T×T⁻¹}

where

    ∥A∥ := sup_{u,v∈H(Ω)\{0}} |⟨u, A(v)⟩| / ( ∥u∥_{H(Ω)} ∥v∥_{H(Ω)} ).
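Before turning to the proof, note one way to read the right-hand side (a reformulation not spelled out in the text, using only that 𝒳(Σ) is a closed subspace): taking the infimum over v first, the minimized quantity is the relative distance from u to 𝒳(Σ),

```latex
\inf_{\substack{u \in \mathscr{C}(A)\setminus\{0\} \\ v \in \mathscr{X}(\Sigma)\setminus\{0\}}}
  \frac{\|u+v\|_{\mathrm{T}\times\mathrm{T}^{-1}}}{\|u\|_{\mathrm{T}\times\mathrm{T}^{-1}}}
\;=\;
\inf_{u \in \mathscr{C}(A)\setminus\{0\}}
  \frac{\operatorname{dist}_{\mathrm{T}\times\mathrm{T}^{-1}}\!\big(u,\,\mathscr{X}(\Sigma)\big)}
       {\|u\|_{\mathrm{T}\times\mathrm{T}^{-1}}},
```

so Proposition 9.1 bounds the inf-sup constant of AΩ×Γ by (1 + ∥A∥) times the minimal angular gap between the Cauchy-data space 𝒞(A) and the transmission space 𝒳(Σ). For closed subspaces, this gap is positive exactly when 𝒞(A) ∩ 𝒳(Σ) = {0} and 𝒞(A) + 𝒳(Σ) is closed, which is why Proposition 6.8 matters here.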
Proof:
In the case where 𝒞(A) ∩ 𝒳(Σ) ≠ {0}, the inf-sup constant vanishes since ker(AΩ×Γ) ≠ {0} according to Proposition 6.6, so the estimate is automatically satisfied in this case. We shall assume 𝒞(A) ∩ 𝒳(Σ) = {0}. According to Proposition 6.6, this leads to

    ker(AΩ×Γ) = {0}  and  α := infsup_{H(Ω×Γ)→H(Ω×Γ)∗}(AΩ×Γ) > 0.          (38)

Now pick any u ∈ 𝒞(A) \ {0} and any v ∈ 𝒳(Σ) \ {0}, and set (pd, pn) := u + v ∈ 𝓗(Σ) = H(Σ) × H(Σ)∗. The invertibility of AΩ×Γ provides the existence of a unique ϕ ∈ X(Ω) satisfying ⟨A(ϕ), w⟩ = −⟨A B†(pd), w⟩ + ⟨pn, B(w)⟩ for all w ∈ X(Ω). In particular,

    α ∥ϕ∥_{H(Ω)} ≤ ∥A∥ ∥pd∥_T + ∥pn∥_{T⁻¹}.                               (39)

Set φ = ϕ + B†(pd) and ud = B(φ) = B(ϕ) + pd. By construction, for any w ∈ H(Ω) satisfying B(w) = 0, we have ⟨A(φ), w⟩ = ⟨pn, B(w)⟩ = 0, which rewrites A(φ) ∈ ker(B)◦. Applying i) of Lemma 4.1, we have Aφ = B∗un for some un ∈ H(Σ)∗. This implies in particular un = (BB†)∗un = (B†)∗B∗un = (B†)∗Aφ. From the previous definitions, and the facts that ∥B(w)∥_T ≤ ∥w∥_{H(Ω)} and ∥B†(q)∥_{H(Ω)} = ∥q∥_T, we obtain the estimates

    ∥φ∥_{H(Ω)} ≤ ∥ϕ∥_{H(Ω)} + ∥pd∥_T,
    ∥ud∥_T ≤ ∥φ∥_{H(Ω)},
    ∥un∥_{T⁻¹} ≤ ∥A∥ ∥φ∥_{H(Ω)}.                                          (40)

We have Aφ = B∗un and Bφ = ud, hence (ud, un) ∈ 𝒞(A) by construction. On the other hand, we have pd − ud = −B(ϕ) ∈ X(Σ) since ϕ ∈ X(Ω) and, for any w ∈ X(Σ), we have B†(w) ∈ X(Ω), hence ⟨pn − un, w⟩ = ⟨Aφ, B†w⟩ − ⟨Aφ, B†w⟩ = 0, which implies that pn − un ∈ X(Σ)◦. Finally, we have shown that (ud, un) ∈ 𝒞(A) and (pd, pn) − (ud, un) ∈ 𝒳(Σ) and, since (pd, pn) = u + v ∈ 𝒞(A) ⊕ 𝒳(Σ), we conclude that u = (ud, un). There only remains to combine (39) and (40) to obtain the desired estimate. □

Theorem 9.2.
Assume (A1)-(A2)-(A3)-(A4). Then

    infsup_{H(Ω×Γ)→H(Ω×Γ)∗}(AΩ×Γ) ≤ (1 + ∥A∥) · infsup_{H(Σ)∗→H(Σ)∗}(Id + ΠS).

Proof:
In the case where ker(AΩ×Γ) ≠ {0}, we also have ker(Id + ΠS) ≠ {0} according to Proposition 8.2 and, in this situation, the desired estimate is satisfied, with both sides of the estimate equal to 0.
Hence we can assume that ker(AΩ×Γ) = {0} and, in this situation, both AΩ×Γ : H(Ω × Γ) → H(Ω × Γ)∗ and Id + ΠS : H(Σ)∗ → H(Σ)∗ are injective with closed range. Pick an arbitrary f ∈ H(Σ)∗. According to Lemma 6.1, there exists a unique pair u = (ud, un) ∈ 𝒞(A) such that f = un − iT(ud), and we have ∥f∥_{T⁻¹} ≤ √2 ∥u∥_{T×T⁻¹}, which rewrites as

    ∥u∥_{T×T⁻¹} / ∥f∥_{T⁻¹} ≥ 1/√2.

Next set g = (Id + ΠS)f and p = (pd, pn) := (iT⁻¹(g), g)/2. We have in particular ∥g∥_{T⁻¹} = √2 ∥p∥_{T×T⁻¹}. Since S(f) = S(un − iT(ud)) = un + iT(ud) according to Proposition 7.2, we obtain

    un − iT(ud) + Π(un + iT(ud)) = f + ΠS(f)
                                 = g = (Id + Π)g/2 + (Id − Π)g/2
                                 = (Id + Π)pn − i(Id − Π)T(pd)
                                 = pn − iT(pd) + Π(pn + iT(pd)).

Re-arranging the terms in the equality above so as to move all contributions involving Π to the right-hand side, we obtain −(pn − un) + iT(pd − ud) = Π( (pn − un) + iT(pd − ud) ). According to Proposition 4.3, this implies that (pd, pn) − (ud, un) ∈ 𝒳(Σ). Since we have (ud, un) ∈ 𝒞(A) by construction, we can apply Proposition 9.1, which yields

    ∥(Id + ΠS)f∥_{T⁻¹} / ∥f∥_{T⁻¹} = ∥g∥_{T⁻¹} / ∥f∥_{T⁻¹} ≥ ∥p∥_{T×T⁻¹} / ∥u∥_{T×T⁻¹} ≥ infsup_{H(Ω×Γ)→H(Ω×Γ)∗}(AΩ×Γ) / (1 + ∥A∥).

This establishes the desired estimate, since f ∈ H(Σ)∗ was arbitrary. □

The estimate provided by Theorem 9.2 is remarkable in several respects. First of all, it holds even if ker(AΩ×Γ) is non-trivial. Secondly, it does not involve any hidden "C > 0" constant. In particular, it does not involve any frequency dependency, although the inf-sup constant of AΩ×Γ itself a priori depends on the frequency. This means that, to estimate the frequency dependency of the inf-sup constant of Id + ΠS, it suffices to derive such an estimate for AΩ×Γ. A further striking feature is that the number of subdomains J does not come into play in this estimate.

As an interesting additional result in the perspective of an effective linear solve, the contractivity of Π and S leads to the coercivity of the operator Id + ΠS. The next result can be combined with Theorem 9.2 to obtain an effective estimate of the coercivity constant.

Corollary 9.3.
Assume (A1)-(A2)-(A3)-(A4). Then Id + ΠS : H(Σ)∗ → H(Σ)∗ is coercive with respect to the scalar product induced by T⁻¹, and we have

    inf_{q∈H(Σ)∗\{0}} ℜe{⟨(Id + ΠS)q, T⁻¹q⟩} / ∥q∥²_{T⁻¹} ≥ (1/2) ( infsup_{H(Σ)∗→H(Σ)∗}(Id + ΠS) )².

Proof:
For any q ∈ H(Σ)∗,

    ∥q∥²_{T⁻¹} ≥ ∥ΠS(q)∥²_{T⁻¹} = ∥(Id + ΠS)q − q∥²_{T⁻¹}
               = ∥(Id + ΠS)q∥²_{T⁻¹} + ∥q∥²_{T⁻¹} − 2 ℜe{⟨(Id + ΠS)q, T⁻¹q⟩}
    ⟹  ℜe{⟨(Id + ΠS)q, T⁻¹q⟩} / ∥q∥²_{T⁻¹} ≥ ( ∥(Id + ΠS)q∥_{T⁻¹} / ∥q∥_{T⁻¹} )² / 2.    □

We conclude this article by illustrating how the previous results lead to estimates of the coercivity constant of the skeleton operator in a concrete case.

Example 9.4.
Consider the case where Rᵈ is R² or R³. Assume that µ = 1, κ = k ∈ (0, +∞), and choose AΓ as in Example 3.3 with ⟨Λ(u), v⟩ = k ∫_Γ uv dσ, which models the Robin condition ∂ₙu − iku = 0 on Γ. Assume in addition that Ω is a convex polyhedron. Then we have

    ⟨AΩ×Γ(u, p), (v, q)⟩ = ∫_Ω ∇u·∇v − k²uv dx − ik ∫_Γ uv dσ + ∫_Γ q TΓ p dσ.

Let us take γ = 1/k for the parameter involved in (8). From these choices, and proceeding as in [15, Lem. 2.4] for dealing with boundary terms on Γ, we see that the continuity modulus ∥A∥ (as defined in Proposition 9.1) can be bounded independently of k. On the other hand, we know from [18] that

    infsup_{H(Ω×Γ)→H(Ω×Γ)∗}(AΩ×Γ) ≥ O_{k→∞}(1/k).

We can now plug this estimate into Theorem 9.2, and we see that the inf-sup constant of Id + ΠS also admits a lower bound that behaves like O(1/k) for k → ∞. Finally, combining with Corollary 9.3, we see that the coercivity constant of the skeleton formulation behaves like O(1/k²), i.e.

    inf_{q∈H(Σ)∗\{0}} ℜe{⟨(Id + ΠS)q, T⁻¹q⟩} / ∥q∥²_{T⁻¹} ≥ O_{k→∞}(1/k²).

References

[1] A. Bendali and Y. Boubendir. Non-overlapping domain decomposition method for a nodal finite element method. Numerische Mathematik, 103(4):515–537, Jun 2006.
[2] H. Brezis. Functional analysis, Sobolev spaces and partial differential equations. Universitext. Springer, New York, 2011.
[3] O. Cessenat and B. Despres. Application of an ultra weak variational formulation of elliptic PDEs to the two-dimensional Helmholtz problem. SIAM J. Numer. Anal., 35(1):255–299, 1998.
[4] P.G. Ciarlet. Introduction to numerical linear algebra and optimization. Camb. Texts Appl. Math. Cambridge: Cambridge University Press, 1988.
[5] X. Claeys. Non-local variant of the Optimised Schwarz Method for arbitrary non-overlapping subdomain partitions. ESAIM: M2AN, 55(2):429–448, 2021.
[6] X. Claeys. Nonselfadjoint impedance in Generalized Optimized Schwarz Methods. IMA Journal of Numerical Analysis, November 2022.
[7] X. Claeys, F. Collino, and E. Parolin. Nonlocal optimized Schwarz methods for time-harmonic electromagnetics. Adv. Comput. Math., 48(6):Paper No. 72, 2022.
[8] X. Claeys and E. Parolin. Robust treatment of cross-points in optimized Schwarz methods. Numer. Math., 151(2):405–442, 2022.
[9] F. Collino, S. Ghanemi, and P. Joly. Domain decomposition method for harmonic wave propagation: a general presentation. Computer Methods in Applied Mechanics and Engineering, 184(2):171–211, 2000.
[10] B. Després. Méthodes de décomposition de domaine pour les problèmes de propagation d'ondes en régime harmonique. Le théorème de Borg pour l'équation de Hill vectorielle. Institut National de Recherche en Informatique et en Automatique (INRIA), Rocquencourt, 1991. Thèse, Université de Paris IX (Dauphine), Paris, 1991.
[11] B. Després, A. Nicolopoulos, and B. Thierry. Optimized transmission conditions in domain decomposition methods with cross-points for Helmholtz equation. SIAM J. Numer. Anal., 60(5):2482–2507, 2022.
[12] M. Gander and F. Kwok. On the applicability of Lions' energy estimates in the analysis of discrete optimized Schwarz methods with cross points. Lecture Notes in Computational Science and Engineering, 91, 2013.
[13] M.J. Gander and K. Santugini. Cross-points in domain decomposition methods with a finite element discretization. Electron. Trans. Numer. Anal., 45:219–240, 2016.
[14] M.J. Gander and H. Zhang. A class of iterative solvers for the Helmholtz equation: factorizations, sweeping preconditioners, source transfer, single layer potentials, polarized traces, and optimized Schwarz methods. SIAM Rev., 61(1):3–76, 2019.
[15] I.G. Graham, E.A. Spence, and J. Zou. Domain decomposition with local impedance conditions for the Helmholtz equation with absorption. SIAM J. Numer. Anal., 58(5):2515–2543, 2020.
[16] T. Kato. Perturbation theory for linear operators. Classics in Mathematics. Springer-Verlag, Berlin, 1995. Reprint of the 1980 edition.
[17] W. McLean. Strongly elliptic systems and boundary integral equations. Cambridge: Cambridge University Press, 2000.
[18] J.M. Melenk. On generalized finite-element methods. ProQuest LLC, Ann Arbor, MI, 1995. Thesis (Ph.D.), University of Maryland, College Park.
[19] A. Modave, A. Royer, X. Antoine, and C. Geuzaine. A non-overlapping domain decomposition method with high-order transmission conditions and cross-point treatment for Helmholtz problems. Comput. Methods Appl. Mech. Eng., 368:23, 2020. Id/No 113162.
[20] E. Parolin. Non-overlapping domain decomposition methods with non-local transmission operators for harmonic wave propagation problems. Theses, Institut Polytechnique de Paris, December 2020.
[21] C. Pechstein. Finite and boundary element tearing and interconnecting solvers for multiscale problems, volume 90 of Lecture Notes in Computational Science and Engineering. Springer, Heidelberg, 2013.
[22] W. Rudin. Functional analysis. 2nd ed. New York, NY: McGraw-Hill, 1991.
[23] O. Steinbach. Numerical approximation methods for elliptic boundary value problems. Finite and boundary elements, translated from the 2003 German original. Springer, New York, 2008.
[24] T. von Petersdorff. Boundary integral equations for mixed Dirichlet, Neumann and transmission problems. Math. Methods Appl. Sci., 11(2):185–213, 1989.
diff --git a/MNE1T4oBgHgl3EQfHAOD/content/tmp_files/load_file.txt b/MNE1T4oBgHgl3EQfHAOD/content/tmp_files/load_file.txt
new file mode 100644
index 0000000000000000000000000000000000000000..98b86736575bc22c28a8b5b2530bdd841bf73c12
--- /dev/null
+++ b/MNE1T4oBgHgl3EQfHAOD/content/tmp_files/load_file.txt
@@ -0,0 +1,912 @@
+filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf,len=911
arXiv:2301.02921v1 [math.AP] 7 Jan 2023

Non-local optimized Schwarz method with physical boundaries

X. Claeys¹
¹Sorbonne Université, Laboratoire Jacques-Louis Lions

Abstract
We extend the theoretical framework of non-local optimized Schwarz methods as introduced in [Claeys, 2021], considering a Helmholtz equation posed in a bounded cavity supplemented with a variety of conditions modeling material boundaries. The problem is reformulated equivalently as an equation posed on the skeleton of a non-overlapping partition of the computational domain, involving an operator of the form "identity + contraction".
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' The analysis covers the possibility of resonance phenomena where the Helmholtz problem is not uniquely solvable.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' In case of unique solvability, the skeleton formulation is proved coercive, and an explicit bound for the coercivity constant is provided in terms of the inf-sup constant of the primary Helmholtz boundary value problem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' Introduction Large scale simulation of harmonic wave propagation phenomena remains a challenge in the context of which one of the most effective substructuring domain decomposition methods (DDM) was introduced by Després [10].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' Commonly referred to as Optimized Schwarz Method (OSM), it consists in local solves of the wave equation, maintaining a coupling between sub- domains through a reformulation of transmission conditions in terms of ingoing and outgoing Robin traces.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' The new transmission conditions involve an exchange operator that swaps traces from both sides of each interface between neighboring subdomains.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' This approach was put in a general theoretical framework in [9] and we point to [14] for an overview of this type of strategy.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' In a discrete setting, the appropriate definition of the exchange operator raises issues at cross-points, where at least three degrees of freedom have to communicate, because it is then unclear what should be the discrete counterpart of swapping.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' Although several heuristics had been proposed in the literature for dealing with this situation [12, 13, 19, 11, 1], most strategies based on this local swapping operator experienced deteriorated performance in the presence of cross points.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' In a series of articles [5, 6, 7, 8], we proposed a variant of OSM where the usual local swap- ping exchange operator is replaced by an alternative a priori non-local operator that naturally accommodates the presence of cross-points.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' This new approach can cope with arbitrary sub- domain partitions, with a possibly very complicated wire basket.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' In [5], we analyzed this new approach at the continuous level considering a transmission problem posed on the full space 1 Rd, and the formulation associated to this new DDM strategy was proved strongly coercive, which paved the way to convergence estimates for linear solvers (e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content='g.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' Richardson, GMRes).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' This novel approach was adapted to a finite element discretised setting and a full conver- gence theory was developed in [8, 6].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' In passing, this new theoretical framework covered the case of the original Després algorithm hence offering a genuine generalization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' The whole the- ory was confirmed by numerical results both in 2D and 3D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' While the previous developments were concerned with scalar harmonic wave propagation, the case of Maxwell’s equations was considered in [7, 20].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' In the present contribution we extend the theory of [5] in several directions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' First of all, while [5] considered only the case of a transmission problem posed on the whole of Rd, we consider here the case of a cavity problem posed in a bounded domain Ω ⊂ Rd.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' This boundary value problem takes the form div(µ−1∇u) + κ2u = −f in Ω + boundary condition on ∂Ω.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' (1) Here again we reformulate it as an equation in terms of traces posed on the skeleton of the subdomain partition, which we call skeleton formulation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' While in previous contributions the problem had been assumed uniquely solvable (see e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' [8, §1] or [6, §1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content='2]), the analysis is here extended so as to cover the case where (1) is not necessarily uniquely solvable which covers the case of non-trivial resonance phenomenon.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' The skeleton formulation is then proved uniquely solvable if and only if this holds for (1) and, if this condition is fulfilled, the skeleton formulation is proved to be strongly coercive.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' Although coercivity was already established in [5], we provide in addition an explicit estimate of the coercivity constant in terms of the inf-sup condition of the primary variational formulation.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' Our whole analysis rests on an interpretation of the properties of (1) in terms of a pair of two closed linear manifolds: one that models transmission conditions, and another one that models local wave equations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' Studying properties of operators by means of pairs of closed linear manifolds follows the spirit of [16, iv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content='4 & iv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content='5].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' Like [5], the present contribution is purely theoretical.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' It aims at laying solid analytical foundations for a better understanding of the spectral properties of the skeleton formulation, which is important in the perspective of devising both computationally efficient eigensolvers and domain decomposition preconditionners.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' We do not provide any numerical experiment.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' Such results shall be presented in a forthcoming contribution that will develop a discrete variant of the present analysis, in the spirit of [8, 6].' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' The outline of this article is as follows.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' In the first two sections we introduce general notations for both Hilbert analysis and Sobolev spaces, including trace operators, Dirichlet-to-Neumann maps and harmonic liftings.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' Next we describe the problem under study, specifying precisely the assumptions underlying our analysis, which allows in particular to deal with a variety of boundary conditions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' How to apply this framework for common boundary conditions is illustrated with examples.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' Further notations are introduced for dealing with multi-domain configurations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' This leads in particular to a characterization of transmission conditions based on a non-local exchange operator, see Proposition 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content='3, which had been an important innovation of [5].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' We use this multi-domain formalism to re-express the boundary value problem under study.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' The kernel and the range of this operator are then re-interpreted in terms of a pair of closed linear manifolds.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' One manifold models wave equations local to each subdomain, and 2 the other one models transmission conditions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' Wave equations local to each subdomain are then re-expressed by means of a so-called scattering operator, which we use to finally provide a formulation involving tuples of Robin traces on the skeleton of the subdomain partition.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' This skeleton formulation is proved to systematically admit closed range, and its kernel is put in correspondence with the kernel of the original formulation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' Finally we prove strong coercivity for the skeleton formulation and derive an estimate for the coercivity constant that is explicit with respect to the inf-sup constant of the original variational formulation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' 1 General notation conventions We first set a few general notation conventions regarding analysis in Banach spaces.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' All vector spaces that we are going to consider have C as scalar field.' 
Assuming that H is a Banach space equipped with the norm ∥·∥_H, its topological dual, denoted H*, will systematically be equipped with the norm

∥ϕ∥_{H*} = sup_{v ∈ H∖{0}} |ϕ(v)| / ∥v∥_H.    (2)

The canonical duality pairing will be systematically denoted ⟨·,·⟩ : H* × H → C and defined by ⟨ϕ, v⟩ := ϕ(v). Although the space H does not appear explicitly in the notation "⟨ϕ, v⟩", when such pairing angle brackets are used, it shall be clear from the context which pair of spaces (H, H*) is under consideration. We emphasize that the duality pairings we consider do not involve any complex conjugation. We shall write ⟨v, ϕ⟩ = ⟨ϕ, v⟩ for all v ∈ H, ϕ ∈ H* indifferently. For any subset X ⊂ H, we denote its polar set by

X° := {ϕ ∈ H*, ⟨ϕ, v⟩ = 0 ∀v ∈ X}.    (3)
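To fix ideas, here is a small finite-dimensional numerical illustration of the dual norm (2) and the polar set (3), a sketch we add under the assumption H = Cⁿ with the Euclidean norm. Since the pairing carries no conjugation, the supremum in (2) is attained at v equal to the conjugate of the representing vector.

```python
import numpy as np

# Finite-dimensional illustration of (2)-(3): take H = C^n with the
# Euclidean norm, and let a functional act through the bilinear pairing
# <phi, v> = sum_i phi_i v_i (no complex conjugation, as in the text).
rng = np.random.default_rng(0)
phi = rng.standard_normal(5) + 1j * rng.standard_normal(5)

def pairing(phi, v):              # bilinear pairing, no conjugation
    return np.sum(phi * v)

# Evaluating at v = conj(phi) reaches |phi|_2, which by Cauchy-Schwarz
# is the supremum in (2), so the dual norm of phi equals |phi|_2.
v_opt = np.conj(phi)
dual_norm = abs(pairing(phi, v_opt)) / np.linalg.norm(v_opt)
assert np.isclose(dual_norm, np.linalg.norm(phi))

# A functional in the polar set of X = span{e_1} is obtained by removing
# the e_1-coordinate: it then vanishes on every element of X.
e1 = np.eye(5)[0]
psi = phi - pairing(phi, e1) * e1
assert abs(pairing(psi, e1)) < 1e-12
```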
Assuming that V is another Banach space equipped with the norm ∥·∥_V and that L : H → V is a bounded linear map, we shall refer to its inf-sup constant, denoted and defined as follows:

infsup_{H→V}(L) := inf_{u ∈ H∖{0}} ∥L(u)∥_V / ∥u∥_H.    (4)

In the case where L is invertible, this inf-sup constant equals the inverse of the continuity modulus of L⁻¹. The inf-sup constant is well defined even if L is not invertible, though. The adjoint of the map L : H → V shall be defined as the unique bounded linear map L* : V* → H* satisfying

⟨L*(p), u⟩ := ⟨p, L(u)⟩    (5)

for all p ∈ V* and all u ∈ H. Once again, we insist that no complex conjugation comes into play in (5). The bounded linear map L induces another bounded linear map L̄ : H → V defined by L̄(u) := \overline{L(ū)} for all u ∈ H. A bounded linear operator T : H → H* is called self-adjoint if T = T* and, in this case, we have ⟨T(u), ū⟩ ∈ R for all u ∈ H. It is called positive definite if ⟨T(u), ū⟩ ∈ (0, +∞) for all u ∈ H∖{0}.
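In finite dimension the inf-sup constant (4) is just the smallest singular value, and the characterization through L⁻¹ can be checked directly; the following sketch (our illustration, with Euclidean norms and a real matrix standing in for L) also verifies the conjugation-free adjoint identity (5).

```python
import numpy as np

# Finite-dimensional illustration of (4)-(5): for a square matrix L on
# Euclidean space, the inf-sup constant inf_u |Lu|/|u| is the smallest
# singular value, and when L is invertible it equals 1/|L^{-1}|, the
# inverse of the continuity modulus of L^{-1}.
rng = np.random.default_rng(1)
L = rng.standard_normal((4, 4))   # invertible with probability one

infsup = np.linalg.svd(L, compute_uv=False).min()
assert np.isclose(infsup, 1.0 / np.linalg.norm(np.linalg.inv(L), 2))

# The adjoint of (5) is the plain transpose (no conjugation):
p, u = rng.standard_normal(4), rng.standard_normal(4)
assert np.isclose((L.T @ p) @ u, p @ (L @ u))
```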
If T is both self-adjoint and positive definite, the sesquilinear form (u, v) ↦ ⟨T(u), v̄⟩ induces a scalar product over H and the associated norm is denoted

∥u∥_T := √⟨T(u), ū⟩.    (6)

We shall also consider cartesian products H₁ × ··· × H_J where each H_j is a Banach space equipped with the norm ∥·∥_{H_j}. The cartesian product shall then be equipped with the following canonical norm and duality pairing:

∥v∥²_{H₁×···×H_J} := ∥v₁∥²_{H₁} + ··· + ∥v_J∥²_{H_J},
⟨v, q⟩ := ⟨v₁, q₁⟩ + ··· + ⟨v_J, q_J⟩,    (7)

for v = (v₁, ..., v_J), v_j ∈ H_j, and q = (q₁, ..., q_J), q_j ∈ H*_j.
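A quick numerical illustration of the product conventions (7), a sketch we add with Euclidean factor spaces standing in for the H_j:

```python
import numpy as np

# Illustration of the product conventions (7) with H_1 = R^3, H_2 = R^5
# Euclidean: the canonical product norm is the square root of the sum of
# squared component norms, and the product pairing sums the componentwise
# pairings -- both coincide with norm and pairing of the concatenations.
rng = np.random.default_rng(2)
v = [rng.standard_normal(3), rng.standard_normal(5)]
q = [rng.standard_normal(3), rng.standard_normal(5)]

prod_norm = np.sqrt(sum(np.linalg.norm(vj) ** 2 for vj in v))
assert np.isclose(prod_norm, np.linalg.norm(np.concatenate(v)))

prod_pairing = sum(qj @ vj for qj, vj in zip(q, v))
assert np.isclose(prod_pairing, np.concatenate(q) @ np.concatenate(v))
```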
If V_j, j = 1, ..., J, is another collection of Banach spaces and L_j : H_j → V_j are bounded linear maps, we shall also consider the block-diagonal operator diag(L₁, ..., L_J), mapping H₁ × ··· × H_J into V₁ × ··· × V_J and defined, for v = (v₁, ..., v_J) and q = (q₁, ..., q_J),
by

⟨q, diag(L₁, ..., L_J) v⟩ := ⟨q₁, L₁(v₁)⟩ + ··· + ⟨q_J, L_J(v_J)⟩.

2 Single domain functional setting

Now we need to introduce classical function spaces. For any Lipschitz open set ω ⊂ R^d, we consider

L²(ω) := {v : ω → C measurable, ∥v∥²_{L²(ω)} := ∫_ω |v(x)|² dx < +∞}

and define Sobolev spaces

H¹(ω) := {v ∈ L²(ω), ∇v ∈ L²(ω)^d},
∥v∥²_{H¹(ω)} := ∥∇v∥²_{L²(ω)} + γ⁻² ∥v∥²_{L²(ω)},    (8)

where γ > 0 is a real positive parameter. Incorporating the γ-dependency in the norm will allow us to establish γ-uniform estimates in the sequel. The space H¹₀(ω) will refer to the closure of D(ω) := {ϕ ∈ C^∞(R^d), supp(ϕ) ⊂ ω, supp(ϕ) bounded} for ∥·∥_{H¹(ω)}.
Next we introduce the space of Dirichlet traces H^{1/2}(∂ω) := {v|_{∂ω}, v ∈ H¹(R^d)} equipped with the quotient norm

∥v∥_{H^{1/2}(∂ω)} := min{∥ϕ∥_{H¹(R^d)}, ϕ ∈ H¹(R^d) and ϕ|_{∂ω} = v}.

The topological dual of H^{1/2}(∂ω) will be denoted H^{−1/2}(∂ω) = H^{1/2}(∂ω)*. As detailed for example in [17, Thm. 3.38], the trace map gives rise to a bounded linear operator

B_ω : H¹(ω) → H^{1/2}(∂ω),  B_ω(v) := v|_{∂ω}  ∀v ∈ D(R^d).    (9)

We underline that B_ω refers to the trace taken from the interior of ω. The norm (8) gives rise to a natural right-inverse of this Dirichlet boundary trace operator. We define the harmonic lifting operator B†_ω : H^{1/2}(∂ω) → H¹(ω), see [21, §1.2.2.4],
through norm minimization:

B_ω · B†_ω(v) = v  ∀v ∈ H^{1/2}(∂ω), and
∥B†_ω(v)∥_{H¹(ω)} := min{∥φ∥_{H¹(ω)}, B_ω(φ) = v, φ ∈ H¹(ω)}.    (10)

Denote H¹(Δ, ω) := {v ∈ H¹(ω), Δv ∈ L²(ω)} and let n_ω refer to the unit normal vector field on the boundary ∂ω directed toward the exterior of ω. The Dirichlet trace operator ϕ ↦ ϕ|_{∂ω}, resp. the Neumann trace operator ϕ ↦ n_ω · ∇ϕ|_{∂ω}, can be extended by density as a bounded linear map H¹(ω) → H^{1/2}(∂ω), resp. H¹(Δ, ω) → H^{−1/2}(∂ω), see e.g. [17, Lem. 4.3].
The Dirichlet-to-Neumann (DtN) map T_ω : H^{1/2}(∂ω) → H^{−1/2}(∂ω) is defined as the unique bounded linear operator satisfying

T_ω(φ|_{∂ω}) := n_ω · ∇φ|_{∂ω}  ∀φ ∈ H¹(Δ, ω) satisfying −Δφ + γ⁻²φ = 0 in ω.    (11)

This is a real-valued and self-adjoint operator, T̄_ω = T_ω and T*_ω = T_ω, which induces a scalar product over H^{+1/2}(∂ω), and the Neumann-to-Dirichlet map T⁻¹_ω : H^{−1/2}(∂ω) → H^{+1/2}(∂ω) induces a scalar product over H^{−1/2}(∂ω). We set

∥v∥²_{T_ω} := ⟨T_ω(v), v̄⟩,  ∥p∥²_{T⁻¹_ω} := ⟨T⁻¹_ω(p), p̄⟩.    (12)

It is a well-established fact (see e.g. [21, Def. 1.41] or [23, §6.6.3]) that ∥·∥_{H^{1/2}(∂ω)} and ∥·∥_{H^{−1/2}(∂ω)} are equivalent to the norms (12).
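The map (11) can be made concrete in the simplest one-dimensional case. The following finite-difference sketch (an illustration we add, on ω = (0, 1), not part of the original analysis) assembles the discrete DtN map by eliminating interior unknowns and compares it with the closed-form 1D DtN matrix; the result is symmetric positive definite, matching the scalar-product property (12).

```python
import numpy as np

# Finite-difference sketch of the DtN map (11) on ω = (0, 1): eliminate
# the interior unknowns of -u'' + γ^{-2} u = 0 and read off the map
# sending Dirichlet data (u(0), u(1)) to outward normal derivatives
# (-u'(0), u'(1)).
def discrete_dtn(gamma, n=800):
    h = 1.0 / n
    main = 2.0 / h**2 + gamma**-2
    A = (np.diag(main * np.ones(n - 1))
         - np.diag(np.ones(n - 2) / h**2, 1)
         - np.diag(np.ones(n - 2) / h**2, -1))
    T = np.zeros((2, 2))
    for k, (v0, v1) in enumerate([(1.0, 0.0), (0.0, 1.0)]):
        rhs = np.zeros(n - 1)
        rhs[0], rhs[-1] = v0 / h**2, v1 / h**2
        u = np.concatenate([[v0], np.linalg.solve(A, rhs), [v1]])
        T[0, k] = -(u[1] - u[0]) / h     # outward derivative at x = 0
        T[1, k] = (u[-1] - u[-2]) / h    # outward derivative at x = 1
    return T

# Compare with the closed-form 1D DtN matrix for γ = 1; it is symmetric
# positive definite, consistent with (12).
T = discrete_dtn(1.0)
c, s = np.cosh(1.0), np.sinh(1.0)
T_exact = np.array([[c, -1.0], [-1.0, c]]) / s
assert np.allclose(T, T_exact, atol=1e-2)
assert np.all(np.linalg.eigvalsh((T + T.T) / 2) > 0)
```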
Applying the Euler equation characterizing the harmonic lifting B†_ω(v) as the unique solution to the minimization (10), see e.g. [4, Thm. 7.2-1], we have −ΔB†_ω(v) + γ⁻²B†_ω(v) = 0 in ω, so that T_ω(v) = n_ω · ∇B†_ω(v)|_{∂ω}. We also deduce that ∥φ|_{∂ω}∥_{T_ω} = ∥B†_ω(φ|_{∂ω})∥_{H¹(ω)} ≤ ∥φ∥_{H¹(ω)} for all φ ∈ H¹(ω) and, in particular, we have the inequalities

∥B†_ω(v)∥_{H¹(ω)} = ∥v∥_{T_ω}  ∀v ∈ H^{1/2}(∂ω),
∥B_ω(u)∥_{T_ω} ≤ ∥u∥_{H¹(ω)}  ∀u ∈ H¹(ω).    (13)

3 Single domain variational formulation

The next step in our analysis consists in writing Problem (1) in a variational form able to cope with a variety of boundary conditions. This is why we treat the boundary condition by means of an additional Lagrange parameter.
Let Ω ⊂ R^d be an open bounded Lipschitz set, let Γ := ∂Ω refer to its boundary, and denote H(Ω × Γ) := H¹(Ω) × H^{−1/2}(Γ). Our analysis will start from a variational formulation of (1), later referred to as the primary formulation, that we write: find u ∈ H(Ω × Γ) such that

A_{Ω×Γ}(u) = ℓ_{Ω×Γ},    (14)

where the bilinear map underlying the variational problem is written as a bounded linear operator A_{Ω×Γ} : H(Ω × Γ) → H(Ω × Γ)*, assumed to systematically take the following form: for any u, v ∈ H¹(Ω) and p, q ∈ H^{−1/2}(Γ),

Assumption:  ⟨A_{Ω×Γ}(u, p), (v, q)⟩ := ⟨A_Ω(u), v⟩ + ⟨A_Γ(u|_Γ, p), (v|_Γ, q)⟩.    (A1)

The map A_{Ω×Γ} involves a volume part A_Ω : H¹(Ω) → H¹(Ω)* that accounts for the Helmholtz equation in the interior of the domain Ω. For μ ∈ C and κ : Ω → C an essentially bounded measurable function, it is assumed of the following form:

Assumptions:  ⟨A_Ω(u), v⟩ := ∫_Ω μ⁻¹ ∇u · ∇v − κ² u v dx,  with
ℑm{κ(x)²} ≥ 0 ∀x ∈ Ω,  sup_{x∈Ω} |κ(x)| < ∞,  ℜe{μ} > 0,  ℑm{μ} ≥ 0.    (A2)

The assumptions above imply in particular that ℑm{⟨A_Ω(u), ū⟩} ≤ 0 for all u ∈ H¹(Ω). The operator A_{Ω×Γ} also involves a pure boundary part A_Γ that models boundary conditions,

A_Γ : H_b(Γ) → H_b(Γ)*  where  H_b(Γ) := H^{1/2}(Γ) × H^{−1/2}(Γ).    (15)

The boundary operator A_Γ involves traces on Γ and is chosen in accordance with the boundary conditions of our primary boundary value problem (1).
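The sign property ℑm{⟨A_Ω(u), ū⟩} ≤ 0 stated above follows from (A2) by testing against ū; we spell out this one-line verification for completeness:

```latex
\langle A_\Omega(u), \overline{u}\rangle
  = \int_\Omega \mu^{-1}\,|\nabla u|^2 - \kappa^2\,|u|^2 \, dx,
\qquad\text{with}\qquad
\mathrm{Im}\{\mu^{-1}\} = -\,\mathrm{Im}\{\mu\}/|\mu|^2 \le 0,
\quad
\mathrm{Im}\{-\kappa^2\} \le 0,
```

where ℜe{μ} > 0 guarantees μ ≠ 0, so both terms contribute a non-positive imaginary part.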
We will need to rely on the following additional assumptions:

Assumptions:  i) ℑm{⟨A_Γ(u), ū⟩} ≤ 0  ∀u ∈ H_b(Γ);
ii) range(A_{Ω×Γ}) is closed in H(Ω × Γ)*.    (A3)

In the remainder of this contribution we will almost systematically take (A1)-(A2)-(A3) as assumptions. We do not require that A_{Ω×Γ} = A*_{Ω×Γ}. Let us underline that the assumptions above are fulfilled by A_Ω, A_Γ, A_{Ω×Γ} if and only if they are fulfilled by A*_Ω, A*_Γ, A*_{Ω×Γ} (recall that adjunction does not involve any complex conjugation here). The last hypothesis in (A3) implies (see e.g. [2, Thm. 2.19])

range(A_{Ω×Γ}) = ker(A*_{Ω×Γ})°,    (16)

hence codim(range(A_{Ω×Γ})) = dim(ker(A*_{Ω×Γ})).
The source functional in (14) is assumed to take the similar form ⟨ℓ_{Ω×Γ}, (v, q)⟩ := ⟨ℓ_Ω, v⟩ + ⟨ℓ_Γ, (v|_Γ, q)⟩, where ⟨ℓ_Ω, v⟩ := ∫_Ω f v dx for some f ∈ L²(Ω), and ℓ_Γ ∈ H_b(Γ)* = H^{−1/2}(Γ) × H^{+1/2}(Γ) is chosen in accordance with the boundary condition. Now we consider concrete boundary conditions, exhibit corresponding appropriate choices of A_Γ, and point out how these situations fit the previous assumptions (A1)-(A2)-(A3). Here and in the following, for the sake of conciseness, we shall adopt the notational convention (see (11))

T_Γ := T_{R^d∖Ω̄}.

Example 3.1 (Dirichlet boundary condition). In the case of a Dirichlet boundary condition, we set A_Γ(α, p) := (p, α) and ℓ_Γ := (0, g) for some g ∈ H^{1/2}(Γ). We have ℑm{⟨A_Γ(u), ū⟩} = 0 for all u, which fits i) of (A3).
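This claim can be checked in one line from the bilinearity of the pairing and the conjugation convention of Section 1 (a verification we spell out):

```latex
% For u = (\alpha, p) \in H_b(\Gamma) and A_\Gamma(\alpha, p) = (p, \alpha):
\langle A_\Gamma(\alpha,p), (\overline{\alpha},\overline{p})\rangle
  = \langle p, \overline{\alpha}\rangle + \langle \overline{p}, \alpha\rangle
  = \langle p, \overline{\alpha}\rangle
    + \overline{\langle p, \overline{\alpha}\rangle}
  = 2\,\mathrm{Re}\,\langle p, \overline{\alpha}\rangle \;\in\; \mathbb{R},
```

so the imaginary part vanishes identically.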
Formulation (14) then reduces to a variational formulation of a Helmholtz problem with a Dirichlet condition imposed by means of a Lagrange parameter at the boundary: find u ∈ H¹(Ω), p ∈ H^{−1/2}(Γ) such that

∫_Ω μ⁻¹ ∇u · ∇v − κ² u v dx + ∫_Γ p v dσ = ∫_Ω f v dx  ∀v ∈ H¹(Ω),
∫_Γ u q dσ = ∫_Γ g q dσ  ∀q ∈ H^{−1/2}(Γ).

Whenever there is existence and uniqueness of the solution pair (u, p), then p = −n_Ω · ∇u|_Γ. The conditions in (A2) guarantee that the volume part of this equation is coercive modulo the compact term attached to κ. Hence the operator associated with this system is of Fredholm type with index 0. In particular it has closed range, which fits ii) of (A3).

Example 3.2 (Neumann boundary condition). In the case of Neumann conditions, the boundary data is g ∈ H^{−1/2}(Γ) and we choose A_Γ(α, p) := (0, T⁻¹_Γ p) and ℓ_Γ := (g, 0).
Again we have ℑm{⟨A_Γ(u), ū⟩} = 0 for all u, so this choice also matches i) of (A3). The primary formulation (14) writes: find u ∈ H¹(Ω), p ∈ H^{−1/2}(Γ) such that

∫_Ω μ⁻¹ ∇u · ∇v − κ² u v dx = ∫_Ω f v dx + ∫_Γ g v dσ  ∀v ∈ H¹(Ω),
∫_Γ q T⁻¹_Γ p dσ = 0  ∀q ∈ H^{−1/2}(Γ),    (17)

where u is decoupled from p. Actually we have in particular p = 0, and this variable is not supposed to receive any particular interpretation. Since T⁻¹_Γ : H^{−1/2}(Γ) → H^{1/2}(Γ) is an isomorphism, the operator A_{Ω×Γ} associated with (17) is of Fredholm type with index 0.

Example 3.3 (Robin boundary condition). Consider a bounded linear map Λ : H^{1/2}(Γ) → H^{−1/2}(Γ) that satisfies ℜe{⟨Λ(v), v̄⟩} > 0 for all v ∈ H^{1/2}(Γ)∖{0} (as a typical example: Λ(v) = λv with λ > 0). In this case again the boundary data is g ∈ H^{−1/2}(Γ) and we choose A_Γ(α, p) := (−iΛα, T⁻¹_Γ p) and ℓ_Γ := (g, 0).
This choice of A_Γ corresponds to the boundary condition n_Ω · ∇u|_Γ − iΛ(u) = 0 on Γ. Formulation (14) writes: find u ∈ H¹(Ω), p ∈ H^{−1/2}(Γ) such that

∫_Ω μ⁻¹ ∇u · ∇v − κ² u v dx − i ∫_Γ v Λ(u) dσ = ∫_Ω f v dx + ∫_Γ g v dσ  ∀v ∈ H¹(Ω),
∫_Γ q T⁻¹_Γ p dσ = 0  ∀q ∈ H^{−1/2}(Γ),

which is a variant of (17) involving i ∫_Γ v Λ(u) dσ as an additional term. Again p is decoupled from the rest of the system and p = 0. Again the operator A_{Ω×Γ} associated with this system is of Fredholm type with index 0.

4 Multi-domain functional setting

The boundary value problem (1) has been reformulated as an equivalent global variational problem with (14). As we aim at extending an analytical framework for domain decomposition by substructuring, though, we are going to reshape Formulation (14), adapting it to a multi-domain geometrical configuration. For this, we need to introduce notations adapted to domain decomposition.
Consider a decomposition into a collection of non-overlapping Lipschitz open sets Ω_j ⊂ R^d, j = 1, ..., J, that satisfy

Ω̄ = Ω̄₁ ∪ ··· ∪ Ω̄_J,  with Ω_j ∩ Ω_k = ∅ for j ≠ k.    (18)

Such a decomposition may very well admit a non-trivial wire-basket, i.e. the set of cross points is non-empty, and we wish to underline that this situation is covered by the subsequent analysis. We shall refer to the skeleton of the decomposition by

Σ := ∂Ω₁ ∪ ··· ∪ ∂Ω_J.    (19)

Note that Γ = ∂Ω ⊂ Σ. We need to introduce notations for function spaces adapted to this multi-domain setting.
In this context, cartesian product spaces are probably the most natural, so we set

H_b(Γ) := H^{1/2}(Γ) × H^{−1/2}(Γ),
H(Ω) := H_b(Γ) × H¹(Ω₁) × ··· × H¹(Ω_J),
H(Σ) := H^{1/2}(Γ) × H^{1/2}(∂Ω₁) × ··· × H^{1/2}(∂Ω_J).    (20)

As cartesian products, these spaces are equipped with the norms and duality pairings given by (7). Apart from the boundary terms attached to H_b(Γ), the space H(Ω) should be understood as consisting of functions defined over Ω admitting potential jumps through interfaces. The space H(Σ) consists of tuples of Dirichlet traces. Its dual is H(Σ)* = H^{−1/2}(Γ) × H^{−1/2}(∂Ω₁) × ··· × H^{−1/2}(∂Ω_J). We need to introduce several operators acting in these spaces. First we shall consider the operator T : H(Σ) → H(Σ)* defined as the block-diagonal operator acting locally in each subdomain:
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' , TΩJ) where TΓ := TR\\Ω (21) and each TΩj is defined with (11).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' The norms ∥ · ∥T and ∥ · ∥T−1 defined by (6) and (21) are equivalent to ∥ · ∥H(Σ) and ∥ · ∥H(Σ)∗, which stems from the analogous property being satisfied locally by each TΩj.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' These norms will play an important role in the subsequent analysis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' Next we introduce a boundary trace operator B : H(Ω) → H(Σ) and defined by B := diag(BΓ, BΩ1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' , BΩJ) where BΓ(α, p) := α (22) and each BΩj is the Dirichlet trace operator interior to subdomain Ωj as defined in (9).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' By definition of T we have ∥B(u)∥T ≤ ∥u∥H(Ω) for all u ∈ H(Ω), since a similar inequality is satisfied in each subdomain locally according to (13).' 
We can also form a multi-domain harmonic lifting map B† : H(Σ) → H(Ω) defined as the block-diagonal operator

B† := diag(B†Γ, B†Ω1, . . . , B†ΩJ) where B†Γ(α) := (α, 0) (23)

and each B†Ωj as defined in (10). With this definition we have BB† = Id and B†B is an orthogonal projector in H(Ω).

Finally we also need to consider a restriction operator R : H(Ω×Γ) → H(Ω) that embeds pairs (u, p) ∈ H(Ω×Γ) = H1(Ω) × H−1/2(Γ) into the cartesian product H(Ω) by restricting locally to each subdomain

R(u, p) := ((u|Γ, p), u|Ω1, . . . , u|ΩJ) for u ∈ H1(Ω), p ∈ H−1/2(Γ). (24)

The image of this operator range(R) = R(H(Ω×Γ)) is a particular subspace of H(Ω) spanned by tuples of functions that match through interfaces. This matching property is precisely what characterizes Dirichlet transmission conditions through the interfaces of the decomposition (18). This is why we dedicate notations to this:

X(Ω) := {R(u, p), u ∈ H1(Ω), p ∈ H−1/2(Γ)}
X(Σ) := {B(u), u ∈ X(Ω)}
X(Σ)◦ := {p ∈ H(Σ)∗, ⟨p, v⟩ = 0 ∀v ∈ X(Σ)}. (25)

A rapid inspection of the previous definitions shows that X(Σ) = {(u|Γ, u|∂Ω1, . . . , u|∂ΩJ), u ∈ H1(Ω)}, i.e. these are the tuples of Dirichlet traces that match through interfaces.
The space X(Σ) (resp. X(Ω)) is a closed subspace of H(Σ) (resp. H(Ω)) that encodes the Dirichlet transmission conditions through interfaces, while X(Σ)◦ is a closed subspace of H(Σ)∗ that encodes the Neumann transmission conditions. Indeed, considering restriction to interfaces in the sense of distributions,

(v0, . . . , vJ) ∈ X(Σ) =⇒ vj = +vk on Γj ∩ Γk,
(p0, . . . , pJ) ∈ X(Σ)◦ =⇒ pj = −pk on Γj ∩ Γk. (26)

It is clear from these definitions that X(Ω) = {u ∈ H(Ω), B(u) ∈ X(Σ)}.
In particular ker(B) ⊂ X(Ω). Recall the definition of polar sets given by (3). The following lemma is a continuous counterpart to [6, Lem. 2.1].

Lemma 4.1.
i) ker(B)◦ = range(B∗)
ii) ker(B∗) = {0}
iii) X(Ω) = B−1(X(Σ))
iv) X(Ω)◦ = B∗(X(Σ)◦)

Proof: The first and second results are direct consequences of the surjectivity of the trace map B : H(Ω) → H(Σ) combined with Theorems 4.7, 4.12 and 4.15 of [22]. The third result is a rephrasing of X(Ω) = {u ∈ H(Ω), B(u) ∈ X(Σ)} in condensed form. To prove the last result, first observe that B∗(X(Σ)◦) ⊂ X(Ω)◦ by routine verifications. Now pick an arbitrary p ∈ X(Ω)◦. Since ker(B) ⊂ X(Ω) ⇒ X(Ω)◦ ⊂ ker(B)◦ = range(B∗), there exists q ∈ H(Σ)∗ such that p = B∗q. For any v ∈ X(Σ), there exists u ∈ X(Ω) such that v = B(u), which implies that ⟨q, v⟩ = ⟨p, u⟩ = 0. From this we conclude that q ∈ X(Σ)◦ hence p ∈ B∗(X(Σ)◦), which proves X(Ω)◦ ⊂ B∗(X(Σ)◦). □

In Item iii) of the lemma above, B−1(X(Σ)) = {u ∈ H(Ω), B(u) ∈ X(Σ)} refers to a pre-image (the operator B is obviously non-invertible, i.e. ker(B) ̸= {0}).
The following orthogonal decomposition was established in [17, Prop. 4.2].

Proposition 4.2. We have H(Σ)∗ = X(Σ)◦ ⊕ T(X(Σ)) and this decomposition is T−1-orthogonal.

The orthogonal decomposition of the previous result can be used to elaborate a characterization of transmission conditions. The following result was established in [17, Prop. 5.4].

Proposition 4.3.
Let Q : H(Σ)∗ → H(Σ)∗ refer to the T−1-orthogonal projection onto T(X(Σ)). Then the operator Π := 2Q − Id is a T−1-isometric involution, i.e. Π² = Id and ∥Π(q)∥T−1 = ∥q∥T−1 for all q ∈ H(Σ)∗. Moreover, for any pair (u, p) ∈ H(Σ) × H(Σ)∗, we have

(u, p) ∈ X(Σ) × X(Σ)◦ ⇐⇒ −p + iT(u) = Π(p + iT(u)). (27)

The characterization above relies on an exchange operator Π which is characteristic of Optimized Schwarz Methods (OSM, see e.g. [1, Eq. 37]) and ultra-weak variational formulations (UWVF), see e.g. [3, Eq. 1.19]. An explicit expression of this operator in terms of double layer potentials attached to the equation −∆ + γ−2 was provided in [5, §5.2].

5 Multi-domain variational formulation

Using the notations introduced in the previous sections, we now rewrite the primary formulation (14), decomposing it according to the subdomain partition (18). Pick u, v arbitrarily in H1(Ω) and expand the integral coming into play in the definition (A2) of AΩ. This leads to

⟨AΩu, v⟩ = ⟨AΩ1(u|Ω1), v|Ω1⟩ + · · · + ⟨AΩJ(u|ΩJ), v|ΩJ⟩ with ⟨AΩju, v⟩ := ∫Ωj (µ−1∇u · ∇v − κ²uv) dx (28)

In the expression above only u|Ωj, v|Ωj ∈ H1(Ωj) come into play in the term attached to Ωj. The source term in (14) can be decomposed in a similar manner: ℓΩ(v) = ℓΩ1(v|Ω1) + · · · + ℓΩJ(v|ΩJ). The above decompositions lead to introducing a block-diagonal operator A : H(Ω) → H(Ω)∗ associated to these local bilinear forms, i.e. defined by

A := diag(AΓ, AΩ1, . . . , AΩJ) so that AΩ×Γ = R∗AR. (29)

We have factorized the operator of our primary boundary value problem AΩ×Γ, and this factorization is interesting from the perspective of domain decomposition because local subproblems are disconnected from one another in A. The following property is inherited from the assumptions we made in §3 about AΩ×Γ, µ, κ and AΓ:

ℑm{⟨A(u), u⟩} ≤ 0 ∀u ∈ H(Ω). (30)

We also need a unique solvability property for local problems with impedance boundary condition. Because we do not make many specific assumptions regarding the boundary operator AΓ, we take this further property as an assumption:

Assumption: A − iB∗TB : H(Ω) → H(Ω)∗ is an isomorphism. (A4)

A notable consequence of (A2), (A3) and (A4) is that ker(A) ∩ ker(B) = {0}. Since A, T and B are subdomain-wise block-diagonal, the assumption above is actually equivalent to imposing that each AΩj − iB∗ΩjTΩjBΩj : H(Ωj) → H(Ωj)∗ and AΓ − iB∗ΓTΓBΓ : Hb(Γ) → Hb(Γ)∗ are isomorphisms. These conditions are fulfilled in many concrete circumstances. As regards interior contributions, for example, we have the following simple consequence of the unique continuation principle.
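Before turning to local solvability, the algebra of Proposition 4.3 can be illustrated in a finite-dimensional sketch. Below, T is replaced by an arbitrary SPD matrix, X(Σ) by an arbitrary subspace of Rⁿ spanned by the columns of a matrix W, and Q by the matrix of the T−1-orthogonal projection onto T(X); all names (T, W, Q, Pi) are stand-ins for this toy model, not the trace-space operators of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 3

M = rng.standard_normal((n, n))
T = M @ M.T + n * np.eye(n)          # SPD stand-in for T
Tinv = np.linalg.inv(T)

W = rng.standard_normal((n, k))      # columns span the subspace X
V = T @ W                            # columns span T(X)

# T^{-1}-orthogonal projection onto T(X): Q = V (V^T T^{-1} V)^{-1} V^T T^{-1}
Q = V @ np.linalg.solve(V.T @ Tinv @ V, V.T @ Tinv)
Pi = 2 * Q - np.eye(n)               # exchange operator Pi = 2Q - Id

def norm_Tinv(q):
    # ||q||_{T^{-1}} = sqrt(q* T^{-1} q), real since T^{-1} is SPD
    return np.sqrt(np.real(np.conj(q) @ Tinv @ q))

q = rng.standard_normal(n) + 1j * rng.standard_normal(n)
assert np.allclose(Pi @ Pi, np.eye(n))              # involution: Pi^2 = Id
assert np.isclose(norm_Tinv(Pi @ q), norm_Tinv(q))  # T^{-1}-isometry

# Characterization (27): for u in X and p in the polar X°,
# -p + i T u = Pi (p + i T u).
u = W @ rng.standard_normal(k)                 # u in X
p = rng.standard_normal(n)
p -= W @ np.linalg.solve(W.T @ W, W.T @ p)     # project p onto X° (= X-perp here)
assert np.allclose(-p + 1j * T @ u, Pi @ (p + 1j * T @ u))
```

The involution and isometry follow from Q being idempotent and self-adjoint with respect to the T−1 inner product, which is exactly what the projection formula enforces.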
Lemma 5.1. Assume (A1)-(A2) and that µ, κ are constants (i.e. do not depend on x). Then for any j = 1, . . . , J the operator AΩj − iB∗ΩjTΩjBΩj : H(Ωj) → H(Ωj)∗ is an isomorphism.

Proof: Let us denote ω = Ωj for the sake of conciseness. According to (A2), there exists α > 0 such that

α∥u∥²H1(ω) ≤ ℜe{⟨˜Aω(u), u⟩} ∀u ∈ H1(ω), where ⟨˜Aω(u), v⟩ := ⟨(Aω − iB∗ωTωBω)u, v⟩ + ∫ω (1 + κ²)uv dx.

Applying Lax-Milgram's lemma, we see that the operator ˜Aω : H(ω) → H(ω)∗ is an isomorphism hence, since it differs by a compact perturbation, that Aω − iB∗ωTωBω is of Fredholm type with index 0, see e.g. [17, Chap. 2]. There only remains to prove that ker(Aω − iB∗ωTωBω) = {0}. Pick any u ∈ H1(ω) such that (Aω − iB∗ωTωBω)u = 0. Then we have ∥Bω(u)∥²Tω ≤ −ℑm{⟨(Aω − iB∗ωTωBω)u, u⟩} = 0. From this we conclude that u|∂ω = Bω(u) = 0 hence Aω(u) = 0. On the other hand Aω(u) = 0 ⇒ nω · ∇u|∂ω = 0. There only remains to apply the unique continuation principle, see e.g. Lemma 2.2 in [24], to conclude that u = 0 in ω. □

Regarding classical boundary conditions and the associated choice of AΓ, we can also examine the invertibility of AΓ − iB∗ΓTΓBΓ.

Example 5.2 (Dirichlet condition). Taking the same notations as in Example 3.1, in this situation we have the expression (AΓ − iB∗ΓTΓBΓ)(α, p) = (p − iTΓα, α). We conclude that AΓ − iB∗ΓTΓBΓ is continuously invertible with (AΓ − iB∗ΓTΓBΓ)−1(p, α) = (α, p + iTΓα).

Example 5.3 (Neumann condition). Taking the same notations as in Example 3.2, we have (AΓ − iB∗ΓTΓBΓ)(α, p) = (−iTΓα, T−1Γ p). We conclude that AΓ − iB∗ΓTΓBΓ is continuously invertible with (AΓ − iB∗ΓTΓBΓ)−1(p, α) = (iT−1Γ p, TΓα).

Example 5.4 (Robin condition). Taking the same notations as in Example 3.3, we have (AΓ − iB∗ΓTΓBΓ)(α, p) = (−i(Λ + TΓ)α, T−1Γ p). Because ℜe{⟨Λ(α), α⟩} > 0 for all α ∈ H1/2(Γ), we see that Λ + TΓ is coercive hence invertible, and AΓ − iB∗ΓTΓBΓ is then continuously invertible with (AΓ − iB∗ΓTΓBΓ)−1(p, α) = (i(Λ + TΓ)−1p, TΓα).
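As a sanity check on Examples 5.2 and 5.3, the stated inverse formulas compose with the forward maps to the identity. This can be verified in a toy model where TΓ is replaced by an arbitrary SPD matrix (a sketch under that assumption, not the actual trace-space operator):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
M = rng.standard_normal((n, n))
T = M @ M.T + n * np.eye(n)          # SPD stand-in for T_Gamma
Tinv = np.linalg.inv(T)

alpha = rng.standard_normal(n) + 1j * rng.standard_normal(n)
p = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Dirichlet case: (alpha, p) -> (p - i T alpha, alpha),
# claimed inverse: (q1, q2) -> (q2, q1 + i T q2)
q1, q2 = p - 1j * T @ alpha, alpha
r1, r2 = q2, q1 + 1j * T @ q2
assert np.allclose(r1, alpha) and np.allclose(r2, p)

# Neumann case: (alpha, p) -> (-i T alpha, Tinv p),
# claimed inverse: (q1, q2) -> (i Tinv q1, T q2)
q1, q2 = -1j * T @ alpha, Tinv @ p
r1, r2 = 1j * Tinv @ q1, T @ q2
assert np.allclose(r1, alpha) and np.allclose(r2, p)
```

In both cases the inverse simply swaps the two slots and undoes the factor ∓iTΓ, which is why the composition cancels exactly.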
Similarly to what precedes, define ℓ ∈ H(Ω)∗ by ⟨ℓ, v⟩ = ℓΓ(v0, q) + ℓΩ1(v1) + · · · + ℓΩJ(vJ), and we have ℓΩ×Γ = R∗ℓ. The primary variational problem (14) can then be rewritten by means of A as follows: find u ∈ H(Ω × Γ) such that ⟨AR(u), R(v)⟩ = ⟨ℓ, R(v)⟩ for all v ∈ H(Ω × Γ). Making use of the definition of X(Ω) as the image of R, see (25), this also rewrites:

u ∈ X(Ω) and ⟨A(u), v⟩ = ⟨ℓ, v⟩ ∀v ∈ X(Ω). (31)

6 Closed linear manifolds interpretation

Formulation (14), which is the starting point of this study, is not assumed to be a priori uniquely solvable. The kernel of AΩ×Γ might be non-trivial. In many relevant applications though, it is of Fredholm type, and this is why we are interested in studying how this Fredholmness carries over in the multi-domain context. For this we are going to consider the skew-symmetric bilinear form [·, ·] : (H(Σ) × H(Σ)∗)² → C defined by

[(u, p), (v, q)] := ⟨u, q⟩ − ⟨v, p⟩, u, v ∈ H(Σ), p, q ∈ H(Σ)∗. (32)

This form is obviously non-degenerate and can be used as a duality pairing over the space of tuples of Dirichlet-Neumann pairs of traces. Indeed, denote ℍ(Σ) := H(Σ) × H(Σ)∗ equipped with the norm ∥(v, q)∥²T×T−1 := ∥v∥²T + ∥q∥²T−1; then for any ϕ ∈ ℍ(Σ)∗, there exists a unique u ∈ ℍ(Σ) such that [u, v] = ϕ(v) ∀v ∈ ℍ(Σ).
In other words, the pairing (32) puts ℍ(Σ) in self-duality. We now introduce the subspace of so-called Cauchy data that directly relates to the boundary value problem under study,

C(A) := {(B(u), p) | (u, p) ∈ H(Ω) × H(Σ)∗, Au = B∗p}. (33)

It must be understood as the space of tuples of Dirichlet-Neumann trace pairs stemming from solutions to the problems local to each subdomain. If A : H(Ω) → H(Ω)∗ is an isomorphism, we can define the associated Neumann-to-Dirichlet operator NtDA := BA−1B∗, and then C(A) = {(NtDA(p), p) | p ∈ H(Σ)∗} appears to be the graph of it. On the other hand C(A) is properly defined even if A fails to be invertible.

Lemma 6.1. Assume (A1)-(A2)-(A3)-(A4). The application (v, p) ↦ p − iT(v) continuously and isomorphically maps C(A) into H(Σ)∗ and, for all (v, p) ∈ C(A), satisfies the estimates

∥v∥²T + ∥p∥²T−1 ≤ ∥p − iTv∥²T−1 and ½∥p − iTv∥²T−1 ≤ ∥v∥²T + ∥p∥²T−1.
Proof: It suffices to prove surjectivity and the estimates. To prove surjectivity, pick an arbitrary q ∈ H(Σ)∗ and define u = (A − iB∗TB)⁻¹B∗q. The pair (v, p) = (B(u), q + iTB(u)) satisfies Au = B∗p, so that (v, p) ∈ 𝒞(A) and, by construction, we have p − iTv = q. To prove the estimates, pick an arbitrary pair (v, p) ∈ 𝒞(A). According to (33) there exists u ∈ H(Ω) such that B(u) = v and A(u) = B∗(p), hence ⟨p, v⟩ = ⟨p, B(u)⟩ = ⟨B∗(p), u⟩ = ⟨A(u), u⟩. Taking account of (30), we deduce 0 ≤ ℜe{i⟨p, v⟩} ≤ ∥v∥²_T + ∥p∥²_{T⁻¹} and conclude

0 ≤ ∥p − iTv∥²_{T⁻¹} − (∥v∥²_T + ∥p∥²_{T⁻¹}) ≤ ∥v∥²_T + ∥p∥²_{T⁻¹}. □

In the previous lemma, the space of Cauchy data has been proven boundedly isomorphic to a Hilbert space and, as such, is closed.

Corollary 6.2.
Assume (A1)-(A2)-(A3)-(A4). The subspace 𝒞(A) is closed in ℋ(Σ).

The space of Cauchy data can be complemented in various ways. The next proposition exhibits one possibility.

Proposition 6.3. Assume (A1)-(A2)-(A3)-(A4). Define 𝒢(iT) := {(v, iT(v)), v ∈ H(Σ)}. Then ℋ(Σ) = 𝒞(A) ⊕ 𝒢(iT).

Proof: First of all, assume that (u, p) ∈ 𝒞(A) ∩ 𝒢(iT). This means that there exists v ∈ H(Ω) such that Av = B∗p and Bv = u, and that p = iTu.
Combining these equations yields (A − iB∗TB)v = 0, hence v = 0 according to Lemma 5.1, and finally (u, p) = 0. We have proved that 𝒞(A) ∩ 𝒢(iT) = {0}.

Now take an arbitrary (u, p) ∈ H(Σ) × H(Σ)∗. Since B : H(Ω) → H(Σ) is surjective, there exists w ∈ H(Ω) such that B(w) = u. Define v ∈ H(Ω) by v = (A − iB∗TB)⁻¹(Aw − B∗p), which is a valid definition since A − iB∗TB : H(Ω) → H(Ω)∗ is an isomorphism according to Lemma 5.1. We have in particular A(w − v) = B∗(p − iTBv). Set

u₁ = B(v),         p₁ = iTu₁ = iTB(v),
u₂ = B(w − v) = u − u₁,  p₂ = p − iTBv = p − p₁. (34)

By construction we have (u₁, p₁) ∈ 𝒢(iT).
Moreover B(w − v) = u₂ and A(w − v) = B∗p₂, so that (u₂, p₂) ∈ 𝒞(A). Finally, the second line in (34) indicates that (u, p) = (u₁, p₁) + (u₂, p₂), which thus proves (u, p) ∈ 𝒞(A) + 𝒢(iT). We have just established that 𝒞(A) + 𝒢(iT) = H(Σ) ⊕ H(Σ)∗, which ends the proof. □

The space 𝒢(iT) is simply the graph of the (bounded) operator iT : H(Σ) → H(Σ)∗. In the present analysis, it plays a secondary role and shall be used only to prove results about 𝒞(A). We have the following immediate result.

Lemma 6.4. Define 𝒢(iT)♯ := {u ∈ ℋ(Σ), [u, v] = 0 ∀v ∈ 𝒢(iT)}. Then 𝒢(iT)♯ = 𝒢(iT).
The proof is straightforward. This result means that 𝒢(iT) is its own polar set under the pairing [·, ·]. As we see now, the space 𝒞(A) fulfills a similar property.

Proposition 6.5. Assume (A1)-(A2)-(A3)-(A4). Define 𝒞(A)♯ := {u ∈ ℋ(Σ), [u, v] = 0 ∀v ∈ 𝒞(A)}. Then 𝒞(A)♯ = 𝒞(A∗).

Proof: First of all, we have 𝒞(A∗) ⊂ 𝒞(A)♯. Indeed, take any (u, p) ∈ 𝒞(A). By definition, there exists w ∈ H(Ω) such that B(w) = u and Aw = B∗p.
Then for any (u′, p′) ∈ 𝒞(A∗), since B(w′) = u′ and A∗w′ = B∗p′ for some w′ ∈ H(Ω), we have

[(u, p), (u′, p′)] = ⟨u, p′⟩ − ⟨u′, p⟩ = ⟨B(w), p′⟩ − ⟨B(w′), p⟩ = ⟨w, B∗(p′)⟩ − ⟨w′, B∗(p)⟩ = ⟨w, A∗(w′)⟩ − ⟨w′, A(w)⟩ = 0.

Hence, to finish the proof, we need to show that 𝒞(A)♯ ⊂ 𝒞(A∗). For that, pick an arbitrary u = (u, p) ∈ 𝒞(A)♯. The hypotheses of Section 3 hold for A∗_{Ω×Γ} instead of A_{Ω×Γ}, hence we can apply Proposition 6.3 to A∗. This yields a decomposition u = u₁ + u₂ for some u₁ ∈ 𝒞(A∗) and some u₂ ∈ 𝒢(iT). We have to prove that u₂ = 0. By assumption we have 0 = [u, v] = [u₁, v] + [u₂, v] = [u₂, v] ∀v ∈ 𝒞(A), since 𝒞(A) ⊂ 𝒞(A∗)♯. Next, Lemma 6.4 implies that 0 = [u₂, v] = [u₂, v + v′] for all v ∈ 𝒞(A) and all v′ ∈ 𝒢(iT). Since 𝒞(A) ⊕ 𝒢(iT) = ℋ(Σ) according to Proposition 6.3, we conclude that 0 = [u₂, w] ∀w ∈ ℋ(Σ), hence finally u₂ = 0. This shows that u = u₁ ∈ 𝒞(A∗). We have just established that 𝒞(A)♯ ⊂ 𝒞(A∗). □

We point out that, because 𝒞(A) is closed, the previous result also implies that 𝒞(A) = 𝒞(A∗)♯. Self-polarity appears to be a property of the following subspace (see Proposition 4.3) that is pivotal in characterizing transmission conditions:

𝒳(Σ) := X(Σ) × X(Σ)◦.

Indeed we have 𝒳(Σ) = 𝒳(Σ)♯ := {u ∈ ℋ(Σ), [u, v] = 0 ∀v ∈ 𝒳(Σ)} by the very definition of 𝒳(Σ), as X(Σ)◦◦ = X(Σ) since X(Σ) is a closed subspace of H(Σ) (see e.g. [22, Thm. 4.7] or [2, Prop. 1.9]).

The next result establishes an important connection between the two spaces 𝒞(A), 𝒳(Σ) and our primary boundary value problem (14).

Proposition 6.6. Assume (A1)-(A2)-(A3)-(A4). The operator u ↦ (BR(u), (B†)∗AR(u)) continuously and isomorphically maps ker(A_{Ω×Γ}) onto 𝒞(A) ∩ 𝒳(Σ). As a consequence, dim(ker(A_{Ω×Γ})) = dim(𝒞(A) ∩ 𝒳(Σ)).
Proof: Let u ∈ H(Ω×Γ) satisfy A_{Ω×Γ}(u) = 0. In particular R(u) ∈ X(Ω) and AR(u) ∈ X(Ω)◦, see (24) and (31). According to iv) of Lemma 4.1, there exists p ∈ X(Σ)◦ such that AR(u) = B∗p, and it is unique since B∗ : H(Σ)∗ → H(Ω)∗ is injective. We have (B†)∗AR(u) = (B†)∗B∗p = (BB†)∗p = p. Setting v := B·R(u), by construction (v, p) ∈ 𝒞(A). We also have v ∈ X(Σ) since R(u) ∈ X(Ω), so that (v, p) ∈ X(Σ) × X(Σ)◦ = 𝒳(Σ). In addition, the formula (v, p) = (BRu, (B†)∗ARu) establishes the continuous dependence of (v, p) on u.

Reciprocally, consider an arbitrary pair (v, p) ∈ 𝒞(A) ∩ 𝒳(Σ).
Since (v, p) ∈ 𝒞(A), there exists w ∈ H(Ω) such that Aw = B∗p and B(w) = v, and such a w is unique since ker(A) ∩ ker(B) = {0}, according to Lemma 5.1. As v ∈ X(Σ), we have w ∈ X(Ω) = B⁻¹(X(Σ)) according to iii) of Lemma 4.1, so there exists u ∈ H(Ω × Γ) such that R(u) = w, and such a u is unique due to the injectivity of R : H(Ω × Γ) → H(Ω). This leads to AR(u) = B∗p and p ∈ X(Σ)◦ ⇒ B∗p ∈ X(Ω)◦ = ker(R∗). Since X(Ω) = R(H(Ω × Γ)), we conclude that 0 = R∗AR(u) = A_{Ω×Γ}(u). □

Lemma 6.7. Assume (A1)-(A2)-(A3)-(A4). The operator (u, p) ↦ R∗(B∗p − AB†u) continuously maps (𝒞(A∗) ∩ 𝒳(Σ))♯ into range(A_{Ω×Γ}).
Proof: Take an arbitrary (u, p) ∈ (𝒞(A∗) ∩ 𝒳(Σ))♯ and set f = R∗(B∗p − AB†u). Applying Proposition 6.6 to A∗_{Ω×Γ} instead of A_{Ω×Γ} shows that

ϕ ∈ ker(A∗_{Ω×Γ}) ⇒ (v, q) = (BR(ϕ), (B†)∗A∗R(ϕ)) ∈ 𝒞(A∗) ∩ 𝒳(Σ).

Hence ⟨f, ϕ⟩ = ⟨R∗(B∗p − AB†u), ϕ⟩ = ⟨p, BRϕ⟩ − ⟨u, (B†)∗A∗Rϕ⟩ = [(v, q), (u, p)] = 0. This proves f ∈ ker(A∗_{Ω×Γ})◦ = range(A_{Ω×Γ}) according to (16). □

Proposition 6.8. Assume (A1)-(A2)-(A3)-(A4). Then 𝒞(A) + 𝒳(Σ) = (𝒞(A∗) ∩ 𝒳(Σ))♯. In particular, the subspace 𝒞(A) + 𝒳(Σ) is closed in ℋ(Σ).
Proof: Clearly we have 𝒞(A) + 𝒳(Σ) ⊂ (𝒞(A∗) ∩ 𝒳(Σ))♯, so we only need to establish that (𝒞(A∗) ∩ 𝒳(Σ))♯ ⊂ 𝒞(A) + 𝒳(Σ). Pick any pair (p_d, p_n) ∈ (𝒞(A∗) ∩ 𝒳(Σ))♯. According to Lemma 6.7 we have R∗(B∗p_n − AB†p_d) ∈ range(A_{Ω×Γ}). Applying the definition of A given by (29), there exists ϕ ∈ X(Ω) satisfying ⟨Aϕ, w⟩ = ⟨B∗p_n − AB†p_d, w⟩ for all w ∈ X(Ω). Set φ = ϕ + B†(p_d) and u_d = B(φ) = B(ϕ) + p_d. By construction, ⟨A(φ), w⟩ = ⟨p_n, B(w)⟩ = 0 ∀w ∈ ker(B) ⊂ X(Ω), which rewrites as A(φ) ∈ ker(B)◦. Applying i) of Lemma 4.1, we have Aφ = B∗u_n for some u_n ∈ H(Σ)∗. This implies in particular u_n = (BB†)∗u_n = (B†)∗B∗u_n = (B†)∗Aφ.
We have Aφ = B∗u_n and Bφ = u_d, hence (u_d, u_n) ∈ 𝒞(A). On the other hand, p_d − u_d = −Bϕ ∈ X(Σ) since ϕ ∈ X(Ω) and, for any w ∈ X(Σ), we have B†(w) ∈ X(Ω), hence ⟨p_n − u_n, w⟩ = ⟨Aφ, B†w⟩ − ⟨Aφ, B†w⟩ = 0, which implies p_n − u_n ∈ X(Σ)◦. Finally, (u_d, u_n) ∈ 𝒞(A) and (p_d, p_n) − (u_d, u_n) ∈ 𝒳(Σ) imply that (p_d, p_n) ∈ 𝒞(A) + 𝒳(Σ). □

Corollary 6.9. Assume (A1)-(A2)-(A3)-(A4). Then codim(𝒞(A) + 𝒳(Σ)) = codim(range(A_{Ω×Γ})).

Proof: We have (𝒞(A) + 𝒳(Σ))♯ = 𝒞(A)♯ ∩ 𝒳(Σ)♯, see e.g. [2, Prop. 2.14]. According to Proposition 6.5 applied to A∗, and since 𝒳(Σ)♯ = 𝒳(Σ) by construction, we conclude that (𝒞(A) + 𝒳(Σ))♯ = 𝒞(A∗) ∩ 𝒳(Σ). As the bilinear pairing [·, ·] is non-degenerate and 𝒞(A) + 𝒳(Σ) is closed according to Proposition 6.8, we conclude codim(𝒞(A) + 𝒳(Σ)) = dim((𝒞(A) + 𝒳(Σ))♯) = dim(𝒞(A∗) ∩ 𝒳(Σ)). It only remains to apply Proposition 6.6 to A∗_{Ω×Γ} combined with (16). □

7 Scattering operator

Propositions 6.6 and 6.8 and Corollary 6.9 above show that the kernel and the range of A_{Ω×Γ} are closely related to the pair of subspaces 𝒞(A), 𝒳(Σ). This can be exploited to study other formulations of the same boundary value problem.

Proposition 7.1. Assume (A1)-(A2)-(A3)-(A4). If u ∈ X(Ω) satisfies (31), then there exists a unique p ∈ H(Σ)∗ such that the pair (u, p) satisfies

u ∈ H(Ω), p ∈ H(Σ)∗,
Au − B∗p = ℓ,
−p + iTBu = Π(p + iTBu). (35)

Reciprocally, if the pair (u, p) ∈ H(Ω) × H(Σ)∗ satisfies (35), then u satisfies (31).

Proof: Assume first that u ∈ X(Ω) satisfies (31). This formulation rewrites equivalently as Au − ℓ ∈ X(Ω)◦. Since X(Ω)◦ = B∗(X(Σ)◦) according to iv) of Lemma 4.1, and as B∗ : H(Σ)∗ → H(Ω)∗ is injective (B is surjective), there exists a unique p ∈ X(Σ)◦ such that Au − ℓ = B∗p. On the other hand, u ∈ X(Ω) ⇒ B(u) ∈ X(Σ) according to iii) of Lemma 4.1. Finally, applying Proposition 4.3, we obtain −p + iTBu = Π(p + iTBu).

Reciprocally, assume that (35) holds. Then, according to Proposition 4.3, we have p ∈ X(Σ)◦ and B(u) ∈ X(Σ). Moreover, we have B(u) ∈ X(Σ) ⇒ u ∈ X(Ω) according to iii) of Lemma 4.1.
Since p ∈ X(Σ)◦, we have B∗p ∈ X(Ω)◦ so that, for any v ∈ X(Ω), we have 0 = ⟨B∗p, v⟩ = ⟨Au − ℓ, v⟩. To sum up, we have proved that u ∈ X(Ω) and ⟨Au, v⟩ = ⟨ℓ, v⟩ ∀v ∈ X(Ω). □

In a domain decomposition context, a substructuring strategy applied to Problem (14) naturally leads to eliminating the volume unknowns in (35). This is performed by means of a scattering map that takes ingoing traces as input and returns outgoing traces as output.

Proposition 7.2. Assume (A1)-(A2)-(A3)-(A4). There exists a unique bounded linear map S : H(Σ)∗ → H(Σ)∗, later referred to as the scattering operator, satisfying

p + iTv = S(p − iTv) ∀(v, p) ∈ 𝒞(A). (36)

It is also given by the formula S = Id + 2iTB(A − iB∗TB)⁻¹B∗.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' It is T−1-contractive and, for any q ∈ H(Σ)∗, satisfies ∥S(q)∥2 T−1 + 4|ℑm{⟨A(u), u⟩}| = ∥q∥2 T−1 where u = (A − iB∗TB)−1B∗q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' Proof: We follow the proof pattern presented e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' in [6, Lem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content='5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content='2].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' First of all, Identity (36) clearly and unambiguously defines the operator S as a linear map according to Lemma 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' Next, pick an arbitrary q ∈ H(Σ)∗ and set u = (A − iB∗TB)−1B∗q and p = q + iTB(u).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' We have Au − B∗p = 0 and q = p − iTB(u) and S(q) = p + iTB(u) = q + 2iTB(u), which leads to S(q) = (Id + 2iTB(A − iB∗TB)−1B∗)q.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' Finally developing the squared norm, and taking account of (30), we have ∥S(q)∥2 T−1 = ∥p + iTB(u)∥2 T−1 = ∥p − iTB(u)∥2 T−1 + 4ℑm{⟨q, B(u)⟩} + 4∥B(u)∥2 T = ∥q∥2 T−1 + 4ℑm{⟨B∗(q), u⟩} + 4∥B(u)∥2 T = ∥q∥2 T−1 + 4ℑm{⟨A(u), u⟩} − 4ℑm{i⟨B∗TB(u), u⟩} + 4∥B(u)∥2 T = ∥q∥2 T−1 − 4|ℑm{⟨A(u), u⟩}| □ The space of Cauchy data was used to characterize the scattering operator.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' Reciprocally, the scattering operator provides a characterization of the space of Cauchy data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' The following result should be compared with (27).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' 16 Lemma 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content='3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' Assume (A1)-(A2)-(A3)-(A4).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' For any (v, p) ∈ H (Σ) we have: (v, p) ∈ C (A) ⇐⇒ p + iTv = S(p − iTv).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' Proof: From the very definition of the scattering operator in Proposition 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content='2, it is clear that (v, p) ∈ C (A) ⇒ p + iTv = S(p − iTv).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' Reciprocally pick arbitrarily some (v, p) ∈ H (Σ) such that p + iTv = S(p − iTv).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' We know from Proposition 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content='3 that there exists v′ ∈ H(Σ) such that (v − v′, p − iTv′) ∈ C (A) so applying Proposition 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content='2 we obtain (p − iTv′) + iT(v − v′) = S( (p − iTv′) − iT(v − v′) ) ⇐⇒ p + iTv − 2iTv′ = S(p − iTv) ⇐⇒ 2iTv′ = 0 =⇒ v′ = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' □ The scattering operator has a subdomain-wise block diagonal structure.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' This is clearly visible from the formula S = Id + 2iTB(A − iB∗TB)−1B∗ where each term in the right hand side is block diagonal.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' This yields S = diag(SΓ, SΩ1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' .' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' , SΩJ) where SΩj = Id + 2iTΩjBΩj(AΩj − iB∗ ΩjTΩjBΩj)−1B∗ Ωj where SΓ = Id + 2iTΓBΓ(AΓ − iB∗ ΓTΓBΓ)−1B∗ Γ Let us discuss the particular form that takes the boundary scattering operator SΓ for Dirichlet, Neumann and Robin conditions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' Recall that BΓ : Hb(Γ) := H1/2(Γ) × H−1/2(Γ) → H1/2(Γ) is defined by BΓ(α, p) = α hence B∗ Γ(p) = (p, 0).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' Example 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content='4 (Dirichlet condition).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' Taking the same notations as in Example 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content='1 and 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content='2, since B∗ Γp = (p, 0) for all p ∈ H−1/2(Γ), we conclude that BΓ(AΓ − iB∗ ΓTΓBΓ)−1B∗ Γ = 0 and finally SΓ = +Id.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' Example 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content='5 (Neumann condition).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' Taking the same notations as in Example 3.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content='2 and 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content='3, in this situation we have BΓ(AΓ − iB∗ ΓTΓBΓ)−1B∗ Γ = iT−1 Γ .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' This yields the expression SΓ = −Id.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' Example 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content='6 (Robin condition).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' Taking the same notations as in Example 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content='3 and 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content='4, in this situation we have BΓ(AΓ − iB∗ ΓTΓBΓ)−1B∗ Γ = i(Λ + TΓ)−1 which yields SΓ = (Λ − TΓ)(Λ + TΓ)−1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' 8 Skeleton formulation Now we shall use the scattering operator of the previous section to transform further the boundary value problem (35).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' Once volume unknowns have been eliminated, this reduces to an equation involving only traces on the skeleton of the subdomain partition.' 
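As a sanity check (not part of the original text), the identities above can be reproduced in a finite-dimensional analogue where A, B, T and Λ become random matrices standing in for the paper's operators: with a dissipative A (ℑm⟨Au, u⟩ ≤ 0), the energy identity and the T−1-contractivity of Proposition 7.2 hold verbatim, and the Robin-type formula of Example 7.6 follows from the stated expression of BΓ(AΓ − iB∗ΓTΓBΓ)−1B∗Γ. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 4  # stand-in dimensions for H(Omega) and H(Sigma)

def hermitian_pd(k):
    # random Hermitian positive-definite matrix
    X = rng.standard_normal((k, k)) + 1j * rng.standard_normal((k, k))
    return X @ X.conj().T + k * np.eye(k)

T = hermitian_pd(m)                          # impedance, T = T* > 0
B = rng.standard_normal((m, n))              # surjective "trace" map B
A = hermitian_pd(n) - 1j * hermitian_pd(n)   # dissipative: Im<Au,u> < 0
Bs = B.conj().T                              # adjoint B*
M = A - 1j * Bs @ T @ B
S = np.eye(m) + 2j * T @ B @ np.linalg.solve(M, Bs)  # S = Id + 2iTB(A - iB*TB)^{-1}B*

def nrm_Tinv(x):  # the norm ||x||_{T^{-1}}
    return np.sqrt((x.conj() @ np.linalg.solve(T, x)).real)

q = rng.standard_normal(m) + 1j * rng.standard_normal(m)
u = np.linalg.solve(M, Bs @ q)
# energy identity of Proposition 7.2: ||S(q)||^2 + 4|Im<Au,u>| = ||q||^2
assert np.isclose(nrm_Tinv(S @ q) ** 2 + 4 * abs((u.conj() @ A @ u).imag),
                  nrm_Tinv(q) ** 2)
assert nrm_Tinv(S @ q) <= nrm_Tinv(q)        # T^{-1}-contractivity

# Example 7.6: Id + 2iT * [i(L + T)^{-1}] coincides with (L - T)(L + T)^{-1}
L = hermitian_pd(m)
assert np.allclose(np.eye(m) + 2j * T @ (1j * np.linalg.inv(L + T)),
                   (L - T) @ np.linalg.inv(L + T))
print("scattering-operator identities verified")
```

The names (`M`, `nrm_Tinv`, the chosen dimensions) are illustrative; only the formulas being tested come from the text.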
Proposition 8.1. Assume (A1)-(A2)-(A3)-(A4). Define f ∈ H(Σ)∗ by f = −2iΠTB(A − iB∗TB)−1ℓ. If (u, p) ∈ H(Ω) × H(Σ)∗ solves (35), then q = p − iTB(u) satisfies the skeleton problem

q ∈ H(Σ)∗ and (Id + ΠS)q = f.   (37)

Reciprocally, if q satisfies the above equation, then the pair (u, p) ∈ H(Ω) × H(Σ)∗, given by u = (A − iB∗TB)−1(B∗q + ℓ) and p = q + iTB(u), solves (35).

Proof: If (u, p) ∈ H(Ω) × H(Σ)∗ solves (35) and q = p − iTB(u), then (A − iB∗TB)u = B∗(p − iTBu) + ℓ. Left multiplying this equality by 2iTB(A − iB∗TB)−1 yields an expression for 2iTB(u) that can be used in p + iTB(u) = q + 2iTB(u) in the last line of (35). This eventually leads to (37). Reciprocally, if q solves (37) and u = (A − iB∗TB)−1(B∗q + ℓ) and p = q + iTB(u), then we have Au = B∗(q + iTBu) + ℓ = B∗p + ℓ. On the other hand, using the expressions of f and S, the skeleton equation in (37) writes

q + Π(q + 2iTB(A − iB∗TB)−1(B∗q + ℓ)) = 0
⇐⇒ q + Π(q + 2iTB(u)) = 0
⇐⇒ p − iTB(u) + Π(p + iTB(u)) = 0.

This finally proves that the pair (u, p) satisfies (35). □

Next we investigate whether or not the skeleton formulation (8.1) is uniquely solvable. We will show that this is directly correlated to the unique solvability of (14).

Proposition 8.2. Assume (A1)-(A2)-(A3)-(A4). The map (v, p) ↦ p − iT(v) induces a continuous isomorphism from C (A) ∩ X (Σ) onto ker(Id + ΠS). As a consequence dim( ker(Id + ΠS) ) = dim( ker(AΩ×Γ) ).

Proof: First of all, if (v, p) ∈ C (A) ∩ X (Σ), then p + iTv = S(p − iTv) according to Lemma 7.3, and p − iTv = −Π(p + iTv) according to (27). Combining these two identities leads to p − iTv ∈ ker(Id + ΠS). Next, if (v, p) ∈ C (A) ∩ X (Σ) and p − iTv = 0, then (v, p) = (0, 0) according to Lemma 6.1, hence the injectivity. Finally, if q ∈ ker(Id + ΠS), then there exists a unique (v, p) ∈ C (A) such that p − iTv = q according to Lemma 6.1 and, applying (36), we obtain S(q) = S(p − iTv) = p + iTv. This latter identity together with (Id + ΠS)q = 0 leads to −p + iTv = Π(p + iTv), which implies (v, p) ∈ X (Σ) according to Proposition 4.3. Hence we conclude (v, p) ∈ C (A) ∩ X (Σ). □

Proposition 8.3. Assume (A1)-(A2)-(A3)-(A4). The subspace range(Id + ΠS) is closed in H(Σ)∗.

Proof: Define Θ : H(Σ)∗ → H (Σ) by Θ(q) := (iT−1(q), q), which satisfies 2∥q∥²_{T−1} = ∥Θ(q)∥²_{T×T−1} for all q ∈ H(Σ)∗. Taking account that C (A) + X (Σ) is closed, see Proposition 6.8, we are going to prove that range(Id + ΠS) = Θ−1(C (A) + X (Σ)).

Take any p ∈ range(Id + ΠS). Applying Lemma 6.1, there exists a unique (v, q) ∈ C (A) such that 2p = (Id + ΠS)(q − iTv). Since S(q − iTv) = q + iTv according to Proposition 7.2, and writing 2p = (Id + Π)p + (Id − Π)p, we obtain

(Id + Π)p + (Id − Π)p = q − iTv + Π(q + iTv)
⇐⇒ (Id + Π)p + (Id − Π)p = (Id + Π)q − (Id − Π)(iTv)
⇐⇒ (Id + Π)(p − q) = −(Id − Π)(p + iTv).

As (Id ± Π)/2 are two mutually orthogonal projectors, see Proposition 4.3, we deduce that (Id + Π)(p − q) = 0 and (Id − Π)(p + iTv) = 0. This eventually leads to p − q ∈ X(Σ)◦ and p + iTv ∈ T(X(Σ)) ⇐⇒ iT−1p − v ∈ X(Σ). We conclude that Θ(p) − (v, q) ∈ X (Σ), hence Θ(p) ∈ C (A) + X (Σ).

Reciprocally, pick an arbitrary p ∈ Θ−1(C (A) + X (Σ)). This means that Θ(p) − (v, q) ∈ X (Σ) for some (v, q) ∈ C (A). As a consequence (Id − Π)(p + iTv) = 0 and (Id + Π)(p − q) = 0. Adding these two equations, and taking account that q + iTv = S(q − iTv) according to (36), leads to

(Id + Π)(p − q) = −(Id − Π)(p + iTv)
⇐⇒ (Id + Π)p + (Id − Π)p = q − iTv + Π(q + iTv)
⇐⇒ p = (Id + ΠS)(q − iTv).  □

Proposition 8.4. Assume (A1)-(A2)-(A3)-(A4). Then codim( range(Id + ΠS) ) = codim( range(AΩ×Γ) ).

Proof: Since range(Id + ΠS) is closed according to Proposition 8.3, we deduce that codim( range(Id + ΠS) ) = dim( ker((Id + ΠS)∗) ). Proposition 4.3, in particular the characterization of Q = (Id + Π)/2 as a T−1-orthogonal projection, shows that Π² = Id and Π∗ = T−1ΠT, so we have (Id + ΠS)∗ = (TΠ∗)−1(Id + ΠTS∗T−1)TΠ∗. Setting ˜S := TS∗T−1, and noting that TΠ∗ : H(Σ) → H(Σ)∗ is an isomorphism, we have dim( ker((Id + ΠS)∗) ) = dim( ker(Id + Π˜S) ).

Let us have a closer look at ˜S, taking account of the formulas given by Proposition 7.2. Since T∗ = T, we obtain ˜S = Id + 2iTB(A∗ − iB∗TB)−1B∗. We see that ˜S differs from S only in that A is replaced by A∗. As a consequence, we can apply Proposition 8.2, replacing AΩ×Γ with A∗Ω×Γ. Using (16), this yields dim( ker(Id + Π˜S) ) = dim( ker(A∗Ω×Γ) ) = codim( range(AΩ×Γ) ). □

If V1, V2 are Banach spaces, a bounded linear map L : V1 → V2 is of Fredholm type if and only if range(L) is closed in V2, dim( ker(L) ) < ∞ and codim( range(L) ) < ∞. In this case the index of L is the number index(L) := dim( ker(L) ) − codim( range(L) ). The results of the present section (in particular Propositions 8.2, 8.3 and 8.4) lead to the following corollary.

Corollary 8.5. Assume (A1)-(A2)-(A3)-(A4). The operator AΩ×Γ : H(Ω × Γ) → H(Ω × Γ)∗ is of Fredholm type if and only if Id + ΠS : H(Σ)∗ → H(Σ)∗ is of Fredholm type and, in this case, both operators have the same index.
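The reduction of Proposition 8.1 can be exercised end to end in a small finite-dimensional analogue (all matrices below are illustrative stand-ins for A, B, T and the exchange operator Π; nothing here comes from the paper's discretization): starting from a source ℓ, form f, solve the skeleton equation (37), reconstruct (u, p), and check that the coupled problem (35) is recovered. A sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 8, 4  # stand-in dimensions

def hermitian_pd(k):
    X = rng.standard_normal((k, k)) + 1j * rng.standard_normal((k, k))
    return X @ X.conj().T + k * np.eye(k)

T = hermitian_pd(m)
B = rng.standard_normal((m, n))
A = hermitian_pd(n) - 1j * hermitian_pd(n)   # dissipative, so Id + Pi*S is invertible
Bs = B.conj().T
M = A - 1j * Bs @ T @ B
S = np.eye(m) + 2j * T @ B @ np.linalg.solve(M, Bs)

# exchange operator: a T^{-1}-orthogonal reflection, hence Pi^2 = Id
V = rng.standard_normal((m, 2)) + 1j * rng.standard_normal((m, 2))
Tinv = np.linalg.inv(T)
P = V @ np.linalg.solve(V.conj().T @ Tinv @ V, V.conj().T @ Tinv)
Pi = 2 * P - np.eye(m)

ell = rng.standard_normal(n) + 1j * rng.standard_normal(n)
f = -2j * Pi @ T @ B @ np.linalg.solve(M, ell)   # f = -2i*Pi*T*B*(A - iB*TB)^{-1} l
q = np.linalg.solve(np.eye(m) + Pi @ S, f)       # skeleton solve (Id + Pi*S)q = f

u = np.linalg.solve(M, Bs @ q + ell)             # volume unknown recovery
p = q + 1j * T @ B @ u

assert np.allclose(Pi @ Pi, np.eye(m))           # Pi is an involution
assert np.allclose(A @ u, Bs @ p + ell)          # first equation of (35): Au = B*p + l
assert np.allclose(-p + 1j * T @ B @ u,          # transmission condition of (35)
                   Pi @ (p + 1j * T @ B @ u))
print("skeleton round-trip consistent with (35)")
```

The rank-2 matrix `V` used to build Π is an arbitrary choice; any T−1-orthogonal reflection would do.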
9 Coercivity estimate

Now we study quantitatively how the inf-sup constant of Id + ΠS relates to the inf-sup constant of the operator AΩ×Γ. Taking the cue from [6, §8], we first establish an intermediate result. Recall that inf-sup constants are defined according to (4).

Proposition 9.1. Assume (A1)-(A2)-(A3)-(A4). Then

infsup_{H(Ω×Γ)→H(Ω×Γ)∗}(AΩ×Γ) ≤ (1 + ∥A∥) inf_{u∈C (A)\{0}, v∈X (Σ)\{0}} ∥u + v∥_{T×T−1} / ∥u∥_{T×T−1}

where ∥A∥ := sup_{u,v∈H(Ω)\{0}} |⟨u, A(v)⟩| / ( ∥u∥_{H(Ω)} ∥v∥_{H(Ω)} ).

Proof: In the case where C (A) ∩ X (Σ) ≠ {0}, the inf-sup constant vanishes since ker(AΩ×Γ) ≠ {0} according to Proposition 6.6, so the estimate is automatically satisfied in this case. We shall thus assume C (A) ∩ X (Σ) = {0}. According to Proposition 6.6 this leads to

ker(AΩ×Γ) = {0} and α := infsup_{H(Ω×Γ)→H(Ω×Γ)∗}(AΩ×Γ) > 0.   (38)

Now pick any u ∈ C (A) \ {0} and any v ∈ X (Σ) \ {0}, and set (pd, pn) := u + v ∈ H (Σ) = H(Σ) × H(Σ)∗. The invertibility of AΩ×Γ provides the existence of a unique ϕ ∈ X(Ω) satisfying ⟨A(ϕ), w⟩ = −⟨AB†(pd), w⟩ + ⟨pn, B(w)⟩ for all w ∈ X(Ω). In particular

α ∥ϕ∥_{H(Ω)} ≤ ∥A∥ ∥pd∥_T + ∥pn∥_{T−1}.   (39)

Set φ = ϕ + B†(pd) and ud = B(φ) = B(ϕ) + pd. By construction, for any w ∈ H(Ω) satisfying B(w) = 0, we have ⟨A(φ), w⟩ = ⟨pn, B(w)⟩ = 0, which rewrites A(φ) ∈ ker(B)◦. Applying i) of Lemma 4.1, we have Aφ = B∗un for some un ∈ H(Σ)∗. This implies in particular un = (BB†)∗un = (B†)∗B∗un = (B†)∗Aφ. From the previous definitions, and the fact that ∥B(w)∥_T ≤ ∥w∥_{H(Ω)} and ∥B†(q)∥_{H(Ω)} = ∥q∥_T, we obtain the estimates

∥φ∥_{H(Ω)} ≤ ∥ϕ∥_{H(Ω)} + ∥pd∥_T,
∥ud∥_T ≤ ∥φ∥_{H(Ω)},
∥un∥_{T−1} ≤ ∥A∥ ∥φ∥_{H(Ω)}.   (40)

We have Aφ = B∗un and Bφ = ud, hence (ud, un) ∈ C (A) by construction. On the other hand we have ud − pd = B(ϕ) ∈ X(Σ) since ϕ ∈ X(Ω) and, for any w ∈ X(Σ), we have B†(w) ∈ X(Ω), hence ⟨pn − un, w⟩ = ⟨Aφ, B†w⟩ − ⟨Aφ, B†w⟩ = 0, which implies that pn − un ∈ X(Σ)◦. Finally we have shown that (ud, un) ∈ C (A) and (pd, pn) − (ud, un) ∈ X (Σ) and, since (pd, pn) = u + v ∈ C (A) ⊕ X (Σ), we conclude that u = (ud, un). It only remains to combine (39) and (40) to obtain the desired estimate. □

Theorem 9.2. Assume (A1)-(A2)-(A3)-(A4). Then

infsup_{H(Ω×Γ)→H(Ω×Γ)∗}(AΩ×Γ) ≤ (1 + ∥A∥) infsup_{H(Σ)∗→H(Σ)∗}(Id + ΠS).

Proof: In the case where ker(AΩ×Γ) ≠ {0}, we also have ker(Id + ΠS) ≠ {0} according to Proposition 8.2 and, in this situation, the desired estimate is satisfied, with both sides equal to 0. Hence we can assume that ker(AΩ×Γ) = {0}, and in this situation both AΩ×Γ : H(Ω × Γ) → H(Ω × Γ)∗ and Id + ΠS : H(Σ)∗ → H(Σ)∗ are injective with closed range.

Pick an arbitrary f ∈ H(Σ)∗. According to Lemma 6.1, there exists a unique pair u = (ud, un) ∈ C (A) such that f = un − iT(ud), and we have ∥f∥_{T−1} ≤ √2 ∥u∥_{T×T−1}, which rewrites as ∥u∥_{T×T−1} / ∥f∥_{T−1} ≥ 1/√2. Next set g = (Id + ΠS)f and p = (pd, pn) = (iT−1(g), g)/2. We have in particular ∥g∥_{T−1} = √2 ∥p∥_{T×T−1}. Since S(f) = S(un − iT(ud)) = un + iT(ud) according to Proposition 7.2, we obtain

un − iT(ud) + Π(un + iT(ud)) = f + ΠS(f) = g
= (Id + Π)g/2 + (Id − Π)g/2
= (Id + Π)pn − i(Id − Π)T(pd)
= pn − iT(pd) + Π(pn + iT(pd)).

Re-arranging the terms in the equality above so as to move all contributions involving Π to the right hand side, we obtain −(pn − un) + iT(pd − ud) = Π( (pn − un) + iT(pd − ud) ). According to Proposition 4.3, this implies that (pd, pn) − (ud, un) ∈ X (Σ). Since we have (ud, un) ∈ C (A) by construction, we can apply Proposition 9.1, which yields

∥(Id + ΠS)f∥_{T−1} / ∥f∥_{T−1} = ∥g∥_{T−1} / ∥f∥_{T−1} ≥ ∥p∥_{T×T−1} / ∥u∥_{T×T−1} ≥ infsup_{H(Ω×Γ)→H(Ω×Γ)∗}(AΩ×Γ) / (1 + ∥A∥).

This establishes the desired estimate, since it holds for any f ∈ H(Σ)∗. □
□

The estimate provided by Theorem 9.2 is remarkable in several respects. First of all it holds even if ker(AΩ×Γ) is non-trivial. Secondly it does not involve any hidden "C > 0" constant. In particular it does not involve any frequency dependency, although the infsup constant of AΩ×Γ a priori depends itself on the frequency. This means that, to estimate the frequency dependency of the infsup constant of Id + ΠS, it suffices to derive such an estimate for AΩ×Γ. A further striking feature is that the number of subdomains J does not come into play in this estimate.

As an interesting additional result in the perspective of an effective linear solve, the contractivity of Π and S leads to the coercivity of the operator Id + ΠS. The next result can be combined with Theorem 9.2 to obtain an effective estimate of the coercivity constant.

Corollary 9.3. Assume (A1)-(A2)-(A3)-(A4). Then Id + ΠS : H(Σ)∗ → H(Σ)∗ is coercive with respect to the scalar product induced by T−1 and we have

    inf q∈H(Σ)∗\{0} ℜe{⟨(Id + ΠS)q, T−1q⟩} / (∥q∥T−1)² ≥ (1/2) (infsup H(Σ)∗→H(Σ)∗(Id + ΠS))².

Proof: For any q ∈ H(Σ)∗,

    (∥q∥T−1)² ≥ (∥ΠS(q)∥T−1)² = (∥(Id + ΠS)q − q∥T−1)²
        = (∥(Id + ΠS)q∥T−1)² + (∥q∥T−1)² − 2 ℜe{⟨(Id + ΠS)q, T−1q⟩}
    ⟹ ℜe{⟨(Id + ΠS)q, T−1q⟩} / (∥q∥T−1)² ≥ (∥(Id + ΠS)q∥T−1 / ∥q∥T−1)² / 2. □

We conclude this article illustrating how the previous results lead to estimations of the coercivity constant of the skeleton operator for a concrete case.

Example 9.4.
Consider the case Rᵈ = R² or R³. Assume that µ = 1, κ = k ∈ (0, +∞), and choose AΓ as in Example 3.3 with ⟨Λ(u), v⟩ = k ∫Γ uv dσ, which models the Robin condition ∂nu − iku = 0 on Γ. Assume in addition that Ω is a convex polyhedron. Then we have

    ⟨AΩ×Γ(u, p), (v, q)⟩ = ∫Ω ∇u∇v − k²uv dx − ik ∫Γ uv dσ + ∫Γ q TΓp dσ.

Let us take γ = 1/k for the parameter involved in (8). From these choices, and proceeding like in [15, Lem. 2.4] for dealing with boundary terms on Γ, we see that the continuity modulus ∥A∥ (as defined in Proposition 9.1) can be bounded independently of k. On the other hand, we know from [18] that infsup H(Ω×Γ)→H(Ω×Γ)∗(AΩ×Γ) ≥ O k→∞(1/k). We can now plug this estimate into Theorem 9.2, and we see that the inf-sup constant of Id + ΠS admits also a lower bound that behaves like O(1/k) for k → ∞. Finally combining with Corollary 9.3, we see that the coercivity constant of the skeleton formulation behaves like O(1/k²), i.e.

    inf q∈H(Σ)∗\{0} ℜe{⟨(Id + ΠS)q, T−1q⟩} / (∥q∥T−1)² ≥ O k→∞(1/k²).

References

[1] A. Bendali and Y. Boubendir.
Non-overlapping domain decomposition method for a nodal finite element method. Numerische Mathematik, 103(4):515–537, 2006.

[2] H. Brezis. Functional analysis, Sobolev spaces and partial differential equations. Universitext. Springer, New York, 2011.

[3] O. Cessenat and B. Despres. Application of an ultra weak variational formulation of elliptic PDEs to the two-dimensional Helmholtz problem. SIAM J. Numer. Anal., 35(1):255–299, 1998.

[4] P.G. Ciarlet. Introduction to numerical linear algebra and optimization. Camb. Texts Appl. Math. Cambridge: Cambridge University Press, 1988.

[5] X. Claeys. Non-local variant of the Optimised Schwarz Method for arbitrary non-overlapping subdomain partitions. ESAIM: M2AN, 55(2):429–448, 2021.

[6] X. Claeys. Nonselfadjoint impedance in Generalized Optimized Schwarz Methods. IMA Journal of Numerical Analysis, November 2022.

[7] X. Claeys, F. Collino, and E. Parolin. Nonlocal optimized Schwarz methods for time-harmonic electromagnetics. Adv. Comput. Math., 48(6):Paper No. 72, 2022.

[8] X. Claeys and E. Parolin. Robust treatment of cross-points in optimized Schwarz methods. Numer. Math., 151(2):405–442, 2022.

[9] F. Collino, S. Ghanemi, and P. Joly. Domain decomposition method for harmonic wave propagation: a general presentation. Computer Methods in Applied Mechanics and Engineering, 184(2):171–211, 2000.

[10] B. Després. Méthodes de décomposition de domaine pour les problèmes de propagation d'ondes en régime harmonique. Thèse, Université de Paris IX (Dauphine), Paris; INRIA, Rocquencourt, 1991.

[11] B. Després, A. Nicolopoulos, and B. Thierry. Optimized transmission conditions in domain decomposition methods with cross-points for Helmholtz equation. SIAM J. Numer. Anal., 60(5):2482–2507, 2022.

[12] M. Gander and F. Kwok. On the applicability of Lions' energy estimates in the analysis of discrete optimized Schwarz methods with cross points. Lecture Notes in Computational Science and Engineering, 91, 2013.

[13] M.J. Gander and K. Santugini. Cross-points in domain decomposition methods with a finite element discretization. Electron. Trans. Numer. Anal., 45:219–240, 2016.

[14] M.J. Gander and H. Zhang. A class of iterative solvers for the Helmholtz equation: factorizations, sweeping preconditioners, source transfer, single layer potentials, polarized traces, and optimized Schwarz methods. SIAM Rev., 61(1):3–76, 2019.

[15] I.G. Graham, E.A. Spence, and J. Zou. Domain decomposition with local impedance conditions for the Helmholtz equation with absorption. SIAM J. Numer. Anal., 58(5):2515–2543, 2020.

[16] T. Kato. Perturbation theory for linear operators. Classics in Mathematics. Springer-Verlag, Berlin, 1995. Reprint of the 1980 edition.

[17] W. McLean. Strongly elliptic systems and boundary integral equations. Cambridge: Cambridge University Press, 2000.

[18] J.M. Melenk. On generalized finite-element methods. Thesis (Ph.D.), University of Maryland, College Park. ProQuest LLC, Ann Arbor, MI, 1995.

[19] A. Modave, A. Royer, X. Antoine, and C. Geuzaine. A non-overlapping domain decomposition method with high-order transmission conditions and cross-point treatment for Helmholtz problems. Comput. Methods Appl. Mech. Eng., 368:23, 2020. Id/No 113162.

[20] E. Parolin. Non-overlapping domain decomposition methods with non-local transmission operators for harmonic wave propagation problems. Thesis, Institut Polytechnique de Paris, December 2020.

[21] C. Pechstein. Finite and boundary element tearing and interconnecting solvers for multiscale problems, volume 90 of Lecture Notes in Computational Science and Engineering. Springer, Heidelberg, 2013.

[22] W. Rudin. Functional analysis. 2nd ed. New York, NY: McGraw-Hill, 1991.

[23] O. Steinbach. Numerical approximation methods for elliptic boundary value problems. Finite and boundary elements. Translated from the 2003 German original. Springer, New York, 2008.

[24] T. von Petersdorff. Boundary integral equations for mixed Dirichlet, Neumann and transmission problems.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' Methods Appl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' Sci.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=', 11(2):185–213, 1989.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} +page_content=' 24' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf'} diff --git a/MdFIT4oBgHgl3EQfcCt7/vector_store/index.pkl b/MdFIT4oBgHgl3EQfcCt7/vector_store/index.pkl new file mode 100644 index 0000000000000000000000000000000000000000..1cec2a4a78800d98f00f56077b8f7151f4d1982c --- /dev/null +++ b/MdFIT4oBgHgl3EQfcCt7/vector_store/index.pkl @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6bff595855c1b1315618cfa0aae9d8c56278a04a4a672dbd077bf270ca6c0d7e +size 434882 diff --git a/NNAzT4oBgHgl3EQfzP4S/content/2301.01764v1.pdf b/NNAzT4oBgHgl3EQfzP4S/content/2301.01764v1.pdf new file mode 100644 index 0000000000000000000000000000000000000000..c2711888be7f6f8d156e8248d511f62f9beedfde --- /dev/null +++ b/NNAzT4oBgHgl3EQfzP4S/content/2301.01764v1.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9b4d4c173ea36cbd3754199be3af243b480fe1696c4a7e327a54e30eae6529d1 +size 210718 diff --git a/NNAzT4oBgHgl3EQfzP4S/vector_store/index.faiss b/NNAzT4oBgHgl3EQfzP4S/vector_store/index.faiss new file mode 100644 index 0000000000000000000000000000000000000000..4c59d904881244667e158e91af0c98a86ffd7883 --- /dev/null +++ b/NNAzT4oBgHgl3EQfzP4S/vector_store/index.faiss @@ -0,0 +1,3 @@ 
arXiv:2301.11466v1 [cond-mat.mtrl-sci] 26 Jan 2023

Hydrogen atom/molecule adsorption on 2D metallic porphyrin: A first-principles study

Raphael M. Tromer,†,‡ Isaac M. Felix,¶ Levi C. Felix,†,‡ Leonardo D. Machado,¶ Cristiano F. Woellner,§ and Douglas S. Galvao∗,†

†Applied Physics Department, State University of Campinas, Campinas, SP, 13083-970, Brazil
‡Center for Computational Engineering and Sciences, State University of Campinas, Campinas, SP, 13083-970, Brazil
¶Departamento de Física Teórica e Experimental, Universidade Federal do Rio Grande do Norte, Natal, RN, 59072-970, Brazil
§Physics Department, Federal University of Paraná, UFPR, Curitiba, PR, 81531-980, Brazil

E-mail: galvao@ifi.unicamp.br

Abstract

Hydrogen is a promising element for applications in new energy sources such as fuel cells. One key issue for such applications is storing hydrogen, and to improve storage capacity, understanding the interaction mechanism between hydrogen and candidate storage materials is critical. This work uses DFT simulations to comprehensively investigate the adsorption mechanism of H/H2 on 2D metallic porphyrins with one transition metal at their center. Our results suggest that the adsorption mechanism for H (H2) is chemisorption (physisorption). The maximum adsorption energy for atomic hydrogen was −3.7 eV, for 2D porphyrins embedded with vanadium or chromium atoms. Our results also revealed charge transfers of up to −0.43 e to chemisorbed H atoms. In contrast, the maximum adsorption energy calculated for molecular hydrogen was −122.5 meV, for 2D porphyrins embedded with scandium atoms. Furthermore, charge transfer was minimal for physisorption. Finally, we also determined that uniaxial strain has a minimal effect on the adsorption properties of 2D metallic porphyrins.

Introduction

The use of nanostructured systems in applications is predicated on obtaining new 2D materials and then understanding and manipulating their electrical,1 thermal,2–5 and magnetic6 properties, among others.

Since Andre Geim and Konstantin Novoselov extracted a graphene layer from graphite by a simple exfoliation process,7 many methods have been proposed to allow the synthesis of new 2D nanostructured materials.8–13
Among these materials, there is a preference for systems that are organic and that do not cause pollution when discarded.14,15 In this quest for future materials, one common objective is to find solids that support the use of alternative energy sources, which aim to replace fossil fuels.16,17 One such alternative fuel is hydrogen, and intense research has been carried out to investigate nanostructured systems that could serve as hosts for its storage. Many materials have been proposed for hydrogen storage applications, such as covalent organic frameworks (COFs)18,19 and metal-organic frameworks (MOFs).20,21 Still, to improve hydrogen storage cells, understanding the interaction of nanostructured systems with hydrogen is vitally important.

In addition to applications in hydrogen storage,22–25 MOFs have also been used as catalysts in hydrogen evolution reactions.26–29 A type of system commonly used for the latter purpose is MOFs constructed from porphyrin molecules.26–28 To assemble these systems, various experimental techniques have been used to link the porphyrin molecules through covalent bonds.26–28 Other porphyrin systems have been investigated recently, including two-dimensional (2D) porphyrins that contain metal atoms.30,31 Still, this type of 2D system has yet to be investigated for possible uses in hydrogen storage applications.

In this work, we investigate the interaction between 2D metallic porphyrin systems and hydrogen atoms and molecules. The 2D porphyrin systems were assembled from a unit cell containing a porphyrin molecule with one transition metal at the center. We considered 2D porphyrin systems with ten different transition metals embedded at the center, each corresponding to one of the ten elements of period 4 of the periodic table. For each 2D metallic porphyrin system, we calculated the adsorption energy with H/H2, for both relaxed and strained systems.
We verified that the interaction between the 2D porphyrins and hydrogen atoms (molecules) corresponds to chemisorption (physisorption).

Methodology

As mentioned above, we investigated the interaction of an H atom and an H2 molecule with 2D porphyrin systems containing one transition metal atom from period 4 of the periodic table. The following metals were considered: scandium (Sc), titanium (Ti), vanadium (V), chromium (Cr), manganese (Mn), iron (Fe), cobalt (Co), nickel (Ni), copper (Cu), and zinc (Zn). Here, we use the name 2D-por-M to refer to the investigated structures in general, and we replace M with an element symbol to refer to a specific structure. For example, 2D-por-Sc refers to a 2D porphyrin system containing a scandium atom.

The first step of our calculations consisted in optimizing all 2D-por-M structures. Our calculations were based on the density functional theory (DFT) formalism, as implemented in the Quantum ESPRESSO (QE) software.32 In this approach, the wavefunctions were expanded in a plane-wave basis set, and pseudopotentials were used to represent the core electrons.33,34 To choose the calculation parameters, we carried out convergence tests of the total energy against the number of k-points and the cut-off energy. After these tests, we set the cut-off energy to 75 Ry and used a 10 × 10 × 1 k-point mesh. The van der Waals vdW-DF functional was used to describe the exchange-correlation term.35–37 During optimization, ions and lattice vectors were varied simultaneously, and we assumed that convergence had been achieved when the force on each atom was less than 0.05 eV/Å. After all 2D-por-M structures were optimized, we investigated their interaction with H atoms and H2 molecules. For these calculations, the initial position of H/H2 was always located above the transition metal.
Preliminary tests indicated that electrostatic interactions were stronger when a hydrogen atom/molecule was placed in this region.

For the calculations with hydrogen atoms, in the initial step we placed an H atom 2.0 Å above one of the considered metals. Then we optimized the 2D-por-M structure and the hydrogen position, with the constraint that H was only allowed to move in the direction perpendicular to the 2D plane (z-direction). Note that we also tested an initial distance of 1.0 Å, but found that this change did not affect the final optimized distance. For the calculations with hydrogen molecules, in the initial step we placed an H2 molecule 3.0 Å above one of the considered metals, with a vertical orientation (the H–H bond perpendicular to the surface). During the optimization process, the hydrogen atom that was initially closer to the metal was constrained to move only along the z-direction, whereas the other hydrogen was allowed to move in all directions.

After the 2D-por-M structure with a hydrogen atom or molecule is optimized, the adsorption energy is obtained using the following expression:1,38

Ead = E(2D-por-M + H/H2) − E(2D-por-M) − E(H/H2),   (1)

where E(2D-por-M + H/H2) is the total energy of a system where 2D-por-M and H/H2 are interacting, E(2D-por-M) is the energy of an isolated 2D-por-M system, and E(H/H2) is the energy of an isolated H atom/H2 molecule.

We also calculate the formation energy per atom of the various 2D-por-M structures using the following expression:

Ef = (E(2D-por-M) − NC·EC − NN·EN − EM)/Nt,   (2)

where E(2D-por-M) is the energy of an isolated 2D-por-M system, NC/NN is the number of carbon/nitrogen atoms in the unit cell, EC/EN/EM is the energy of an isolated carbon/nitrogen/metal atom, and Nt is the total number of atoms in the unit cell.

Results and discussion

Figure 1 presents the 2D metallic porphyrin (2D-por-M) structure for the transition metals M present in period 4 of the periodic table.
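The energy bookkeeping of expressions (1) and (2) amounts to simple differences of total energies; as a sketch, the snippet below evaluates both (all energies are hypothetical placeholder values chosen for illustration, not DFT results from this work):

```python
# Sketch of the adsorption-energy (Eq. 1) and formation-energy (Eq. 2)
# bookkeeping. All input energies are hypothetical placeholders (in eV),
# not DFT results from this work.

def adsorption_energy(e_combined, e_monolayer, e_adsorbate):
    """Eq. 1: Ead = E(2D-por-M + H/H2) - E(2D-por-M) - E(H/H2)."""
    return e_combined - e_monolayer - e_adsorbate

def formation_energy_per_atom(e_monolayer, n_c, e_c, n_n, e_n, e_metal, n_total):
    """Eq. 2: Ef = (E(2D-por-M) - NC*EC - NN*EN - EM) / Nt."""
    return (e_monolayer - n_c * e_c - n_n * e_n - e_metal) / n_total

# A negative Ead signals net attraction between monolayer and adsorbate.
ead = adsorption_energy(e_combined=-1017.2, e_monolayer=-1000.0,
                        e_adsorbate=-13.5)
print(f"Ead = {ead:.1f} eV")  # Ead = -3.7 eV

ef = formation_energy_per_atom(e_monolayer=-500.0, n_c=20, e_c=-10.0,
                               n_n=4, e_n=-15.0, e_metal=-30.0, n_total=25)
print(f"Ef = {ef:.1f} eV/atom")  # Ef = -8.4 eV/atom
```

Dividing by Nt in Eq. 2 normalizes the formation energy per atom, which is why Table 1 reports values in eV/atom.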
For the ten different transition metals considered here, the optimized lattice was square (Lx = Ly = L), with L values ranging from 8.37 to 8.52 Å. Hence, the difference between lattice vectors is minimal. We also observed that the transition metal remained in the 2D plane (xy) in all cases. As a result, the electrostatic potential is the same above and below the 2D plane. Here, we do not consider isomeric effects on the magnetic properties, as Singh et al. did for metallic 2D-porphyrin-vanadium.30

Figure 1: Structure of the 2D metallic porphyrin, with the square unit cell highlighted. The central metallic atom varied in our calculations, and we considered ten transition metals from period 4.

Table 1 presents the formation energy per atom obtained using expression 2 for the optimized 2D-por-M structures. Note that the calculated values are very close, with a slight difference of about 0.3 eV/atom. Consequently, the energy necessary to obtain all the structures investigated here is quite similar, although experimental procedures could vary for different metallic atoms.12,39–42

Table 1: In the first column, we have the metallic element attached to the porphyrin structure. In the second column, we have the corresponding formation energy per atom.

2D-por-M    Ef (eV/atom)
Sc          -8.2
Ti          -8.2
V           -8.2
Cr          -8.2
Mn          -8.1
Fe          -8.1
Co          -8.1
Ni          -8.1
Cu          -8.0
Zn          -7.9

Figure 2 displays the spin-polarized density of states for the optimized 2D-por-M structures. The µ value in each graph indicates the corresponding total magnetic moment. It can be observed that all structures are metallic, i.e., without a bandgap. Furthermore, notice that structures containing metals with intermediate atomic numbers present high magnetic moments, whereas those with smaller or higher atomic numbers have either null or insignificant magnetic moments.
Finally, for systems with a high µ value, an apparent asymmetry between spin-up and spin-down states occurs in the DOS, which is due to unpaired electrons.

Let us now discuss the interaction between the optimized 2D-por-M structures and H atoms/H2 molecules. As mentioned in the Methodology section, we initially placed the H atom (or H2 molecule) above the metal. Then, for the H atom, we constrained the hydrogen to relax only in the z-direction, that is, the direction perpendicular to the plane of 2D-por-M. For the H2 molecule, we constrained the atom closer to the plane and allowed the other atom to move freely. Figure 7 in the Supplementary Material displays the optimized structures. After optimization, we calculated the adsorption energy using expression 1; the results for H and H2 are presented in column 2 of Tables 2 and 3, respectively. The adsorption energy is negative for both H and H2, due to the attractive electrostatic interaction between 2D-por-M and hydrogen, with larger negative energy values indicating stronger mutual attraction.

Figure 2: Spin-polarized density of states for the 2D metallic porphyrins. (Total magnetic moments per panel: Sc 0.0, Ti 0.0, V 2.6, Cr 2.7, Mn 3.1, Fe 2.0, Co 1.0, Ni 0.2, Cu 0.1, Zn 0.2 µB.)

For an H atom interacting with 2D-por-M, the adsorption energy varies between −1.5 eV (for Cu) and −3.7 eV (for V and Cr). Column 3 of Table 2 presents the corresponding equilibrium distances for H atoms, which vary between 1.44 Å (for Co) and 1.84 Å (for Sc). Together, these results denote that H atoms chemisorb on 2D-por-M. We also analyzed the charge transfer between H atoms and 2D-por-M, and the results are presented in column 4 of Table 2.
Note that negative values indicate charge transfer from 2D-por-M to the hydrogen, and the opposite is true for positive values. Overall, we observe high charge transfer in the calculations with H atoms, confirming the occurrence of chemisorption. Additionally, we observe a tendency for higher charge transfer in systems containing metals with higher electropositivity.43 For instance, the least electronegative metal investigated here (Sc) produced the largest charge transfer (−0.43 e).

For H2 molecules, adsorption energy values range from −33.4 meV (for Co) to −122.5 meV (for Sc). All other structures present adsorption energies of around −50 to −60 meV, as seen in column 2 of Table 3. Concerning the equilibrium distance between metal atoms and H2 molecules, values are presented in column 3 of Table 3. When comparing these results with those previously discussed for H atoms, we observe considerably larger equilibrium distances for H2 molecules, varying between 2.64 Å (for Ti) and 3.21 Å (for Mn). Together, these results indicate that H2 molecules are physisorbed on 2D-por-M. In this case, interactions are mainly due to van der Waals forces. Charge transfer results for H2 molecules are presented in column 4 of Table 3. Transferred charge values are much smaller in this instance, supporting our argument that physisorption occurs for H2 molecules. Note that our results regarding the H/H2 charge transfer process are in agreement with the literature.38,44

The last columns of Tables 2 and 3 show the total magnetic moment of the system after H or H2 is adsorbed on 2D-por-M. Comparing these results with those presented in Figure 2, we observe that an adsorbed H atom affects the magnetic moment value considerably. In contrast, the total magnetic moment remains practically unaffected by H2 adsorption. For chemisorption, the magnetic moment value changes because the charge transfer process changes the electronic distribution of the monolayer.
Table 2: In the first column, we have the transition metal element considered in the calculation (M). Columns 2 and 3 present the adsorption energy and the equilibrium distance between H and the transition metal, while columns 4 and 5 show the charge transferred to H (negative values) or from H (positive values) and the total magnetic moment of the structure after adsorption of H on 2D-por-M.

M     Ead-H (eV)   RH-M (Å)   qH (e)   µH (µB)
Sc    -2.9         1.84       -0.43    0.0
Ti    -3.6         1.70       -0.26    0.0
V     -3.7         1.62       -0.17    1.6
Cr    -3.7         1.57       -0.24    1.7
Mn    -3.6         1.53       -0.09    2.0
Fe    -3.4         1.49       -0.07    1.0
Co    -3.0         1.44       -0.05    0.0
Ni    -1.9         1.46        0.06    0.3
Cu    -1.5         1.53       -0.24    0.0
Zn    -1.7         1.59       -0.21    1.0

In order to gain insight into the electronic density distribution after adsorption, we calculated the charge density difference for (i) an H atom on 2D-por-Cr and (ii) an H2 molecule on 2D-por-Sc. We show the electron density of 2D-por-Cr and 2D-por-Sc because the former presents the highest interaction energy with H and the latter with H2. Moreover, the results in Fig. 3 illustrate well the typical charge distributions obtained for the other structures. Figure 3-a)/b) presents the results for the H atom/H2 molecule. In Figs. 3-a) and 3-b) we used isosurface values of 0.008 e/Å³ and 0.0008 e/Å³, respectively. In addition, the blue/red regions represent electron depletion/accumulation after adsorption.

In Fig. 3-a), we note electron depletion at the Cr atom and accumulation at the hydrogen. This result agrees with that presented in Table 2, which indicated a charge transfer of −0.24 e from 2D-por-Cr to the H atom. We typically observed charge accumulation at the hydrogen when chemisorption occurred. In Fig. 3-b), we first note that the charge transfer is tiny for physisorption. Looking at the H2 molecule, we observe the formation of a dipole, with charge accumulation (red) at the H atom near the metal and depletion (blue) at the other one.
Notice that the blue region is larger than the red one, as the total charge in the molecule is positive (0.009 e according to Table 3).

Table 3: In the first column, we have the transition metal element considered in the calculation (M). Columns 2 and 3 present the adsorption energy and the equilibrium distance between H2 and the transition metal, while columns 4 and 5 show the charge transferred to H2 (negative values) or from H2 (positive values) and the total magnetic moment of the structure after adsorption of H2 on 2D-por-M.

M     Ead-H2 (meV)   RH2-M (Å)   qH2 (e)   µH2 (µB)
Sc    -122.5         3.10         0.009    0.0
Ti    -65.7          2.64         0.020    0.0
V     -52.8          3.07        -0.003    2.6
Cr    -54.4          3.14         0.00     2.7
Mn    -56.2          3.21         0.00     3.1
Fe    -59.1          3.20        -0.004    0.2
Co    -33.4          3.16        -0.004    1.0
Ni    -62.6          3.11         0.00     0.0
Cu    -60.7          3.08         0.00     0.1
Zn    -56.2          3.13         0.008    0.2

We also present the total density of states after H/H2 adsorption in Figures 8 and 9 of the Supplementary Material. These results reveal that all investigated systems remain metallic after hydrogen adsorption.

Figure 3: Charge density difference map (a) for a hydrogen atom adsorbed on 2D-por-Cr and (b) for a hydrogen molecule adsorbed on 2D-por-Sc. The red and blue colors represent electron accumulation and depletion, respectively.

Adsorption on strained 2D-por-M

When chemisorption occurs, we observed that the monolayer (i) transfers charge to H and (ii) has its total magnetic moment reduced. In contrast, H2 physisorbed on 2D-por-M shows little charge transfer and little change in the total magnetic moment. In this section, we investigate how a uniaxial strain applied along the x-direction affects adsorption. We did not apply strain along the y-direction because the system is isotropic in the xy plane.
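Before examining strain effects, the unstrained chemisorption/physisorption picture can be summarized numerically. The sketch below transcribes the adsorption energies and equilibrium distances from Tables 2 and 3 and applies an illustrative classification rule (eV-scale binding at short range versus meV-scale binding at long range); the numeric thresholds are our own illustrative choices, not criteria defined in this work:

```python
# Adsorption data transcribed from Table 2 (H atom) and Table 3 (H2
# molecule): metal -> (Ead in eV, equilibrium distance in angstrom).
h_atom = {
    "Sc": (-2.9, 1.84), "Ti": (-3.6, 1.70), "V": (-3.7, 1.62),
    "Cr": (-3.7, 1.57), "Mn": (-3.6, 1.53), "Fe": (-3.4, 1.49),
    "Co": (-3.0, 1.44), "Ni": (-1.9, 1.46), "Cu": (-1.5, 1.53),
    "Zn": (-1.7, 1.59),
}
h2_molecule = {  # meV values from Table 3 converted to eV
    "Sc": (-0.1225, 3.10), "Ti": (-0.0657, 2.64), "V": (-0.0528, 3.07),
    "Cr": (-0.0544, 3.14), "Mn": (-0.0562, 3.21), "Fe": (-0.0591, 3.20),
    "Co": (-0.0334, 3.16), "Ni": (-0.0626, 3.11), "Cu": (-0.0607, 3.08),
    "Zn": (-0.0562, 3.13),
}

def classify(ead_ev, r_ang, e_cut=-1.0, r_cut=2.0):
    """Illustrative rule: strong short-range binding -> chemisorption;
    weak long-range binding -> physisorption. Thresholds are arbitrary."""
    return "chemisorption" if (ead_ev < e_cut and r_ang < r_cut) else "physisorption"

# Every H-atom entry falls on the chemisorption side and every H2 entry
# on the physisorption side, reproducing the trend discussed in the text.
assert all(classify(e, r) == "chemisorption" for e, r in h_atom.values())
assert all(classify(e, r) == "physisorption" for e, r in h2_molecule.values())

strongest_h2 = min(h2_molecule, key=lambda m: h2_molecule[m][0])
print(strongest_h2)  # Sc, the -122.5 meV case highlighted in the text
```

The same dictionaries make secondary trends easy to check, e.g. that Ti combines the shortest H2-metal distance (2.64 Å) with the second-strongest physisorption energy.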
Figure 4: Adsorption energies for an H atom placed on a strained 2D-por-M monolayer. We considered different metallic elements and strain values.

Figure 4 presents the adsorption energy of an H atom as a function of the applied strain along the x-direction. We considered strain values of 3, 6, and 9%. We note that the strain altered the adsorption energy only slightly in all cases. The change is more perceptible for Co, Ni, Cu, and Zn, where the adsorption energy decreased with the strain. Still, the maximum variation was only 0.25 eV (for Zn). Figure 5-a) shows the hydrogen-metal distance as a function of the applied uniaxial strain. In all cases, we observed that this distance remained nearly unaffected. Figure 5-b) presents the charge transfer between an H atom and a metal against the strain. In this case, we observed a slight decrease in the transferred charge.

Overall, for 2D-por-M structures with an H atom, we observed that adsorption energies and transferred charges decreased slightly with the strain, whereas the hydrogen-metal distance remained almost constant. Finally, note that the applied strain did not significantly change the total magnetic moment of the investigated structures.

Figure 5: a) Equilibrium distance R between an H atom and 2D-por-M and b) charge on an H atom (qH) as a function of the uniaxial strain.

Figure 6 displays adsorption energies for an H2 molecule adsorbed on a strained 2D-por-M monolayer. We again considered monolayers under 3%, 6%, and 9% strain in the x-direction. The results reveal that the strain had little effect on the adsorption energies of all investigated structures. We also found that the applied strain did not modify the charge transfer, H2-metal distance, and magnetic moment values for the structures where physisorption occurred. In summary, for structures with H2 molecules adsorbed on 2D-por-M, we found that the strain had no appreciable effect on any of the studied quantities.

Figure 6: Adsorption energies for an H2 molecule placed on a strained 2D-por-M monolayer. We considered different metallic elements and strain values.

Conclusions

In summary, we used density functional theory calculations to study the structural and electronic properties of an H atom/H2 molecule adsorbed on 2D metallic porphyrins with a transition metal at their center (2D-por-M). We considered all transition metals of period 4 of the periodic table. Our results revealed chemisorption of atomic hydrogen on the monolayer, with adsorption energies ranging from −1.5 eV (for Cu) to −3.7 eV (for V and Cr). In contrast, we found physisorption of molecular hydrogen on 2D-por-M, with adsorption energies ranging from −33.4 meV (for Co) to −122.5 meV (for Sc). We also analyzed the charge transferred between the monolayer and H/H2 and found an appreciable charge transfer in chemisorption (up to −0.43 e for Sc) but a negligible one in physisorption. Negative values indicate electron accumulation at the hydrogen. Moreover, we observed that chemisorption changed the total magnetic moment moderately, as the charge transfer process changed the electronic distribution of 2D-por-M, particularly in the cases of Fe and Zn. Finally, we observed that strain slightly changes the properties of monolayers with chemisorbed hydrogen. However, the strain had practically no effect on the properties of monolayers where physisorption occurred.

In general, we conclude that 2D-por-M can be useful in applications involving hydrogen atoms or molecules. The sizeable mutual interaction between the monolayer and hydrogen is crucial for applications in hydrogen storage. Moreover, it is possible to adjust the charge transferred to the adsorbed hydrogen by changing the metal in the monolayer, an important feature for catalysis applications. Finally, we found that the considered monolayers have varied magnetic moments and that these can be changed through hydrogen chemisorption. This characteristic could be useful in spintronic applications.

Acknowledgements

This work was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001, CNPq, and FAPESP. The authors thank the Center for Computational Engineering & Sciences (CCES) at Unicamp for financial support through the FAPESP/CEPID Grant 2013/08293-7. LDM would also like to thank the support of the High Performance Computing Center at UFRN (NPAD/UFRN).

Supplementary Material

H/H2 adsorbed on 2D-porphyrin-M

Figure 7: Equilibrium distance for an H atom/H2 molecule adsorbed on 2D-porphyrin-M for all transition metals investigated in this manuscript.

Density of states of the 2D metallic porphyrins

Figure 8: Total density of states for an H atom adsorbed on the 2D metallic porphyrins. (Total magnetic moments per panel: Sc 0.0, Ti 0.0, V 1.6, Cr 1.7, Mn 2.0, Fe 1.0, Co 0.0, Ni 0.3, Cu 0.0, Zn 1.0 µB.)
Figure 9: Total density of states for an H2 molecule adsorbed on the 2D metallic porphyrins. (Total magnetic moments per panel: Sc 0.0, Ti 0.0, V 2.6, Cr 2.7, Mn 3.1, Fe 0.2, Co 1.0, Ni 0.0, Cu 0.1, Zn 0.2 µB.)

References

(1) Tromer, R. M.; Freitas, A.; Felix, I. M.; Mortazavi, B.; Machado, L.; Azevedo, S.; Pereira, L. F. C. Electronic, optical and thermoelectric properties of boron-doped nitrogenated holey graphene. Phys. Chem. Chem. Phys. 2020, 22, 21147–21157.
(2) Kınacı, A.; Haskins, J. B.; Sevik, C.; Çağın, T. Thermal conductivity of BN-C nanostructures. Phys. Rev. B 2012, 86, 115410.
(3) Felix, I. M.; Pereira, L. F. C. Thermal conductivity of graphene-hBN superlattice ribbons. Sci. Rep. 2018, 8, 1–10.
(4) Felix, I. M.; Pereira, L. F. C. Suppression of coherent thermal transport in quasiperiodic graphene-hBN superlattice ribbons. Carbon 2020, 160, 335–341.
(5) Felix, I. M.; Pereira, L. F. C. Thermal conductivity of Thue–Morse and double-period quasiperiodic graphene-hBN superlattices. Int. J. Heat Mass Transf. 2022, 186, 122464.
(6) Hirohata, A.; Yamada, K.; Nakatani, Y.; Prejbeanu, I.-L.; Diény, B.; Pirro, P.; Hillebrands, B. Review on spintronics: Principles and device applications. J. Magn. Magn. Mater. 2020, 509, 166711.
(7) Novoselov, K. S.; Geim, A. K.; Morozov, S. V.; Jiang, D.; Zhang, Y.; Dubonos, S. V.; Grigorieva, I. V.; Firsov, A. A. Electric field effect in atomically thin carbon films. Science 2004, 306, 666–669.
(8) Novoselov, K. S.; Jiang, D.; Schedin, F.; Booth, T.; Khotkevich, V.; Morozov, S.; Geim, A. K. Two-dimensional atomic crystals. PNAS 2005, 102, 10451–10453.
(9) Wang, H.; Zhao, Y.; Xie, Y.; Ma, X.; Zhang, X. Recent progress in synthesis of two-dimensional hexagonal boron nitride. J. Semicond. 2017, 38, 031003.
(10) Li, X.; Zhu, H.
Two-dimensional MoS2: Properties, preparation, and applications. J. Materiomics 2015, 1, 33–44.
(11) Wang, H.; Huang, X.; Lin, J.; Cui, J.; Chen, Y.; Zhu, C.; Liu, F.; Zeng, Q.; Zhou, J.; Yu, P., et al. High-quality monolayer superconductor NbSe2 grown by chemical vapour deposition. Nat. Commun. 2017, 8, 1–8.
(12) Shivayogimath, A.; Thomsen, J. D.; Mackenzie, D. M.; Geisler, M.; Stan, R.-M.; Holt, A. J.; Bianchi, M.; Crovetto, A.; Whelan, P. R.; Carvalho, A., et al. A universal approach for the synthesis of two-dimensional binary compounds. Nat. Commun. 2019, 10, 1–7.
(13) Quellmalz, A.; Wang, X.; Sawallich, S.; Uzlu, B.; Otto, M.; Wagner, S.; Wang, Z.; Prechtl, M.; Hartwig, O.; Luo, S., et al. Large-area integration of two-dimensional materials and their heterostructures by wafer bonding. Nat. Commun. 2021, 12, 1–11.
(14) Irimia-Vladu, M.; Głowacki, E. D.; Voss, G.; Bauer, S.; Sariciftci, N. S. Green and biodegradable electronics. Mater. Today 2012, 15, 340–346.
(15) Neupane, G. P.; Ma, W.; Yildirim, T.; Tang, Y.; Zhang, L.; Lu, Y. 2D organic semiconductors, the future of green nanotechnology. Nano Mater. Sci. 2019, 1, 246–259.
(16) Felseghi, R.-A.; Carcadea, E.; Raboaca, M. S.; Trufin, C. N.; Filote, C. Hydrogen fuel cell technology for the sustainable future of stationary applications. Energies 2019, 12, 4593.
(17) Singla, M. K.; Nijhawan, P.; Oberoi, A. S. Hydrogen fuel and fuel cell technology for a cleaner future: a review. Environ. Sci. Pollut. Res. 2021, 1–20.
(18) Shinde, D. B.; Aiyappa, H. B.; Bhadra, M.; Biswal, B. P.; Wadge, P.; Kandambeth, S.; Garai, B.; Kundu, T.; Kurungot, S.; Banerjee, R. A mechanochemically synthesized covalent organic framework as a proton-conducting solid electrolyte. J. Mater. Chem. A 2016, 4, 2682–2690.
(19) Lohse, M. S.; Bein, T. Covalent organic frameworks: structures, synthesis, and applications. Adv. Funct. Mater. 2018, 28, 1705553.
(20) Furukawa, H.; Cordova, K.
E.; O’Keeffe, M.; Yaghi, O. M. The chemistry and applica- +tions of metal-organic frameworks. Science 2013, 341. +(21) Ahmed, A.; Seth, S.; Purewal, J.; Wong-Foy, A. G.; Veenstra, M.; Matzger, A. J.; +Siegel, D. J. Exceptional hydrogen storage achieved by screening nearly half a million +metal-organic frameworks. Nat. Commun. 2019, 10, 1–9. +(22) Li, H.; Wang, K.; Sun, Y.; Lollar, C. T.; Li, J.; Zhou, H.-C. Recent advances in gas +storage and separation using metal–organic frameworks. Mater. Today 2018, 21, 108– +121. +(23) Wang, H.; Zhu, Q.-L.; Zou, R.; Xu, Q. Metal-organic frameworks for energy applica- +tions. Chem 2017, 2, 52–80. +(24) Rosi, N. L.; Eckert, J.; Eddaoudi, M.; Vodak, D. T.; Kim, J.; O’Keeffe, M.; Yaghi, O. M. +Hydrogen storage in microporous metal-organic frameworks. Science 2003, 300, 1127– +1129. +(25) Yan, Q.-Q.; Wu, D.-X.; Chu, S.-Q.; Chen, Z.-Q.; Lin, Y.; Chen, M.-X.; Zhang, J.; +Wu, X.-J.; Liang, H.-W. Reversing the charge transfer between platinum and sulfur- +doped carbon support for electrocatalytic hydrogen evolution. Nat. Commun. 2019, +10, 1–9. +(26) Wang, X.; Zhang, X.; Zhou, W.; Liu, L.; Ye, J.; Wang, D. An ultrathin porphyrin-based +metal-organic framework for efficient photocatalytic hydrogen evolution under visible +light. Nano Energy 2019, 62, 250–258. +(27) Leng, F.; Liu, H.; Ding, M.; Lin, Q.-P.; Jiang, H.-L. Boosting photocatalytic hydrogen +20 + +production of porphyrinic MOFs: the metal location in metalloporphyrin matters. ACS +Catal. 2018, 8, 4583–4590. +(28) Aziz, A.; Ruiz-Salvador, A. R.; Hern´andez, N. C.; Calero, S.; Hamad, S.; Grau- +Crespo, R. Porphyrin-based metal-organic frameworks for solar fuel synthesis photo- +catalysis: band gap tuning via iron substitutions. J. Mater. Chem. A 2017, 5, 11894– +11904. +(29) Zhu, B.; Zou, R.; Xu, Q. Metal–organic framework based catalysts for hydrogen evolu- +tion. Adv. Energy Mater. 2018, 8, 1801193. +(30) Singh, H. K.; Kumar, P.; Waghmare, U. V. 
Theoretical prediction of a stable 2D crystal +of vanadium porphyrin: A half-metallic ferromagnet. J. Phys. Chem. C 2015, 119, +25657–25662. +(31) Luo, G.; Wang, Y.; Li, Y. Two-dimensional iron-porphyrin sheet as a promising catalyst +for oxygen reduction reaction: a computational study. Sci. Bull. 2017, 62, 1337–1343. +(32) Giannozzi, P.; Baroni, S.; Bonini, N.; Calandra, M.; Car, R.; Cavazzoni, C.; Ceresoli, D.; +Chiarotti, G. L.; Cococcioni, M.; Dabo, I., et al. QUANTUM ESPRESSO: a modu- +lar and open-source software project for quantum simulations of materials. J. Phys. +Condens. Matter 2009, 21, 395502. +(33) Troullier, N.; Martins, J. L. Efficient pseudopotentials for plane-wave calculations. Phys. +Rev. B 1991, 43, 1993. +(34) Bl¨ochl, P. E. Projector augmented-wave method. Phys. Rev. B 1994, 50, 17953. +(35) Dion, M.; Rydberg, H.; Schr¨oder, E.; Langreth, D. C.; Lundqvist, B. I. Van der Waals +density functional for general geometries. Phys. Rev. Lett. 2004, 92, 246401. +(36) Thonhauser, T.; Cooper, V. R.; Li, S.; Puzder, A.; Hyldgaard, P.; Langreth, D. C. Van +21 + +der Waals density functional: Self-consistent potential and the nature of the van der +Waals bond. Phys. Rev. B 2007, 76, 125112. +(37) Rom´an-P´erez, G.; Soler, J. M. Efficient implementation of a van der Waals density +functional: application to double-wall carbon nanotubes. Phys. Rev. Lett. 2009, 103, +096102. +(38) Tromer, R. M.; da Luz, M. G.; Ferreira, M. S.; Pereira, L. F. C. Atomic adsorption on +nitrogenated holey graphene. J. Phys. Chem. C 2017, 121, 3055–3061. +(39) Qin, B.; Ma, H.; Hossain, M.; Zhong, M.; Xia, Q.; Li, B.; Duan, X. Substrates in the +Synthesis of Two-Dimensional Materials via Chemical Vapor Deposition. Chem. Mater. +2020, +(40) Han, S.; Moore, R. A.; Viola, R. E. Synthesis and Evaluation of Alternative Substrates +for Arginasease. Bioorg. Chem. 2002, 30, 81–94. +(41) Neto, J. S.; Zeni, G. 
[Figure: adsorption energies Ead for Sc–Ni under 0–9% strain along X, and spin-resolved DOS panels for Sc–Mn with magnetic moments μ = 0.0–3.1 μB.]
This figure "supercell_new.png" is available in "png" format from: http://arxiv.org/ps/2301.11466v1

diff --git a/SNAzT4oBgHgl3EQfJPtx/content/tmp_files/2301.01076v1.pdf.txt b/SNAzT4oBgHgl3EQfJPtx/content/tmp_files/2301.01076v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..e82424c30c1711947e36e0eecac9b00849aaa831
--- /dev/null
+++ b/SNAzT4oBgHgl3EQfJPtx/content/tmp_files/2301.01076v1.pdf.txt
@@ -0,0 +1,1636 @@
arXiv:2301.01076v1 [math.ST] 3 Jan 2023

LEAST PRODUCT RELATIVE ERROR ESTIMATION FOR FUNCTIONAL MULTIPLICATIVE MODEL AND OPTIMAL SUBSAMPLING

Qian Yan, Hanyu Li∗
Chongqing University

Abstract: In this paper, we study the functional linear multiplicative model based on the least product relative error criterion.
Under some regularity conditions, we establish the consistency and asymptotic normality of the estimator. Further, we investigate the optimal subsampling for this model with massive data. Both the consistency and the asymptotic distribution of the subsampling estimator are first derived. Then, we obtain the optimal subsampling probabilities based on the A-optimality criterion. Moreover, useful alternative subsampling probabilities that avoid computing the inverse of the Hessian matrix are also proposed, which are easier to implement in practice. Finally, numerical studies and real data analysis are done to evaluate the performance of the proposed approaches.

∗Corresponding author: Hanyu Li, College of Mathematics and Statistics, Chongqing University, Chongqing, 401331, P.R. China. E-mail: lihy.hy@gmail.com or hyli@cqu.edu.cn.

Key words and phrases: Asymptotic normality, functional multiplicative model, least product relative error, massive data, optimal subsampling

1. Introduction

In the era of big data, data can be collected and recorded on a dense sample of observations in time and space. These observations are of a functional nature and typically take the form of curves and images. Functional data analysis has been shown to perform remarkably well with such datasets. Functional regression models with scalar response have been extensively studied, and the most popular one is the functional linear model.

We consider a scalar-on-function linear multiplicative model

y = exp( ∫_0^1 X(t)β(t) dt ) ε,   (1.1)

where the covariate X(t) and the slope β(t) are smooth and square integrable functions defined on [0, 1], y is the scalar response variable, and ε is the random error. Moreover, both y and ε are strictly positive. By taking the logarithmic transformation, the model (1.1) becomes the regular functional linear model.
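To make the data-generating mechanism concrete, here is a minimal NumPy sketch of model (1.1) on a discrete grid. This is our illustration, not the paper's code: the grid, the covariate construction, and the error law (case R1 from Section 4) are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 101)                       # grid on [0, 1]
dt = t[1] - t[0]
beta = 7 * t**3 + 2 * np.sin(4 * np.pi * t + 0.2)    # slope function used later in Section 4

# smooth random covariate curves (illustrative: random cubics in t)
coef = rng.normal(size=(5, 4))
X = coef @ np.vstack([t**k for k in range(4)])

lin = (X * beta).sum(axis=1) * dt                    # Riemann sum for \int_0^1 x_i(t) beta(t) dt
eps = np.exp(rng.normal(size=5))                     # log(eps) ~ N(0, 1), so eps > 0
y = np.exp(lin) * eps                                # multiplicative model (1.1)

# taking logs gives the ordinary functional linear model with error log(eps)
print(np.allclose(np.log(y), lin + np.log(eps)))     # True
```

The last line is exactly the log-transform equivalence mentioned above: log y is linear in the functional covariate, with additive error log ε.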
However, in comparison, the multiplicative model is more useful and flexible for handling positive responses such as incomes, stock prices and survival times.

As we know, to estimate the slope, absolute errors are the most popular choices for designing loss functions, such as least squares (LS) and least absolute deviation (LAD). However, in practical applications, loss functions based on relative errors may be more effective and suitable. There are two types of relative errors: relative to the target value y, and relative to the prediction of y. Chen et al. (2010) summed the two relative errors and proposed the least absolute relative error (LARE) criterion for the scalar linear multiplicative model. However, the LARE criterion is non-smooth, which makes computing it somewhat complicated. Later, by multiplying the two relative errors, Chen et al. (2016) improved it and presented the least product relative error (LPRE) criterion. The LPRE criterion is infinitely differentiable and strictly convex, resulting in a simple and unique estimator. Moreover, they also proved that the LPRE estimation is more efficient than the LARE, LAD, and LS estimations under certain conditions. As a result, this criterion has also been widely used in other scalar multiplicative models (Chen and Liu (2021); Chen, Liu and Ma (2022); Ming, Liu and Yang (2022)).

For functional multiplicative models, to the best of our knowledge, there are only a few works, and all of them focus on the LARE criterion. For example, Zhang, Zhang and Li (2016) extended the LARE criterion to the functional model for the first time. They developed the functional quadratic multiplicative model and derived the asymptotic properties of the estimator. Later, Zhang et al. (2019) and Fan, Zhang and Wu (2022) considered the variable selection for partially and locally sparse functional linear multiplicative models based on the LARE criterion, respectively.
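For a single observation, the two criteria and the reason the product version is smooth can be checked directly. The following is our own sketch, not code from the cited papers:

```python
import numpy as np

def lare(y, yhat):
    # LARE (Chen et al., 2010): SUM of the two relative errors -- non-smooth at y = yhat
    return np.abs(y - yhat) / y + np.abs(y - yhat) / yhat

def lpre(y, yhat):
    # LPRE (Chen et al., 2016): PRODUCT of the two relative errors
    return (np.abs(y - yhat) / y) * (np.abs(y - yhat) / yhat)

y, yhat = 2.0, 1.5
# the absolute values cancel in the product: LPRE = y/yhat + yhat/y - 2,
# which is infinitely differentiable and convex in log(yhat)
print(np.isclose(lpre(y, yhat), y / yhat + yhat / y - 2))   # True
```

The closed form y/ŷ + ŷ/y − 2 is exactly the summand that appears in the criterion (2.3) below, which is why the LPRE estimator admits a simple Newton-Raphson computation while LARE does not.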
It seems that no study on the LPRE criterion has been conducted for functional data. To fill this gap, we propose the LPRE criterion for the functional linear multiplicative model and derive the consistency and asymptotic normality of the resulting estimator.

Considering that traditional techniques are no longer usable for massive data due to the limitation of computational resources, several researchers have devoted themselves to developing efficient or optimal subsampling strategies for statistical models with massive data. For example, for the linear model, Ma, Mahoney and Yu (2015) studied the biases and variances of the algorithmic leveraging estimator. Ma et al. (2020) further provided the asymptotic distributions of the RandNLA subsampling estimators (the probabilities of this kind of sampling are closely related to leverage values, which are typically used to devise randomized algorithms in numerical linear algebra). For logistic regression, Wang, Zhu and Ma (2018) proposed an optimal subsampling method based on some optimality criteria (Atkinson, Donev and Tobias (2007)). Subsequently, Wang (2019) proposed a more efficient estimation method and Poisson subsampling to improve the estimation and computation efficiency. Later, Yao and Wang (2019), Yu, Wang and Ai (2020) and Ai et al. (2021b) extended the optimal subsampling method to softmax regression, quasi-likelihood and generalized linear models, respectively. Furthermore, considering the effect of heavy-tailed errors or outliers in responses, some scholars have investigated more robust models. For example, Wang and Ma (2021), Ai et al. (2021a), Fan, Liu and Zhu (2021), and Shao, Song and Zhou (2022) employed the optimal subsampling method in ordinary quantile regression, and Shao and Wang (2021) and Yuan et al. (2022) developed subsampling for composite quantile regression. Very recently, Ren, Zhao and Wang (2022) considered the optimal subsampling strategy based on the LARE criterion for the linear multiplicative model. They derived the asymptotic distribution of the subsampling estimator and proved that LARE outperforms LS and LAD under the optimal subsampling strategy. Wang and Zhang (2022) further extended optimal subsampling to the linear multiplicative model based on the LPRE criterion.

For functional regression models, only a little work has been done so far in the area of subsampling (Liu, You and Cao (2021); He and Yan (2022); Yan, Li and Niu (2022)). Specifically, He and Yan (2022) proposed a functional principal subspace sampling probability for functional linear regression with scalar response, which eliminates the impact of the eigenvalues inside the functional principal subspace and properly weights the residuals. Liu, You and Cao (2021) and Yan, Li and Niu (2022) extended the optimal subsampling method to functional generalized linear models and functional quantile regression with scalar response, respectively.

Inspired by the above works, we further study the optimal subsampling for the functional linear multiplicative model based on the LPRE criterion, and first establish the consistency and asymptotic normality of the subsampling estimator. Then, the optimal subsampling probabilities are obtained by minimizing the asymptotic integrated mean squared error (IMSE) under the A-optimality criterion. In addition, a useful alternative minimization criterion is also proposed to further reduce the computational cost.

The rest of this paper is organized as follows. Section 2 introduces the functional linear multiplicative model based on the LPRE criterion and investigates the asymptotic properties of the estimator. In Section 3, we present the asymptotic properties of the subsampling estimator and the optimal subsampling probabilities.
The modified version of these probabilities is also considered in this section. Sections 4 and 5 illustrate our methodology through numerical simulations and real data, respectively.

2. LPRE estimation

2.1 Estimation

Suppose that {(x_i(t), y_i), i = 1, 2, ..., n} are independent and identically distributed samples from the model (1.1). The functional LPRE estimator for the model (1.1), say β̂(t), is defined by

arg inf_β Σ_{i=1}^n { |(y_i − exp(∫_0^1 x_i(t)β(t)dt)) / y_i| × |(y_i − exp(∫_0^1 x_i(t)β(t)dt)) / exp(∫_0^1 x_i(t)β(t)dt)| },

which is equivalent to

arg inf_β Σ_{i=1}^n { y_i exp(−∫_0^1 x_i(t)β(t)dt) + y_i^{−1} exp(∫_0^1 x_i(t)β(t)dt) − 2 }.

We aim to estimate the slope function β(t) via a penalized spline method. Define K equispaced interior knots 0 = t_0 < t_1 < ... < t_K < t_{K+1} = 1, and let B(t) = (B_1(t), B_2(t), ..., B_{K+p+1}(t))^T be the set of normalized B-spline basis functions of degree p on each sub-interval [t_j, t_{j+1}], j = 0, 1, ..., K, that are p − 1 times continuously differentiable on [0, 1]. Details of B-spline functions can be found in de Boor (2001). Our functional LPRE estimator β̂(t) of β(t) is thus defined as

β̂(t) = Σ_{j=1}^{K+p+1} θ̂_j B_j(t) = B^T(t) θ̂_full,

where θ̂_full minimizes the penalized functional LPRE loss function

L(θ; λ, K) = Σ_{i=1}^n { y_i exp(−∫_0^1 x_i(t)B^T(t)θ dt) + y_i^{−1} exp(∫_0^1 x_i(t)B^T(t)θ dt) − 2 } + (λ/2) ∫_0^1 {(B^{(q)}(t))^T θ}^2 dt,   (2.2)

where λ > 0 is the smoothing parameter and B^{(q)}(t) collects the q-th order derivatives of all the B-spline functions for some integer q ≤ p. For convenience, letting B_i = ∫_0^1 x_i(t)B(t)dt and D_q = ∫_0^1 B^{(q)}(t){B^{(q)}(t)}^T dt, the loss function (2.2) can be rewritten as

L(θ; λ, K) = Σ_{i=1}^n { ω_i(θ) + ω_i(θ)^{−1} − 2 } + (λ/2) θ^T D_q θ,   (2.3)

where ω_i(θ) = y_i exp(−B_i^T θ). Of note, the loss (2.3) is infinitely differentiable and strictly convex. Since there is no general closed-form solution for the functional LPRE estimator, the Newton-Raphson method is used. That is, the estimator θ̂_full can be obtained by iterating the following formula until θ̂_{t+1} converges:

θ̂_{t+1} = θ̂_t − [ Σ_{i=1}^n { ω_i(θ̂_t) + ω_i(θ̂_t)^{−1} } B_i B_i^T + λ D_q ]^{−1} [ Σ_{i=1}^n { −ω_i(θ̂_t) + ω_i(θ̂_t)^{−1} } B_i + λ D_q θ̂_t ].

Note that the computational complexity of calculating θ̂_full is about O(ζ n (K + p + 1)^2), where ζ is the number of iterations until convergence. As we can see, this computational cost is expensive when the full data size n is very large. To deal with this issue, we propose a subsampling algorithm in Section 3 to reduce the computational cost.

2.2 Theoretical properties of β̂(t)

We will show the consistency and asymptotic normality of β̂(t). For simplicity, the following notations are given first. For a function f(t) belonging to a Banach space, ||f||_m = (∫_0^1 |f(t)|^m dt)^{1/m} for 0 < m < ∞. For a matrix A = (a_ij), ||A||_∞ = max_ij |a_ij|. In addition, define H = E{BB^T(ε + ε^{−1})} + (λ/n) D_q, G = E{BB^T(ε − ε^{−1})^2}, and

Ĝ = (1/n) Σ_{i=1}^n { −ω_i(θ̂_full) + ω_i(θ̂_full)^{−1} }^2 B_i B_i^T,
Ĥ = (1/n) Σ_{i=1}^n { ω_i(θ̂_full) + ω_i(θ̂_full)^{−1} } B_i B_i^T + (λ/n) D_q.   (2.4)

Furthermore, we assume the following regularity conditions hold.

(H.1): For the functional covariate X(t), assume that E(||X||_8^8) < ∞.

(H.2): Assume the unknown functional coefficient β(t) is sufficiently smooth; that is, β(t) has a d′-th derivative β^(d′)(t) such that

|β^(d′)(t) − β^(d′)(s)| ≤ C_2 |t − s|^v,   t, s ∈ [0, 1],

where the constant C_2 > 0 and v ∈ [0, 1]. In what follows, we set d = d′ + v ≥ p + 1.

(H.3): E{(ε − ε^{−1}) | X} = 0.

(H.4): E{(ε + ε^{−1})^6 | X} < ∞.

(H.5): Assume the smoothing parameter λ satisfies λ = o(n^{1/2} K^{1/2−2q}) with q ≤ p.

(H.6): Assume the number of knots satisfies K = o(n^{1/2}) and K / n^{1/(2d+1)} → ∞ as n → ∞.

Remark 1. Assumptions (H.1) and (H.2) are quite usual in the functional setting (see, e.g., Cardot, Ferraty and Sarda (2003); Claeskens, Krivobokova and Opsomer (2009)). Assumption (H.3) is an identifiability condition for the LPRE estimation of β(t). Assumptions (H.3) and (H.4) ensure the consistency and asymptotic normality of the LPRE estimator. Assumptions (H.5) and (H.6) are mainly used to obtain the asymptotic unbiasedness of the LPRE estimator.

Now, we present the consistency and asymptotic normality of β̂(t).

Theorem 1. Under Assumptions (H.1)–(H.6), for t ∈ [0, 1], as n → ∞, we have

(1): (Consistency) There exists an LPRE estimator β̂(t) such that ||β̂ − β||_2 = O_P(n^{−1/2} K^{1/2});

(2): (Asymptotic normality) {B(t)^T V_full B(t)}^{−1/2} (n/K)^{1/2} (β̂(t) − β(t)) → N(0, 1) in distribution, where V_full = K^{−1} H^{−1} G H^{−1}, which is consistently estimated by K^{−1} Ĥ^{−1} Ĝ Ĥ^{−1} with Ĝ and Ĥ defined in (2.4).

3. Optimal subsampling

3.1 Subsampling estimator and its theoretical properties

We first introduce a general random subsampling algorithm for the functional linear multiplicative model, in which the subsamples are drawn at random with replacement according to some sampling distribution.

1. Sampling. Given a larger K, generate B_i = ∫_0^1 x_i(t)B(t)dt, so that the new data are {(B_i, y_i), i = 1, 2, ..., n}. Assign the subsampling probabilities {π_i}_{i=1}^n to all data points and draw a random subsample of size r (≪ n) with replacement from the new data according to {π_i}_{i=1}^n. Denote the subsample by {(B_i, y_i, R_i), i = 1, 2, ..., n}, where R_i is the number of times that the i-th data point is selected from the full data, so that Σ_{i=1}^n R_i = r.

2. Estimation.
By Theorem 2, we have the asymptotic IMSE of ˜β(t) +as follows +IMSE(˜β(t) − ˆβ(t)) = K +r +� 1 +0 +BT(t)V B(t)dt. +(3.8) +Note that V defined in (3.7) is the asymptotic variance-covariance ma- +trix of +� +r/K(˜θ − ˆθfull) and the integral inequality +� 1 +0 BT(t)V B(t)dt ≤ + +3.2 +Optimal subsampling probabilities +� 1 +0 BT(t)V ′B(t)dt holds if and only if V +≤ V ′ holds in the L¨owner- +ordering sense. Thus, we focus on minimizing the asymptotic MSE of ˜θ +and choose the subsampling probabilities such that tr(V ) is minimized. +This is called the A-optimality criterion in optimal experimental designs; +see e.g., Atkinson, Donev and Tobias (2007). Using this criterion, we are +able to derive the optimal subsampling probabilities provided in the follow- +ing theorem. +Theorem 3 (A-optimality). If the subsampling probabilities πi, i = 1, 2, . . . , n, +are chosen as +πF Aopt +i += +|−yi exp(−BT +i ˆθfull) + y−1 +i +exp(BT +i ˆθfull)|∥ ˆH +−1Bi∥2 +�n +i=1|−yi exp(−BT +i ˆθfull) + y−1 +i +exp(BT +i ˆθfull)|∥ ˆH +−1Bi∥2 +, (3.9) +then the total asymptotic MSE of +� +r/K(˜θ − ˆθfull), tr(V ), attains its min- +imum, and so does the asymptotic IMSE of ˜β(t). +However, from (2.4), we have that ˆH requires the chosen of smoothing +parameter λ, and the calculation of ∥ ˆH +−1Bi∥2 costs O(n(K+p+1)2), which +is expensive. These weaknesses make these optimal subsampling probabil- +ities not suitable for practical use. So, it is necessary to find alternative +probabilities without ˆH to reduce the computational complexity. +Note that, as observed in (3.7), only V π involves πi in the asymptotic +variance-covariance matrix V . Thus, from the L¨owner-ordering, we can + +3.2 +Optimal subsampling probabilities +only focus on V π and minimize its trace, which can be interpreted as min- +imizing the asymptotic MSE of +� +r/K ˆH(˜θ − ˆθfull) due to its asymptotic +unbiasedness. 
This is called the L-optimality criterion in optimal experi- +mental designs (Atkinson, Donev and Tobias (2007)). Therefore, to reduce +the computing time, we consider the modified optimal criterion: minimizing +tr(V π). +Theorem 4 (L-optimality). If the subsampling probabilities πi, i = 1, 2, . . . , n, +are chosen as +πF Lopt +i += +|−yi exp(−BT +i ˆθfull) + y−1 +i +exp(BT +i ˆθfull)|∥Bi∥2 +�n +i=1|−yi exp(−BT +i ˆθfull) + y−1 +i +exp(BT +i ˆθfull)|∥Bi∥2 +, +(3.10) +then tr(V π) attains its minimum. +From (3.10), it is seen that the functional L-optimal subsampling prob- +abilities πF Lopt +i +requires O(n(K + p + 1)) flops to compute, which is much +cheaper than computing πF Aopt +i +as K increases. +Consider that the subsampling probabilities (3.10) depend on ˆθfull, +which is the full data estimation to be estimated, so an exact probability +distribution is not applicable directly. Next, we consider an approximate +one and propose a two-step algorithm. +1. Step 1: Draw a small subsample of size r0 to obtain a pilot esti- +mator ˜θpilot by running the general subsampling algorithm with the + +3.3 +Tuning parameters selection +uniform sampling probabilities π0 +i = 1/n and λ = 0. Replace ˆθfull +with ˜θpilot in (3.10) to derive the approximation of the optimal sub- +sampling probabilities. +2. Step 2: Draw a subsample of size r by using the approximate optimal +probabilities from Step 1. Given λ, obtain the estimate ˘θ(λ) with +the subsample by using (3.6), and the λ is determined by minimizing +BIC(λ) discussed below based on the corresponding subsample. Once +the optimal λ is determined, we can get the estimator ˘β(t) = BT(t)˘θ. +3.3 +Tuning parameters selection +For the degree p and the order of derivation q, we empirically choose B- +splines of degree 3 and a second-order penalty. 
The number of knots K is +not a crucial parameter because smoothing is controlled by the roughness +penalty parameter λ (see e.g., Ruppert (2002); Cardot, Ferraty and Sarda +(2003)). For the parameter λ, we choose the BIC criterion to determine it: +BIC(λ) = log(RSS) + log(n) +n +df, +where RSS = 1/n �n +i=1{ωi(ˆθfull) + ωi(ˆθfull)−1 − 2}, and df denotes the +effective degrees of freedom, i.e., the number of non-zero parameter esti- +mates. However, using full data to select the optimal λ is computationally +expensive, we approximate it by BIC under the optimal subsample data. + +4. +Simulation studies +In this section, we aim to study the finite sample performance of the pro- +posed methods by using synthetic data. +4.1 +LPRE performance +In this experiment, we shall compare the performance of the functional least +square (FLS), functional least absolute deviation (FLAD) and functional +least product relative error (FLPRE). The FLS and FLAD estimates are +defined as minimizing �n +i=1[log(yi) − +� 1 +0 xi(t)β(t)dt]2 and �n +i=1|log(yi) − +� 1 +0 xi(t)β(t)dt|, respectively. The functional covariates in the model (1.1) +are identically and independently generated as: xi(t) = � aijBj(t), i = +1, 2, . . . , n, where Bj(t) are cubic B-spline basis functions that are sampled +at 100 equally spaced points between 0 and 1. We consider the following +two different distributions for the basis coefficient A = (aij): +• C1. Multivariate normal distribution N(0, Σ), where Σij = 0.5|i−j|; +• C2. Multivariate t distribution with 5 degrees of freedom, t5(0, Σ/10). +The slope function β(t) = 7t3 + 2 sin(4πt + 0.2) and the random errors, ǫi, +are generated in four cases: +• R1. log(ǫ) ∼ N(0, 1); + +4.1 +LPRE performance +• R2. log(ǫ) ∼ U(−2, 2); +• R3. ǫ has the distribution with the density function f(x) = c exp(−x− +x−1 − log(x) + 2)I(x > 0) and c is a normalization constant; +• R4. ǫ ∼ U(0.5, b) with b being chosen such that E(ǫ) = E(1/ǫ). 
+In the specific simulation, we first take n = 100, 500, 1000 for train- +ing, and then n = 300, 1500, 3000 for testing, and let the number of knots +K = 10. Based on 500 replications, we use the root IMSE to evaluate the +qualities of estimates and assess the performances of prediction on test data +by the root predicted square error (RPSE), respectively. They are defined +as follows: +IMSE = +1 +500 +500 +� +k=1 +�� 1 +0 +� +ˆβ(k)(t) − β(t) +�2 +dt +�1/2 +, +and +RPSE = +1 +500 +500 +� +k=1 +� +1 +n +n +� +i=1 +�� 1 +0 +xi(t)β(t)dt − +� 1 +0 +xi(t)ˆβ(k)(t)dt +�2�1/2 +, +where ˜β(k)(t) is the estimator from the k-th run. +The simulation results are presented in Tables 1 and 2, which show +that FLPRE performs considerably better than FLS and LAD in all cases +except the one C2-R1. In case C2-R1, FLPRE always outperforms FLAD, +while the gap between LPRE and FLS gradually decreases as the sample + +4.2 +Subsampling performance +size increases, and LPRE slightly outperforms FLS when the sample size +reaches 1000. In addition, the IMSE and RPSE of all estimators decrease +as the sample size is increasing, which implies that the performance of all +estimators becomes better when the sample size enlarges. +4.2 +Subsampling performance +In this experiment, we first take n = 105 for training, and then m = 1000 +for testing to compare the performance of the functional L-optimal sub- +sampling (FLopt) method with the uniform subsampling (Unif) method. +The simulated data distributions are the same as those in Subsection 4.1, +to which we add a case about the basis coefficient. +• C3. A mixture of two multivariate normal distributions 0.5N(1, Σ)+ +0.5N(−1, Σ). +In addition, from Assumption (H.6), we let the number of knots K = ⌈n1/4⌉. +For fair comparison, we use the same basis functions and the same +smoothing parameters in all cases as those for full data. 
The root IMSE and +RPSE of the subsampling estimators corresponding to various subsampling +sizes of 2000,5000,8000,10000,15000 with r0 = 1000 are computed, where + +4.2 +Subsampling performance +Table 1: IMSE of each estimator. +Dist +Method +R1 +R2 +R3 +R4 +n = 100/300 +C1 +FLPRE +1.4958 +1.4588 +1.2258 +1.0766 +FLS +1.5023 +1.6294 +1.3061 +1.1707 +FLAD +1.6797 +2.0704 +1.4343 +1.2370 +C2 +FLPRE +2.7053 +2.5396 +1.9177 +1.4743 +FLS +2.4989 +2.7184 +1.9261 +1.5298 +FLAD +2.8300 +3.8908 +2.2250 +1.7391 +n = 500/1500 +C1 +FLPRE +0.7550 +0.7023 +0.6086 +0.5447 +FLS +0.8890 +0.9575 +0.7952 +0.7321 +FLAD +1.1018 +1.3702 +0.9422 +0.7867 +C2 +FLPRE +1.5923 +1.4627 +1.1942 +1.0242 +FLS +1.5522 +1.6664 +1.2809 +1.1278 +FLAD +1.7679 +2.2696 +1.4302 +1.2366 +n = 1000/3000 +C1 +FLPRE +0.5508 +0.4930 +0.3879 +0.3255 +FLS +0.6213 +0.6600 +0.5286 +0.4834 +FLAD +0.8532 +1.0904 +0.6803 +0.5346 +C2 +FLPRE +1.2123 +1.0902 +0.9203 +0.7943 +FLS +1.2552 +1.3307 +1.0612 +0.9450 +FLAD +1.4475 +1.8175 +1.2096 +1.0333 + +4.2 +Subsampling performance +Table 2: RPSE of each prediction. 
+Dist +Method +R1 +R2 +R3 +R4 +n = 100/300 +C1 +FLPRE +0.2488 +0.2416 +0.2022 +0.1770 +FLS +0.2516 +0.2722 +0.2170 +0.1940 +FLAD +0.2828 +0.3486 +0.2394 +0.2054 +C2 +FLPRE +0.1869 +0.1749 +0.1323 +0.1015 +FLS +0.1731 +0.1882 +0.1331 +0.1053 +FLAD +0.1967 +0.2696 +0.1536 +0.1201 +n = 500/1500 +C1 +FLPRE +0.1240 +0.1151 +0.0983 +0.0872 +FLS +0.1455 +0.1569 +0.1288 +0.1181 +FLAD +0.1814 +0.2276 +0.1537 +0.1274 +C2 +FLPRE +0.1078 +0.0989 +0.0807 +0.0686 +FLS +0.1055 +0.1135 +0.0872 +0.0761 +FLAD +0.1208 +0.1559 +0.0978 +0.0837 +n = 1000/3000 +C1 +FLPRE +0.0902 +0.0805 +0.0625 +0.0514 +FLS +0.1009 +0.1083 +0.0848 +0.0767 +FLAD +0.1390 +0.1792 +0.1098 +0.0855 +C2 +FLPRE +0.0819 +0.0734 +0.0613 +0.0525 +FLS +0.0847 +0.0899 +0.0710 +0.0630 +FLAD +0.0985 +0.1243 +0.0815 +0.0691 + +4.2 +Subsampling performance +the definitions of root IMSE and RPSE are as follows +IMSE = +1 +1000 +1000 +� +k=1 +�� 1 +0 +� +˜β(k)(t) − ˆβ(t) +�2 +dt +�1/2 +, +RPSE = +1 +1000 +1000 +� +k=1 +� +1 +m +m +� +i=1 +�� 1 +0 +xi(t)˜β(k)(t)dt − +� 1 +0 +xi(t)ˆβ(t)dt +�2�1/2 +. +(4.11) +Based on 1000 replications, all results are shown in Figures 1 and 2 by +logarithmic transformation. +From Figure 1, it is clear to see that the FLopt subsampling method +always has smaller IMSE compared with the Unif subsampling method for +all cases, which is in agreement with the theoretical results. That is, the +former can minimize the asymptotic IMSE of the subsampling estimator. In +particular, the FLopt method performs much better when the errors obey +case R1. From Figure 2, same as the case on IMSE, we can see that the +RPSE of the FLopt subsampling estimator is always better than that of the +Unif subsampling estimator for all cases. Furthermore, Figures 1 and 2 also +show that the FLopt method depends on both types of random errors and +covariates, and the effect of errors is greater than that of covariates. 
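The root IMSE and RPSE in (4.11) reduce to simple grid computations once the estimates are evaluated on a common grid. A minimal sketch of ours (the grid, the perturbed "estimate", and the test curves are all illustrative):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 101)
dt = t[1] - t[0]

def root_imse(beta_est, beta_ref):
    # one replication's term in (4.11): ( \int_0^1 (beta_est - beta_ref)^2 dt )^{1/2}
    return np.sqrt(((beta_est - beta_ref) ** 2).sum() * dt)

def rpse(X, beta_est, beta_ref):
    # root predicted square error over test curves X (rows = curves evaluated on t)
    diff = (X * (beta_est - beta_ref)).sum(axis=1) * dt   # \int x_i (beta_est - beta_ref) dt
    return np.sqrt((diff ** 2).mean())

beta_ref = 7 * t**3 + 2 * np.sin(4 * np.pi * t + 0.2)
beta_est = beta_ref + 0.05 * np.sin(2 * np.pi * t)        # a slightly perturbed estimate
X = np.random.default_rng(0).normal(size=(200, t.size))

print(root_imse(beta_ref, beta_ref) == 0.0)               # True
print(root_imse(beta_est, beta_ref))                      # small positive value
```

Averaging `root_imse` and `rpse` over the replications gives the quantities plotted in Figures 1 and 2.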
Besides, +as expected, the estimation and prediction efficiency of the subsampling +estimators is getting better as the subsample size increases. +To evaluate the computational efficiency of the subsampling methods, + +4.2 +Subsampling performance +−0.8 +−0.6 +−0.4 +−0.2 +4000 +8000 +12000 +r +IMSE +Method +FLopt +Unif +C1−R1 +−0.5 +−0.4 +−0.3 +−0.2 +−0.1 +0.0 +0.1 +4000 +8000 +12000 +r +IMSE +Method +FLopt +Unif +C2−R1 +−0.8 +−0.6 +−0.4 +−0.2 +4000 +8000 +12000 +r +IMSE +Method +FLopt +Unif +C3−R1 +−0.8 +−0.7 +−0.6 +−0.5 +−0.4 +−0.3 +4000 +8000 +12000 +r +IMSE +Method +FLopt +Unif +C1−R2 +−0.4 +−0.3 +−0.2 +−0.1 +0.0 +4000 +8000 +12000 +r +IMSE +Method +FLopt +Unif +C2−R2 +−0.9 +−0.8 +−0.7 +−0.6 +−0.5 +4000 +8000 +12000 +r +IMSE +Method +FLopt +Unif +C3−R2 +−0.9 +−0.8 +−0.7 +−0.6 +−0.5 +4000 +8000 +12000 +r +IMSE +Method +FLopt +Unif +C1−R3 +−0.6 +−0.5 +−0.4 +−0.3 +−0.2 +−0.1 +4000 +8000 +12000 +r +IMSE +Method +FLopt +Unif +C2−R3 +−0.9 +−0.8 +−0.7 +−0.6 +−0.5 +4000 +8000 +12000 +r +IMSE +Method +FLopt +Unif +C3−R3 +−1.1 +−1.0 +−0.9 +−0.8 +4000 +8000 +12000 +r +IMSE +Method +FLopt +Unif +C1−R4 +−0.8 +−0.7 +−0.6 +−0.5 +−0.4 +4000 +8000 +12000 +r +IMSE +Method +FLopt +Unif +C2−R4 +−1.4 +−1.3 +−1.2 +−1.1 +−1.0 +4000 +8000 +12000 +r +IMSE +Method +FLopt +Unif +C3−R4 +Figure 1: IMSE for different subsampling sizes r and fixed first step sub- +sampling size r0 = 1000 with different distributions when n = 105. 
[Figure 2 here: twelve panels of RPSE (log scale) versus r for the FLopt and Unif methods, under the distributions C1-C3 crossed with the error cases R1-R4.]

Figure 2: RPSE for different subsampling sizes r and fixed first-step subsampling size r0 = 1000 with different distributions when n = 10^5.

Table 3: CPU seconds for different subsampling sizes r with n = 10^6, r0 = 200, K = 50 and a fixed λ. The times are the mean times calculated from 100 implementations of each method.

Method   r=1000   r=2000   r=3000   r=4000   r=5000
FLopt    0.6036   0.6101   0.6230   0.6298   0.6393
Unif     0.0142   0.0250   0.0355   0.0455   0.0554
Full data CPU seconds: 11.8155

we record the computing time of each method used in the case C1-R1 on a PC with an Intel i5 processor and 8GB of memory using R, where the time required to generate the data is not included. We set n = 10^6, r0 = 200 and enlarge the number of knots of the spline function to K = 50. Each subsampling strategy is evaluated 100 times.
The results for different r with a fixed λ for the FLopt and Unif subsampling methods are given in Table 3. It is clear that subsampling can significantly improve the computational efficiency compared with using the full data, and that the FLopt method is more expensive than the Unif method, as expected.

5. Real data analysis

5.1 Tecator data

The Tecator data set is available in the fda.usc package and contains 215 meat samples. For each sample, the data consist of a 100-channel spectrum of absorbance and the contents of fat, water and protein, measured in percent. The 100-channel spectrum, measured over the wavelength range 850-1050 nm, provides dense measurements spaced 2 nm apart that can be regarded as functional data. Figure 3 shows 50 randomly selected curves of the spectrum of absorbance and the histogram of the protein content. In this experiment, we study the protein content with our proposed FLPRE.

[Figure 3 here: absorbance versus wavelength (850-1050 nm) for the spectrometric curves, and a histogram of protein content.]

Figure 3: Left subfigure: A random subset of 50 spectrometric curves. Right subfigure: Histogram of the content of protein.

Table 4: MAPE and MPPE for Tecator data.

Criterion   FLS      FLAD     FLPRE
MAPE        3.5836   3.7452   3.5420
MPPE        0.0745   0.0739   0.0727

We employ the first 160 observations to fit the model, and then use the remaining samples to evaluate the prediction efficiency by the mean absolute prediction error (MAPE) and the mean product relative prediction error (MPPE). The two criteria are defined by

\mathrm{MAPE} = \frac{1}{55}\sum_{i=161}^{215}|y_i-\hat{y}_i|, \qquad \mathrm{MPPE} = \frac{1}{55}\sum_{i=161}^{215}\frac{(y_i-\hat{y}_i)^2}{y_i\hat{y}_i},

where \hat{y}_i = \exp\left(\int_0^1 x_i(t)\hat{\beta}(t)dt\right). All results are presented in Table 4, which illustrates that the proposed FLPRE outperforms FLS and FLAD for predicting the protein content.
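The two prediction criteria above are straightforward to evaluate once out-of-sample predictions are in hand; a minimal sketch (the function name is ours, not the paper's):

```python
import numpy as np

def mape_mppe(y_true, y_pred):
    """Mean absolute prediction error and mean product relative
    prediction error for positive responses and predictions."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mape = np.mean(np.abs(y_true - y_pred))
    mppe = np.mean((y_true - y_pred) ** 2 / (y_true * y_pred))
    return mape, mppe
```

Note that the MPPE denominator y_i * ŷ_i requires both the observed and predicted responses to be strictly positive, which holds under the multiplicative model.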
5.2 Beijing multi-site air-quality data

This data set is available at https://archive-beta.ics.uci.edu/ml/datasets/beijing+multi+site+air+quality+data and consists of hourly air pollutant data from 12 nationally controlled air-quality monitoring sites in Beijing from March 1, 2013 to February 28, 2017. Our primary interest here is to predict the maximum of the daily PM10 concentrations (µg/m3) using the PM10 trajectory (24 hours) of the last day.

Table 5: MAPE and MPPE for the air-quality data.

Criterion   FLS      FLAD     FLPRE
MAPE        7.9902   8.7580   7.6727
MPPE        0.5031   0.5223   0.4970

We delete all missing values and obtain a sample of 15573 days' complete records. We take the top 80% of the sample as the training set and the rest as the test set. The raw observations, after a square-root transformation, are first converted into functional data using 15 Fourier basis functions. This transformation can be implemented with the Data2fd function in the fda package, as suggested in Sang and Cao (2020). A random subset of 100 curves of 24-hourly PM10 concentrations is presented in the left panel of Figure 4, where the time scale has been transformed to [0, 1]. The right panel of Figure 4 depicts the histogram of the maximal values of intraday PM10 concentrations.

We assess the prediction performance by the MAPE and MPPE criteria. Table 5 illustrates that the proposed FLPRE outperforms FLS and FLAD for predicting the PM10 concentrations. Further, we calculate the IMSE and RPSE using (4.11) and compare the FLopt method with the Unif method.

[Figure 4 here: square root of PM10 versus time for the 24-hourly curves, and a histogram of the maximal intraday sqrt(PM10).]

Figure 4: Left subfigure: A random subset of 100 curves of 24-hourly PM10 concentrations. Right subfigure: Histogram of the maximal values of intraday PM10 concentrations.
Figure 5 shows the results for different subsampling sizes r = 1000, 1500, 2000, 2500, 3000 with r0 = 1000. We find that the FLopt method always has smaller IMSE and RPSE than the Unif method. Besides, both the IMSE and the RPSE gradually decrease as the subsampling size r increases, showing the estimation consistency of the subsampling methods and a better approximation to the results based on the full data.

[Figure 5 here: IMSE and RPSE versus r for the FLopt and Unif methods.]

Figure 5: IMSE and RPSE for different subsampling sizes r with r0 = 1000 for 1000 repetitions.

Supplementary Materials

All technical proofs are included in the online Supplementary Material.

Acknowledgements

This work was supported by the National Natural Science Foundation of China (No. 11671060) and the Natural Science Foundation Project of CQ CSTC (No. cstc2019jcyj-msxmX0267).

References

Ai M., Wang F., Yu J. and Zhang H. (2021a). Optimal subsampling for large-scale quantile regression. Journal of Complexity 62, 101512.

Ai M., Yu J., Zhang H. and Wang H. (2021b). Optimal subsampling algorithms for big data regression. Statistica Sinica 31, 749-772.

Atkinson A., Donev A. N. and Tobias R. D. (2007). Optimum Experimental Designs, with SAS. Oxford University Press, New York.

Cardot H., Ferraty F. and Sarda P. (2003). Spline estimators for the functional linear model. Statistica Sinica 13, 571-591.

Chen K., Guo S., Lin Y. and Ying Z. (2010). Least absolute relative error estimation. Journal of the American Statistical Association 105, 1104-1112.

Chen K., Lin Y., Wang Z. and Ying Z. (2016). Least product relative error estimation. Journal of Multivariate Analysis 144, 91-98.

Chen Y. and Liu H. (2021). A new relative error estimation for partially linear multiplicative model.
Communications in Statistics-Simulation and Computation, 1-19.

Chen Y., Liu H. and Ma J. (2022). Local least product relative error estimation for single-index varying-coefficient multiplicative model with positive responses. Journal of Computational and Applied Mathematics 415, 114478.

Claeskens G., Krivobokova T. and Opsomer J. D. (2009). Asymptotic properties of penalized spline estimators. Biometrika 96, 529-544.

de Boor C. (2001). A Practical Guide to Splines. Springer-Verlag, Berlin.

Fan R., Zhang S. and Wu Y. (2022). Penalized relative error estimation of functional multiplicative regression models with locally sparse properties. Journal of the Korean Statistical Society 51, 666-691.

Fan Y., Liu Y. and Zhu L. (2021). Optimal subsampling for linear quantile regression models. Canadian Journal of Statistics 49, 1039-1057.

He S. and Yan X. (2022). Functional principal subspace sampling for large scale functional data analysis. Electronic Journal of Statistics 16, 2621-2682.

Liu H., You J. and Cao J. (2021). Functional L-optimality subsampling for massive data. arXiv preprint arXiv:2104.03446.

Ma P., Mahoney M. W. and Yu B. (2015). A statistical perspective on algorithmic leveraging. Journal of Machine Learning Research 16, 861-911.

Ma P., Zhang X., Xing X., Ma J. and Mahoney M. W. (2020). Asymptotic analysis of sampling estimators for randomized numerical linear algebra algorithms. In Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, 1026-1035.

Ming H., Liu H. and Yang H. (2022). Least product relative error estimation for identification in multiplicative additive models. Journal of Computational and Applied Mathematics 404, 113886.

Ren M., Zhao S. and Wang M. (2022). Optimal subsampling for least absolute relative error estimators with massive data. Journal of Complexity 74, 101694.

Ruppert D. (2002). Selecting the number of knots for penalized splines.
Journal of Computational and Graphical Statistics 11, 735-757.

Sang P. and Cao J. (2020). Functional single-index quantile regression models. Statistics and Computing 30, 771-781.

Shao L., Song S. and Zhou Y. (2022). Optimal subsampling for large-sample quantile regression with massive data. Canadian Journal of Statistics. https://doi.org/10.1002/cjs.11697

Shao Y. and Wang L. (2021). Optimal subsampling for composite quantile regression model in massive data. Statistical Papers 63, 1139-1161.

Wang H. (2019). More efficient estimation for logistic regression with optimal subsamples. Journal of Machine Learning Research 20, 1-59.

Wang H. and Ma Y. (2021). Optimal subsampling for quantile regression in big data. Biometrika 108, 99-112.

Wang H., Zhu R. and Ma P. (2018). Optimal subsampling for large sample logistic regression. Journal of the American Statistical Association 113, 829-844.

Wang T. and Zhang H. (2022). Optimal subsampling for multiplicative regression with massive data. Statistica Neerlandica 76, 418-449.

Yao Y. and Wang H. (2019). Optimal subsampling for softmax regression. Statistical Papers 60, 585-599.

Yan Q., Li H. and Niu C. (2022). Optimal subsampling for functional quantile regression. Statistical Papers. https://doi.org/10.1007/s00362-022-01367-z

Yu J., Wang H., Ai M. and Zhang H. (2020). Optimal distributed subsampling for maximum quasi-likelihood estimators with massive data. Journal of the American Statistical Association 117, 265-276.

Yuan X., Li Y., Dong X. and Liu T. (2022). Optimal subsampling for composite quantile regression in big data. Statistical Papers 63, 1649-1676.

Zhang T., Huang Y., Zhang Q., Ma S. and Ahmed S. E. (2019). Penalized relative error estimation of a partially functional linear multiplicative model. In Ahmed S. E., Carvalho F. and Puntanen S. (Eds.), Matrices, Statistics and Big Data, Springer, Cham.

Zhang T., Zhang Q. and Li N. (2016).
Least absolute relative error estimation for functional quadratic multiplicative model. Communications in Statistics-Theory and Methods 45, 5802-5817.

Qian Yan
College of Mathematics and Statistics, Chongqing University, Chongqing 401331, China.
E-mail: qianyan@cqu.edu.cn
Hanyu Li
College of Mathematics and Statistics, Chongqing University, Chongqing 401331, China.
E-mail: lihy.hy@gmail.com or hyli@cqu.edu.cn

diff --git a/SNAzT4oBgHgl3EQfJPtx/content/tmp_files/load_file.txt b/SNAzT4oBgHgl3EQfJPtx/content/tmp_files/load_file.txt
new file mode 100644
index 0000000000000000000000000000000000000000..47c4854256001bc94c9b40e092af5bd7d6eed417
--- /dev/null
+++ b/SNAzT4oBgHgl3EQfJPtx/content/tmp_files/load_file.txt
@@ -0,0 +1,903 @@
arXiv:2301.01076v1 [math.ST] 3 Jan 2023

LEAST PRODUCT RELATIVE ERROR ESTIMATION FOR FUNCTIONAL MULTIPLICATIVE MODEL AND OPTIMAL SUBSAMPLING

Qian Yan, Hanyu Li∗
Chongqing University

Abstract: In this paper, we study the functional linear multiplicative model based on the least product relative error criterion. Under some regularization conditions, we establish the consistency and asymptotic normality of the estimator. Further, we investigate the optimal subsampling for this model with massive data.
Both the consistency and the asymptotic distribution of the subsampling estimator are first derived. Then, we obtain the optimal subsampling probabilities based on the A-optimality criterion. Moreover, useful alternative subsampling probabilities that avoid computing the inverse of the Hessian matrix are also proposed, which are easier to implement in practice. Finally, numerical studies and real data analysis are done to evaluate the performance of the proposed approaches.

∗Corresponding author: Hanyu Li, College of Mathematics and Statistics, Chongqing University, Chongqing, 401331, P.R. China. E-mail: lihy.hy@gmail.com or hyli@cqu.edu.cn.

Key words and phrases: Asymptotic normality, functional multiplicative model, least product relative error, massive data, optimal subsampling

1. Introduction

In the era of big data, data can be collected and recorded on a dense sample of observations in time and space. These observations are of a functional nature and typically take the form of curves and images. Functional data analysis has been shown to perform remarkably well on such datasets. Functional regression models with scalar response have been extensively studied, and the most popular one is the functional linear model. We consider a scalar-on-function linear multiplicative model

y = \exp\left(\int_0^1 X(t)\beta(t)dt\right)\epsilon,   (1.1)

where the covariate X(t) and the slope β(t) are smooth and square integrable functions defined on [0, 1], y is the scalar response variable, and ε is the random error. Moreover, both y and ε are strictly positive. By taking the logarithmic transformation, the model (1.1) becomes the regular functional linear model. In comparison, however, the multiplicative model is more useful and flexible for handling positive responses such as incomes, stock prices and survival times.

As we know, to estimate the slope, absolute errors are the most popular choices for designing loss functions, such as the least squares (LS) and the least absolute deviation (LAD). However, in practical applications, loss functions based on relative errors may be more effective and suitable. There are two types of relative errors: relative to the target value y, and relative to the prediction of y.
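As a toy illustration of model (1.1) and the two kinds of relative errors, one can simulate positive responses on a grid and compare a prediction against them. The covariate curves, slope function, and all names below are our own choices for illustration, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
t = np.linspace(0.0, 1.0, 201)
dt = t[1] - t[0]

beta = np.sin(2 * np.pi * t)                  # toy slope function β(t)
X = rng.normal(size=(500, t.size))            # toy covariate curves X(t)
lin = (X * beta).sum(axis=1) * dt             # ∫ X(t)β(t) dt via Riemann sum
eps = np.exp(rng.normal(0.0, 0.1, size=500))  # strictly positive error ε
y = np.exp(lin) * eps                         # responses from model (1.1)

y_hat = np.exp(lin)                           # prediction using the true slope
rel_to_y = np.abs(y - y_hat) / y              # error relative to the target y
rel_to_pred = np.abs(y - y_hat) / y_hat       # error relative to the prediction
```

Both relative errors are well defined precisely because y and its prediction are strictly positive under the multiplicative structure.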
Chen et al. (2010) summed the two relative errors and proposed the least absolute relative error (LARE) criterion for the scalar linear multiplicative model. However, the LARE criterion is non-smooth, which makes its computation somewhat complicated. Later, by multiplying the two relative errors, Chen et al. (2016) improved it and presented the least product relative error (LPRE) criterion. The LPRE criterion is infinitely differentiable and strictly convex, resulting in a simple and unique estimator. Moreover, they also proved that the LPRE estimation is more effective than the LARE, LAD, and LS estimations under certain conditions. As a result, this criterion has also been widely used in other scalar multiplicative models (Chen and Liu (2021); Chen, Liu and Ma (2022); Ming, Liu and Yang (2022)).

For functional multiplicative models, to the best of our knowledge, there are only a few works, and all of them focus on the LARE criterion. For example, Zhang, Zhang and Li (2016) extended the LARE criterion to the functional setting for the first time. They developed the functional quadratic multiplicative model and derived the asymptotic properties of the estimator. Later, Zhang et al. (2019) and Fan, Zhang and Wu (2022) considered variable selection for partially and locally sparse functional linear multiplicative models based on the LARE criterion, respectively. It seems that no study on the LPRE criterion has been conducted for functional data. To fill the gap, we propose the LPRE criterion for the functional linear multiplicative model, and derive the consistency and asymptotic normality of the estimator.

Considering that traditional techniques are no longer usable for massive data due to the limitation of computational resources, several researchers have devoted themselves to developing efficient or optimal subsampling strategies for statistical models with massive data. For example, for the linear model, Ma, Mahoney and Yu (2015) studied the biases and variances of the algorithmic leveraging estimator. Ma et al. (2020) further provided the asymptotic distributions of the RandNLA subsampling∗ estimators. For logistic regression, Wang, Zhu and Ma (2018) proposed an optimal subsampling method based on some optimality criteria (Atkinson, Donev and Tobias (2007)).
∗The probabilities of this kind of sampling have a close relationship with leverage values, which are typically used to devise randomized algorithms in numerical linear algebra.

Subsequently, Wang (2019) proposed a more efficient estimation method and Poisson subsampling to improve the estimation and computation efficiency. Later, Yao and Wang (2019), Yu, Wang and Ai (2020) and Ai et al. (2021b) extended the optimal subsampling method to softmax regression, quasi-likelihood and generalized linear models, respectively. Furthermore, considering the effect of heavy-tailed errors or outliers in responses, some scholars have investigated more robust models. For example, Wang and Ma (2021), Ai et al. (2021a), Fan, Liu and Zhu (2021), and Shao, Song and Zhou (2022) employed the optimal subsampling method in ordinary quantile regression, and Shao and Wang (2021) and Yuan et al. (2022) developed subsampling for composite quantile regression. Very recently, Ren, Zhao and Wang (2022) considered the optimal subsampling strategy based on the LARE criterion in the linear multiplicative model. They derived the asymptotic distribution of the subsampling estimator and proved that LARE outperforms LS and LAD under the optimal subsampling strategy. Wang and Zhang (2022) further extended the optimal subsampling to the linear multiplicative model based on the LPRE criterion.

For functional regression models, only a little work has so far been done in the area of subsampling (Liu, You and Cao (2021); He and Yan (2022); Yan, Li and Niu (2022)). Specifically, He and Yan (2022) proposed a functional principal subspace sampling probability for functional linear regression with scalar response, which eliminates the impact of the eigenvalues inside the functional principal subspace and properly weights the residuals. Liu, You and Cao (2021) and Yan, Li and Niu (2022) extended the optimal subsampling method to functional generalized linear models and functional quantile regression with scalar response, respectively. Inspired by the above works, we further study the optimal subsampling for the functional linear multiplicative model based on the LPRE criterion, and first establish the consistency and asymptotic normality of the subsampling estimator. Then, the optimal subsampling probabilities are obtained by minimizing the asymptotic integrated mean squared error (IMSE) under the A-optimality criterion.
Section 2 introduces the functional linear multiplicative model based on the LPRE criterion and investigates the asymptotic properties of the estimator. In Section 3, we present the asymptotic properties of the subsampling estimator and the optimal subsampling probabilities. The modified version of these probabilities is also considered in that section. Sections 4 and 5 illustrate our methodology through numerical simulations and real data, respectively.

2. LPRE estimation

2.1 Estimation

Suppose that $\{(x_i(t), y_i), i = 1, 2, \ldots, n\}$ are independent and identically distributed samples from the model (1.1). The functional LPRE estimator for the model (1.1), say $\hat{\beta}(t)$, is established by
$$\arg\inf_{\beta} \sum_{i=1}^{n} \left\{ \left| \frac{y_i - \exp\left(\int_0^1 x_i(t)\beta(t)\,dt\right)}{y_i} \right| \times \left| \frac{y_i - \exp\left(\int_0^1 x_i(t)\beta(t)\,dt\right)}{\exp\left(\int_0^1 x_i(t)\beta(t)\,dt\right)} \right| \right\},$$
which is equivalent to
$$\arg\inf_{\beta} \sum_{i=1}^{n} \left\{ y_i \exp\left(-\int_0^1 x_i(t)\beta(t)\,dt\right) + y_i^{-1} \exp\left(\int_0^1 x_i(t)\beta(t)\,dt\right) - 2 \right\}.$$

We aim to estimate the slope function $\beta(t)$ via a penalized spline method. Define $K$ equispaced interior knots $0 = t_0 < t_1 < \cdots < t_K < t_{K+1} = 1$. Let $B(t) = (B_1(t), B_2(t), \ldots, B_{K+p+1}(t))^T$ be the set of normalized B-spline basis functions of degree $p$ on each sub-interval $[t_j, t_{j+1}]$, $j = 0, 1, \ldots, K$, which are $p-1$ times continuously differentiable on $[0, 1]$. Details of the B-spline functions can be found in de Boor (2001). Our functional LPRE estimator $\hat{\beta}(t)$ of $\beta(t)$ is thus defined as
$$\hat{\beta}(t) = \sum_{j=1}^{K+p+1} \hat{\theta}_j B_j(t) = B^T(t)\hat{\theta}_{full},$$
where $\hat{\theta}_{full}$ minimizes the penalized functional LPRE loss function
$$L(\theta; \lambda, K) = \sum_{i=1}^{n} \left\{ y_i \exp\left(-\int_0^1 x_i(t)B^T(t)\theta\,dt\right) + y_i^{-1} \exp\left(\int_0^1 x_i(t)B^T(t)\theta\,dt\right) - 2 \right\} + \frac{\lambda}{2} \int_0^1 \left\{ \left(B^{(q)}(t)\right)^T \theta \right\}^2 dt, \quad (2.2)$$
where $\lambda > 0$ is the smoothing parameter and $B^{(q)}(t)$ denotes the vector of $q$-th order derivatives of the B-spline basis functions for some integer $q \leq p$. For convenience, let $B_i = \int_0^1 x_i(t)B(t)\,dt$ and $D_q = \int_0^1 B^{(q)}(t)\{B^{(q)}(t)\}^T dt$; the loss function (2.2) can thus be rewritten as
$$L(\theta; \lambda, K) = \sum_{i=1}^{n} \left\{ \omega_i(\theta) + \omega_i(\theta)^{-1} - 2 \right\} + \frac{\lambda}{2}\theta^T D_q \theta, \quad (2.3)$$
where $\omega_i(\theta) = y_i \exp(-B_i^T\theta)$. Of note, the loss (2.3) is infinitely differentiable and strictly convex. Since there is no general closed-form solution to the functional LPRE estimator, the Newton-Raphson method will be used. That is, the estimator $\hat{\theta}_{full}$ can be obtained by iteratively applying the following formula until $\hat{\theta}_{t+1}$ converges.
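To make the objects above concrete, here is a minimal numerical sketch of the penalized loss (2.3), assuming the basis integrals $B_i$ have already been computed and stacked as rows of a matrix `B` (the names `B`, `y`, `Dq`, `lam` are illustrative, not from the paper):

```python
import numpy as np

def lpre_loss(theta, B, y, Dq, lam):
    """Penalized LPRE loss (2.3): sum_i {w_i + 1/w_i - 2} + (lam/2) theta' Dq theta."""
    w = y * np.exp(-B @ theta)            # w_i(theta) = y_i * exp(-B_i^T theta)
    return np.sum(w + 1.0 / w - 2.0) + 0.5 * lam * theta @ Dq @ theta
```

Since $w + w^{-1} \geq 2$ for $w > 0$, the unpenalized part of the loss is nonnegative and vanishes exactly when every $\omega_i(\theta) = 1$.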
$$\hat{\theta}_{t+1} = \hat{\theta}_t - \left[\sum_{i=1}^{n}\left\{\omega_i(\hat{\theta}_t) + \omega_i(\hat{\theta}_t)^{-1}\right\}B_iB_i^T + \lambda D_q\right]^{-1} \left[\sum_{i=1}^{n}\left\{-\omega_i(\hat{\theta}_t) + \omega_i(\hat{\theta}_t)^{-1}\right\}B_i + \lambda D_q\hat{\theta}_t\right].$$
Note that the computational complexity of calculating $\hat{\theta}_{full}$ is about $O(\zeta n(K+p+1)^2)$, where $\zeta$ is the number of iterations until convergence. As we can see, the computational cost is expensive when the full data size $n$ is very large. To deal with this issue, we will propose a subsampling algorithm to reduce the computational cost in Section 3.

2.2 Theoretical properties of $\hat{\beta}(t)$

We will show the consistency and asymptotic normality of $\hat{\beta}(t)$. For simplicity, we first introduce the following notation. For a function $f(t)$ belonging to a Banach space, $\|f\|_m = (\int_0^1 |f(t)|^m dt)^{1/m}$ for $0 < m < \infty$.
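The update above is an ordinary Newton-Raphson iteration on the strictly convex loss (2.3). A runnable sketch, again assuming a precomputed basis-integral matrix `B` (hypothetical names, not code from the paper):

```python
import numpy as np

def newton_step(theta, B, y, Dq, lam):
    """One Newton-Raphson update on the penalized LPRE loss (2.3)."""
    w = y * np.exp(-B @ theta)                              # w_i(theta)
    grad = B.T @ (-w + 1.0 / w) + lam * (Dq @ theta)        # gradient of (2.3)
    hess = B.T @ ((w + 1.0 / w)[:, None] * B) + lam * Dq    # Hessian of (2.3)
    return theta - np.linalg.solve(hess, grad)

def fit_lpre(B, y, Dq, lam, tol=1e-10, max_iter=200):
    """Iterate until successive estimates are close enough, as in the text."""
    theta = np.zeros(B.shape[1])
    for _ in range(max_iter):
        theta_new = newton_step(theta, B, y, Dq, lam)
        if np.max(np.abs(theta_new - theta)) < tol:
            return theta_new
        theta = theta_new
    return theta
```

Each step costs $O(n(K+p+1)^2)$ for the Hessian, matching the overall $O(\zeta n(K+p+1)^2)$ complexity noted above.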
For a matrix $A = (a_{ij})$, $\|A\|_\infty = \max_{ij} |a_{ij}|$. In addition, define $H = E\{BB^T(\epsilon + \epsilon^{-1})\} + (\lambda/n)D_q$, $G = E\{BB^T(\epsilon - \epsilon^{-1})^2\}$, and
$$\hat{G} = \frac{1}{n}\sum_{i=1}^{n}\left\{-\omega_i(\hat{\theta}_{full}) + \omega_i(\hat{\theta}_{full})^{-1}\right\}^2 B_iB_i^T, \qquad \hat{H} = \frac{1}{n}\sum_{i=1}^{n}\left\{\omega_i(\hat{\theta}_{full}) + \omega_i(\hat{\theta}_{full})^{-1}\right\} B_iB_i^T + (\lambda/n)D_q. \quad (2.4)$$
Furthermore, we assume the following regularity conditions hold.

(H.1): For the functional covariate $X(t)$, assume that $E(\|X\|_8^8) < \infty$.

(H.2): Assume the unknown functional coefficient $\beta(t)$ is sufficiently smooth. That is, $\beta(t)$ has a $d'$-th derivative $\beta^{(d')}(t)$ such that $|\beta^{(d')}(t) - \beta^{(d')}(s)| \leq C_2|t - s|^v$, $t, s \in [0, 1]$, where the constant $C_2 > 0$ and $v \in [0, 1]$. In what follows, we set $d = d' + v \geq p + 1$.
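The plug-in matrices in (2.4) are simple weighted Gram matrices of the basis integrals. A sketch under the same assumed setup as above (precomputed rows $B_i$ in a matrix `B`):

```python
import numpy as np

def plug_in_G_H(theta_full, B, y, Dq, lam):
    """Plug-in estimators G-hat and H-hat of (2.4), evaluated at theta_full."""
    n = len(y)
    w = y * np.exp(-B @ theta_full)
    G_hat = B.T @ (((-w + 1.0 / w) ** 2)[:, None] * B) / n
    H_hat = B.T @ ((w + 1.0 / w)[:, None] * B) / n + (lam / n) * Dq
    return G_hat, H_hat
```

Both matrices are symmetric by construction, and $\hat{H}$ is positive definite whenever the rows of `B` span the coefficient space, since the weights $\omega_i + \omega_i^{-1} \geq 2$ are strictly positive.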
(H.3): $E\{(\epsilon - \epsilon^{-1}) \mid X\} = 0$.

(H.4): $E\{(\epsilon + \epsilon^{-1})^6 \mid X\} < \infty$.

(H.5): Assume the smoothing parameter $\lambda$ satisfies $\lambda = o(n^{1/2}K^{1/2-2q})$ with $q \leq p$.

(H.6): Assume the number of knots $K = o(n^{1/2})$ and $K/n^{1/(2d+1)} \to \infty$ as $n \to \infty$.

Remark 1. Assumptions (H.1) and (H.2) are quite usual in the functional setting (see, e.g., Cardot, Ferraty and Sarda (2003); Claeskens, Krivobokova and Opsomer (2009)). Assumption (H.3) is an identifiability condition for the LPRE estimation of $\beta(t)$. Assumptions (H.3) and (H.4) ensure the consistency and asymptotic normality of the LPRE estimator. Assumptions (H.5) and (H.6) are mainly used to obtain the asymptotic unbiasedness of the LPRE estimator.

Now, we present the consistency and asymptotic normality of $\hat{\beta}(t)$.

Theorem 1. Under Assumptions (H.1)–(H.6), for $t \in [0, 1]$, as $n \to \infty$, we have

(1): (Consistency) There exists a LPRE estimator $\hat{\beta}(t)$ such that $\|\hat{\beta} - \beta\|_2 = O_P(n^{-1/2}K^{1/2})$;

(2): (Asymptotic normality) $\{B(t)^T V_{full} B(t)\}^{-1/2}\sqrt{n/K}\,(\hat{\beta}(t) - \beta(t)) \to N(0, 1)$ in distribution, where $V_{full} = K^{-1}H^{-1}GH^{-1}$, which is consistently estimated by $K^{-1}\hat{H}^{-1}\hat{G}\hat{H}^{-1}$ defined in (2.4).

3. Optimal subsampling
3.1 Subsampling estimator and its theoretical properties

We first introduce a general random subsampling algorithm for the functional linear multiplicative model, in which the subsamples are taken at random with replacement based on some sampling distribution.

1. Sampling. Given a larger $K$, we generate $B_i = \int_0^1 x_i(t)B(t)\,dt$, so that the new data are $\{(B_i, y_i), i = 1, 2, \ldots, n\}$. Assign the subsampling probabilities $\{\pi_i\}_{i=1}^n$ to all data points and draw a random subsample of size $r\,(\ll n)$ with replacement based on $\{\pi_i\}_{i=1}^n$ from the new data. Denote the subsample as $\{(B_i, y_i, R_i), i = 1, 2, \ldots, n\}$, where $R_i$ denotes the total number of times that the $i$-th data point is selected from the full data, and $\sum_{i=1}^n R_i = r$.

2. Estimation. Given $\lambda$, minimize the following loss function to get the estimate $\tilde{\theta}$ based on the subsample:
$$L^*(\theta; \lambda, K) = \frac{1}{r}\sum_{i=1}^{n}\frac{R_i}{\pi_i}\left\{\omega_i(\theta) + \omega_i(\theta)^{-1} - 2\right\} + \frac{\lambda}{2}\theta^T D_q\theta. \quad (3.5)$$
Due to the convexity of $L^*(\theta; \lambda, K)$, the Newton-Raphson method is adopted until $\tilde{\theta}_{t+1}$ and $\tilde{\theta}_t$ are close enough:
$$\tilde{\theta}_{t+1} = \tilde{\theta}_t - \left[\sum_{i=1}^{n}\frac{R_i}{\pi_i}\left\{\omega_i(\tilde{\theta}_t) + \omega_i(\tilde{\theta}_t)^{-1}\right\}B_iB_i^T + \lambda D_q\right]^{-1} \left[\sum_{i=1}^{n}\frac{R_i}{\pi_i}\left\{-\omega_i(\tilde{\theta}_t) + \omega_i(\tilde{\theta}_t)^{-1}\right\}B_i + \lambda D_q\tilde{\theta}_t\right]. \quad (3.6)$$
Finally, we can get the subsample estimator $\tilde{\beta}(t) = B^T(t)\tilde{\theta}$.

The inverse probability weighting in (3.5) guarantees that the loss function is unbiased, which is needed because the subsampling probabilities $\pi_i$ may depend on the full data $\mathcal{F}_n = \{(x_i(t), y_i), i = 1, 2, \ldots, n, t \in [0, 1]\}$. Below we establish the consistency and asymptotic normality of $\tilde{\beta}(t)$ towards $\hat{\beta}(t)$. An extra condition is needed.

(H.7): Assume that $\max_{1\leq i\leq n} r(n\pi_i)^{-1} = O_P(1)$ and $r = o(K^2)$.
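The sampling and weighted-estimation steps can be sketched as follows, assuming as before a precomputed matrix `B` of basis integrals and a NumPy random generator (illustrative names only):

```python
import numpy as np

def subsample_fit(B, y, Dq, lam, probs, r, rng, max_iter=200, tol=1e-10):
    """Draw r points with replacement under probs, then run the
    inverse-probability-weighted Newton-Raphson iteration, as in (3.6)."""
    n = len(y)
    idx = rng.choice(n, size=r, replace=True, p=probs)
    R = np.bincount(idx, minlength=n)        # R_i: selection counts, sum(R) = r
    wts = np.zeros(n)
    drawn = R > 0
    wts[drawn] = R[drawn] / probs[drawn]     # inverse-probability weights R_i / pi_i
    theta = np.zeros(B.shape[1])
    for _ in range(max_iter):
        w = y * np.exp(-B @ theta)
        grad = B.T @ (wts * (-w + 1.0 / w)) + lam * (Dq @ theta)
        hess = B.T @ ((wts * (w + 1.0 / w))[:, None] * B) + lam * Dq
        step = np.linalg.solve(hess, grad)
        theta = theta - step
        if np.max(np.abs(step)) < tol:
            break
    return theta
```

Only the at most $r$ distinct drawn points carry nonzero weight, so each iteration costs $O(r(K+p+1)^2)$ rather than $O(n(K+p+1)^2)$ in a sparse implementation.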
Remark 2. Assumption (H.7) is often used in inverse probability weighted algorithms to restrict the weights such that the loss function is not excessively inflated by data points with extremely small subsampling probabilities (Ai et al. (2021b); Liu, You and Cao (2021); Yan, Li and Niu (2022)).

Theorem 2. Under Assumptions (H.1)–(H.7), for $t \in [0, 1]$, as $r, n \to \infty$, conditionally on $\mathcal{F}_n$ in probability, we have

(1): (Consistency) There exists a subsampling estimator $\tilde{\beta}(t)$ such that $\|\tilde{\beta} - \hat{\beta}\|_2 = O_{P|\mathcal{F}_n}(r^{-1/2}K^{1/2})$;

(2): (Asymptotic normality) $\{B(t)^T V B(t)\}^{-1/2}\sqrt{r/K}\,(\tilde{\beta}(t) - \hat{\beta}(t)) \to N(0, 1)$ in distribution, where
$$V = \frac{1}{K}\hat{H}^{-1}V_\pi\hat{H}^{-1}, \qquad V_\pi = \frac{1}{n^2}\sum_{i=1}^{n}\frac{1}{\pi_i}\left\{-\omega_i(\hat{\theta}_{full}) + \omega_i(\hat{\theta}_{full})^{-1}\right\}^2 B_iB_i^T. \quad (3.7)$$

3.2 Optimal subsampling probabilities

To better approximate $\hat{\beta}(t)$, it is important to choose proper subsampling probabilities. A commonly used criterion is to minimize the asymptotic IMSE of $\tilde{\beta}(t)$. By Theorem 2, the asymptotic IMSE of $\tilde{\beta}(t)$ is
$$\mathrm{IMSE}(\tilde{\beta}(t) - \hat{\beta}(t)) = \frac{K}{r}\int_0^1 B^T(t)VB(t)\,dt. \quad (3.8)$$
Note that $V$ defined in (3.7) is the asymptotic variance-covariance matrix of $\sqrt{r/K}(\tilde{\theta} - \hat{\theta}_{full})$, and the integral inequality $\int_0^1 B^T(t)VB(t)\,dt \leq \int_0^1 B^T(t)V'B(t)\,dt$ holds if and only if $V \leq V'$ holds in the Löwner-ordering sense. Thus, we focus on minimizing the asymptotic MSE of $\tilde{\theta}$ and choose the subsampling probabilities such that $\mathrm{tr}(V)$ is minimized. This is called the A-optimality criterion in optimal experimental designs; see, e.g., Atkinson, Donev and Tobias (2007). Using this criterion, we are able to derive the optimal subsampling probabilities provided in the following theorem.

Theorem 3 (A-optimality).
If the subsampling probabilities $\pi_i$, $i = 1, 2, \ldots, n$, are chosen as
$$\pi_i^{FAopt} = \frac{\left|-y_i\exp(-B_i^T\hat{\theta}_{full}) + y_i^{-1}\exp(B_i^T\hat{\theta}_{full})\right|\,\|\hat{H}^{-1}B_i\|_2}{\sum_{i=1}^{n}\left|-y_i\exp(-B_i^T\hat{\theta}_{full}) + y_i^{-1}\exp(B_i^T\hat{\theta}_{full})\right|\,\|\hat{H}^{-1}B_i\|_2}, \quad (3.9)$$
then the total asymptotic MSE of $\sqrt{r/K}(\tilde{\theta} - \hat{\theta}_{full})$, $\mathrm{tr}(V)$, attains its minimum, and so does the asymptotic IMSE of $\tilde{\beta}(t)$.

However, from (2.4), $\hat{H}$ requires the choice of the smoothing parameter $\lambda$, and the calculation of $\|\hat{H}^{-1}B_i\|_2$ costs $O(n(K+p+1)^2)$, which is expensive. These weaknesses make these optimal subsampling probabilities unsuitable for practical use. So, it is necessary to find alternative probabilities that avoid $\hat{H}$, to reduce the computational complexity.
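In (3.9), each probability is the product of a per-point score magnitude $|-\omega_i + \omega_i^{-1}|$ and a leverage-like factor $\|\hat{H}^{-1}B_i\|_2$. A direct sketch, assuming the full-data estimate and $\hat{H}$ are available (hypothetical names):

```python
import numpy as np

def a_opt_probs(theta_full, B, y, H_hat):
    """Functional A-optimal subsampling probabilities of (3.9)."""
    w = y * np.exp(-B @ theta_full)
    score = np.abs(-w + 1.0 / w)                                # |-w_i + w_i^{-1}|
    lever = np.linalg.norm(np.linalg.solve(H_hat, B.T), axis=0) # ||H^{-1} B_i||_2
    p = score * lever
    return p / p.sum()
```

The single `solve` against the stacked $B_i$ is exactly the $O(n(K+p+1)^2)$ cost criticized above; the L-optimal alternative of the next theorem replaces $\|\hat{H}^{-1}B_i\|_2$ by $\|B_i\|_2$ and drops that solve entirely.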
Note that, as observed in (3.7), only $V_\pi$ involves $\pi_i$ in the asymptotic variance-covariance matrix $V$. Thus, by the Löwner ordering, we can focus on $V_\pi$ alone and minimize its trace, which can be interpreted as minimizing the asymptotic MSE of $\sqrt{r/K}\,\hat{H}(\tilde{\theta} - \hat{\theta}_{full})$ due to its asymptotic unbiasedness. This is called the L-optimality criterion in optimal experimental designs (Atkinson, Donev and Tobias (2007)). Therefore, to reduce the computing time, we consider the modified optimality criterion: minimizing $\mathrm{tr}(V_\pi)$.

Theorem 4 (L-optimality). If the subsampling probabilities $\pi_i$, $i = 1, 2, \ldots, n$, are chosen as
$$\pi_i^{FLopt} = \frac{\left|-y_i\exp(-B_i^T\hat{\theta}_{full}) + y_i^{-1}\exp(B_i^T\hat{\theta}_{full})\right|\,\|B_i\|_2}{\sum_{i=1}^{n}\left|-y_i\exp(-B_i^T\hat{\theta}_{full}) + y_i^{-1}\exp(B_i^T\hat{\theta}_{full})\right|\,\|B_i\|_2}, \quad (3.10)$$
then $\mathrm{tr}(V_\pi)$ attains its minimum.

From (3.10), it is seen that the functional L-optimal subsampling probabilities $\pi_i^{FLopt}$ require $O(n(K+p+1))$ flops to compute, which is much cheaper than computing $\pi_i^{FAopt}$ as $K$ increases. However, the subsampling probabilities (3.10) depend on $\hat{\theta}_{full}$, the full-data estimate that is itself to be estimated, so this exact probability distribution is not directly applicable. We therefore consider an approximate one and propose a two-step algorithm.

1. Step 1: Draw a small subsample of size $r_0$ to obtain a pilot estimator $\tilde{\theta}_{pilot}$ by running the general subsampling algorithm with the uniform sampling probabilities $\pi_i^0 = 1/n$ and $\lambda = 0$. Replace $\hat{\theta}_{full}$ with $\tilde{\theta}_{pilot}$ in (3.10) to derive the approximation of the optimal subsampling probabilities.

2. Step 2: Draw a subsample of size $r$ by using the approximate optimal probabilities from Step 1. Given $\lambda$, obtain the estimate $\breve{\theta}(\lambda)$ with the subsample by using (3.6), where $\lambda$ is determined by minimizing $\mathrm{BIC}(\lambda)$, discussed below, on the corresponding subsample. Once the optimal $\lambda$ is determined, we can get the estimator $\breve{\beta}(t) = B^T(t)\breve{\theta}$.
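The two steps above can be sketched end to end under the same assumed setup (a precomputed basis-integral matrix `B`; names illustrative, and $\lambda$ fixed here rather than BIC-selected, for brevity):

```python
import numpy as np

def weighted_newton(B, y, Dq, lam, wts, max_iter=200, tol=1e-10):
    """Newton-Raphson on the inverse-probability-weighted loss, as in (3.6)."""
    theta = np.zeros(B.shape[1])
    for _ in range(max_iter):
        w = y * np.exp(-B @ theta)
        grad = B.T @ (wts * (-w + 1.0 / w)) + lam * (Dq @ theta)
        hess = B.T @ ((wts * (w + 1.0 / w))[:, None] * B) + lam * Dq
        step = np.linalg.solve(hess, grad)
        theta = theta - step
        if np.max(np.abs(step)) < tol:
            break
    return theta

def two_step(B, y, Dq, lam, r0, r, rng):
    """Two-step algorithm: uniform pilot, then an L-optimal subsample via (3.10)."""
    n = len(y)
    # Step 1: pilot estimate from a uniform subsample with lambda = 0
    counts0 = np.bincount(rng.choice(n, size=r0), minlength=n).astype(float)
    theta_pilot = weighted_newton(B, y, Dq, 0.0, counts0)  # uniform pi_i is a constant factor
    # L-optimal probabilities (3.10), with the pilot replacing theta_full
    w = y * np.exp(-B @ theta_pilot)
    p = np.abs(-w + 1.0 / w) * np.linalg.norm(B, axis=1)
    p = p / p.sum()
    # Step 2: weighted estimation on the optimal subsample
    counts = np.bincount(rng.choice(n, size=r, p=p), minlength=n)
    wts = np.zeros(n)
    drawn = counts > 0
    wts[drawn] = counts[drawn] / p[drawn]
    return weighted_newton(B, y, Dq, lam, wts)
```

Setting $\lambda = 0$ in the pilot step matches the algorithm as stated; the pilot only needs to be accurate enough to rank the points by their approximate optimal probabilities.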
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='3 Tuning parameters selection For the degree p and the order of derivation q, we empirically choose B- splines of degree 3 and a second-order penalty.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content=' The number of knots K is not a crucial parameter because smoothing is controlled by the roughness penalty parameter λ (see e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content=', Ruppert (2002);' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content=' Cardot, Ferraty and Sarda (2003)).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content=' For the parameter λ, we choose the BIC criterion to determine it: BIC(λ) = log(RSS) + log(n) n df, where RSS = 1/n �n i=1{ωi(ˆθfull) + ωi(ˆθfull)−1 − 2}, and df denotes the effective degrees of freedom, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content=', the number of non-zero parameter esti- mates.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content=' However, using full data to select the optimal λ is computationally expensive, we approximate it by BIC under the optimal subsample data.' 
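The two-step algorithm above, together with the probabilities in (3.10), can be sketched as follows. This is a minimal illustration under our own assumptions, not the authors' implementation: the form of the LPRE objective, the helper names `fit_lpre`, `flopt_probs`, and `two_step`, the inverse-probability weights 1/(r·π_i) in Step 2, and the use of scipy's BFGS optimizer are all our choices.

```python
import numpy as np
from scipy.optimize import minimize

def fit_lpre(B, y, lam=0.0, D=None, w=None):
    """Minimize the (optionally weighted and penalized) LPRE criterion
    sum_i w_i { y_i exp(-B_i^T th) + y_i^{-1} exp(B_i^T th) } + lam * th^T D th.
    (The additive constant -2 in the loss is dropped; it does not change the minimizer.)"""
    n, d = B.shape
    w = np.ones(n) if w is None else w
    D = np.zeros((d, d)) if D is None else D

    def obj(th):
        eta = B @ th
        return np.sum(w * (y * np.exp(-eta) + np.exp(eta) / y)) + lam * th @ D @ th

    return minimize(obj, np.zeros(d), method="BFGS").x

def flopt_probs(B, y, theta):
    """Functional L-optimal probabilities of (3.10), with theta plugged in
    for the (unknown) full-data estimator."""
    eta = B @ theta
    score = np.abs(-y * np.exp(-eta) + np.exp(eta) / y) * np.linalg.norm(B, axis=1)
    return score / score.sum()

def two_step(B, y, r0, r, lam=0.0, rng=None):
    """Two-step subsampling: uniform pilot of size r0, then an FLopt
    subsample of size r refit with weights 1/(r * pi_i)."""
    rng = np.random.default_rng(rng)
    n = len(y)
    # Step 1: uniform pilot subsample with lambda = 0.
    pilot = rng.choice(n, size=r0, replace=True)
    theta_pilot = fit_lpre(B[pilot], y[pilot], lam=0.0)
    probs = flopt_probs(B, y, theta_pilot)
    # Step 2: FLopt subsample; lambda would be chosen by minimizing
    # BIC(lambda) on this subsample (Section 3.3) -- here it is an input.
    idx = rng.choice(n, size=r, replace=True, p=probs)
    return fit_lpre(B[idx], y[idx], lam=lam, w=1.0 / (r * probs[idx]))
```

Selecting λ by minimizing BIC(λ) over a grid on the Step-2 subsample would wrap the final `fit_lpre` call in one extra loop.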
4. Simulation studies

In this section, we study the finite-sample performance of the proposed methods using synthetic data.

4.1 LPRE performance

In this experiment, we compare the performance of the functional least squares (FLS), functional least absolute deviation (FLAD), and functional least product relative error (FLPRE) estimators. The FLS and FLAD estimates are defined as the minimizers of Σ_{i=1}^{n} [log(y_i) − ∫_0^1 x_i(t)β(t)dt]^2 and Σ_{i=1}^{n} |log(y_i) − ∫_0^1 x_i(t)β(t)dt|, respectively. The functional covariates in model (1.1) are independently and identically generated as x_i(t) = Σ_j a_ij B_j(t), i = 1, 2, . . . , n, where the B_j(t) are cubic B-spline basis functions sampled at 100 equally spaced points between 0 and 1. We consider the following two distributions for the basis coefficients A = (a_ij):

C1. Multivariate normal distribution N(0, Σ), where Σ_ij = 0.5^{|i−j|};
C2. Multivariate t distribution with 5 degrees of freedom, t_5(0, Σ/10).

The slope function is β(t) = 7t^3 + 2 sin(4πt + 0.2), and the random errors ε_i are generated in four cases:

R1. log(ε) ∼ N(0, 1);
R2. log(ε) ∼ U(−2, 2);
R3. ε has the distribution with density function f(x) = c exp(−x − x^{−1} − log(x) + 2) I(x > 0), where c is a normalization constant;
R4. ε ∼ U(0.5, b), with b chosen such that E(ε) = E(1/ε).

In the simulation, we take n = 100, 500, 1000 for training and n = 300, 1500, 3000 for testing, and let the number of knots be K = 10. Based on 500 replications, we use the root IMSE to evaluate the quality of the estimates and assess the prediction performance on the test data by the root predicted square error (RPSE), respectively.
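The data-generating design of this subsection (cubic B-splines on 100 equally spaced points, C1 coefficients with Σ_ij = 0.5^{|i−j|}, the slope β(t) = 7t³ + 2 sin(4πt + 0.2), and R1 errors) can be sketched as below. The number of basis functions J = 10 and the multiplicative form of the response are our assumptions, not stated explicitly here in the paper.

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(0)
n, J = 500, 10                      # sample size; J basis functions (our choice)
t = np.linspace(0.0, 1.0, 100)      # 100 equally spaced points on [0, 1]

# Cubic B-spline basis B_1, ..., B_J with clamped knots on [0, 1].
k = 3
knots = np.concatenate([np.zeros(k), np.linspace(0.0, 1.0, J - k + 1), np.ones(k)])
basis = BSpline(knots, np.eye(J), k)(t).T      # row j is B_j evaluated on t

# C1: coefficient vectors a_i ~ N(0, Sigma) with Sigma_jl = 0.5^{|j-l|}.
Sigma = 0.5 ** np.abs(np.subtract.outer(np.arange(J), np.arange(J)))
A = rng.multivariate_normal(np.zeros(J), Sigma, size=n)
X = A @ basis                        # x_i(t) = sum_j a_ij B_j(t), shape (n, 100)

trapz = getattr(np, "trapezoid", None) or np.trapz   # NumPy 1.x / 2.x compat
beta = 7 * t**3 + 2 * np.sin(4 * np.pi * t + 0.2)    # slope function
lin = trapz(X * beta, t, axis=1)                     # int_0^1 x_i(t) beta(t) dt

eps = np.exp(rng.normal(size=n))     # R1: log(eps) ~ N(0, 1)
y = np.exp(lin) * eps                # multiplicative-error response (our reading of (1.1))
```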
They are defined as follows:
\[
IMSE = \frac{1}{500} \sum_{k=1}^{500} \left\{ \int_0^1 \bigl( \hat\beta^{(k)}(t) - \beta(t) \bigr)^2 \, dt \right\}^{1/2},
\]
and
\[
RPSE = \frac{1}{500} \sum_{k=1}^{500} \left\{ \frac{1}{n} \sum_{i=1}^{n} \left( \int_0^1 x_i(t)\beta(t)\,dt - \int_0^1 x_i(t)\hat\beta^{(k)}(t)\,dt \right)^2 \right\}^{1/2},
\]
where β̂^(k)(t) is the estimator from the k-th run. The simulation results are presented in Tables 1 and 2, which show that FLPRE performs considerably better than FLS and FLAD in all cases except C2-R1. In case C2-R1, FLPRE always outperforms FLAD, while the gap between FLPRE and FLS gradually decreases as the sample size increases, and FLPRE slightly outperforms FLS when the sample size reaches 1000. In addition, the IMSE and RPSE of all estimators decrease as the sample size increases, which implies that the performance of all estimators improves when the sample size grows.

4.2 Subsampling performance

In this experiment, we take n = 10^5 for training and m = 1000 for testing to compare the performance of the functional L-optimal subsampling (FLopt) method with the uniform subsampling (Unif) method.
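The root IMSE and RPSE above translate directly into code. A minimal sketch, assuming each replication's estimated curve is available on a common grid and using the trapezoidal rule for the integrals (the function names are ours):

```python
import numpy as np

trapz = getattr(np, "trapezoid", None) or np.trapz   # NumPy 1.x / 2.x compat

def root_imse(beta_hat, beta, t):
    """Average over replications of { int_0^1 (beta_hat_k(t) - beta(t))^2 dt }^{1/2}.
    beta_hat: (K, len(t)) array of estimated curves on the grid t; beta: true curve."""
    sq = trapz((beta_hat - beta) ** 2, t, axis=1)
    return float(np.mean(np.sqrt(sq)))

def rpse(beta_hat, beta, X, t):
    """Root predicted square error on test curves X of shape (n, len(t))."""
    pred_true = trapz(X * beta, t, axis=1)           # int x_i(t) beta(t) dt
    vals = []
    for bh in beta_hat:                              # one estimate per replication
        pred_k = trapz(X * bh, t, axis=1)
        vals.append(np.sqrt(np.mean((pred_true - pred_k) ** 2)))
    return float(np.mean(vals))
```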
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content=' The simulated data distributions are the same as those in Subsection 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='1, to which we add a case about the basis coefficient.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content=' C3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content=' A mixture of two multivariate normal distributions 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='5N(1, Σ)+ 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='5N(−1, Σ).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content=' In addition, from Assumption (H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='6), we let the number of knots K = ⌈n1/4⌉.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content=' For fair comparison, we use the same basis functions and the same smoothing parameters in all cases as those for full data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content=' The root IMSE and RPSE of the subsampling estimators corresponding to various subsampling sizes of 2000,5000,8000,10000,15000 with r0 = 1000 are computed, where 4.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='2 Subsampling performance Table 1: IMSE of each estimator.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content=' Dist Method R1 R2 R3 R4 n = 100/300 C1 FLPRE 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='4958 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='4588 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='2258 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='0766 FLS 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='5023 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='6294 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='3061 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='1707 FLAD 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='6797 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='0704 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='4343 1.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='2370 C2 FLPRE 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='7053 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='5396 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='9177 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='4743 FLS 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='4989 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='7184 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='9261 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='5298 FLAD 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='8300 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='8908 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='2250 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='7391 n = 500/1500 C1 FLPRE 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='7550 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='7023 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='6086 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='5447 FLS 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='8890 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='9575 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='7952 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='7321 FLAD 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='1018 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='3702 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='9422 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='7867 C2 FLPRE 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='5923 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='4627 1.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='1942 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='0242 FLS 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='5522 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='6664 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='2809 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='1278 FLAD 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='7679 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='2696 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='4302 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='2366 n = 1000/3000 C1 FLPRE 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='5508 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='4930 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='3879 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='3255 FLS 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='6213 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='6600 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='5286 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='4834 FLAD 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='8532 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='0904 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='6803 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='5346 C2 FLPRE 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='2123 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='0902 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='9203 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='7943 FLS 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='2552 1.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='3307 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='0612 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='9450 FLAD 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='4475 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='8175 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='2096 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='0333 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='2 Subsampling performance Table 2: RPSE of each prediction.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content=' Dist Method R1 R2 R3 R4 n = 100/300 C1 FLPRE 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='2488 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='2416 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='2022 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='1770 FLS 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='2516 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='2722 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='2170 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='1940 FLAD 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='2828 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='3486 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='2394 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='2054 C2 FLPRE 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='1869 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='1749 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='1323 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='1015 FLS 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='1731 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='1882 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='1331 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='1053 FLAD 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='1967 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='2696 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='1536 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='1201 n = 500/1500 C1 FLPRE 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='1240 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='1151 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='0983 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='0872 FLS 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='1455 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='1569 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='1288 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='1181 FLAD 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='1814 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='2276 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='1537 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='1274 C2 FLPRE 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='1078 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='0989 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='0807 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='0686 FLS 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='1055 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='1135 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='0872 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='0761 FLAD 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='1208 0.' 
… 0.1559  0.0978  0.0837

n = 1000/3000
C1  FLPRE  0.0902  0.0805  0.0625  0.0514
    FLS    0.1009  0.1083  0.0848  0.0767
    FLAD   0.1390  0.1792  0.1098  0.0855
C2  FLPRE  0.0819  0.0734  0.0613  0.0525
    FLS    0.0847  0.0899  0.0710  0.0630
    FLAD   0.0985  0.1243  0.0815  0.0691

4.2 Subsampling performance

The definitions of root IMSE and RPSE are as follows:

\mathrm{IMSE} = \frac{1}{1000} \sum_{k=1}^{1000} \left\{ \int_0^1 \left( \tilde{\beta}^{(k)}(t) - \hat{\beta}(t) \right)^2 dt \right\}^{1/2},
\qquad
\mathrm{RPSE} = \frac{1}{1000} \sum_{k=1}^{1000} \left\{ \frac{1}{m} \sum_{i=1}^{m} \left( \int_0^1 x_i(t)\tilde{\beta}^{(k)}(t)\,dt - \int_0^1 x_i(t)\hat{\beta}(t)\,dt \right)^2 \right\}^{1/2}.   (4.11)

Based on 1000 replications, all results are shown in Figures 1 and 2 after a logarithmic transformation. From Figure 1, it is clear that the FLopt subsampling method always has smaller IMSE than the Unif subsampling method in all cases, which agrees with the theoretical results: the former minimizes the asymptotic IMSE of the subsampling estimator. In particular, the FLopt method performs much better when the errors follow case R1. From Figure 2, as with the IMSE, the RPSE of the FLopt subsampling estimator is always better than that of the Unif subsampling estimator in all cases.
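The root-IMSE and RPSE criteria of (4.11) can be computed numerically once the replicated estimates β̃^(k) and the full-data estimate β̂ are evaluated on a common grid over [0, 1]. The sketch below is our own illustration (the names `root_imse`, `rpse`, and the trapezoidal grid are assumptions, not from the paper):

```python
import numpy as np

# np.trapz was renamed np.trapezoid in NumPy 2.0
try:
    trapz = np.trapezoid
except AttributeError:
    trapz = np.trapz

def root_imse(beta_reps, beta_hat, t_grid):
    """Root IMSE of (4.11): mean over replications of the L2 distance
    between each subsampling estimate and the full-data estimate,
    integrated over t in [0, 1]."""
    sq_diff = (beta_reps - beta_hat) ** 2          # (K, T)
    integrals = trapz(sq_diff, t_grid, axis=1)     # integrate over t
    return np.mean(np.sqrt(integrals))

def rpse(beta_reps, beta_hat, x_curves, t_grid):
    """RPSE of (4.11): compares the linear predictors
    \int x_i(t) beta(t) dt under each replicated estimate with those
    under the full-data estimate, averaged over the m covariate curves."""
    pred_reps = trapz(x_curves[None, :, :] * beta_reps[:, None, :],
                      t_grid, axis=2)                   # (K, m)
    pred_hat = trapz(x_curves * beta_hat, t_grid, axis=1)   # (m,)
    mse = np.mean((pred_reps - pred_hat) ** 2, axis=1)      # (K,)
    return np.mean(np.sqrt(mse))
```

Both criteria reduce to zero when every replicated curve equals the full-data estimate, and grow with the integrated discrepancy between them.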
Furthermore, Figures 1 and 2 also show that the performance of the FLopt method depends on both the type of random errors and the covariates, and the effect of the errors is greater than that of the covariates. Besides, as expected, the estimation and prediction efficiency of the subsampling estimators improves as the subsample size increases.

[Figure 1 here: twelve panels (C1–C3 × R1–R4) plotting log IMSE against r = 4000, 8000, 12000 for the FLopt and Unif methods.]
Figure 1: IMSE for different subsampling sizes r and fixed first-step subsampling size r0 = 1000 with different distributions when n = 10^5.
[Figure 2 here: twelve panels (C1–C3 × R1–R4) plotting log RPSE against r = 4000, 8000, 12000 for the FLopt and Unif methods.]
Figure 2: RPSE for different subsampling sizes r and fixed first-step subsampling size r0 = 1000 with different distributions when n = 10^5.

Table 3: CPU seconds for different subsampling sizes r with n = 10^6, r0 = 200, K = 50, and a fixed λ. The times are the means over 100 implementations of each method.
Method   r = 1000   2000     3000     4000     5000
FLopt    0.6036     0.6101   0.6230   0.6298   0.6393
Unif     0.0142     0.0250   0.0355   0.0455   0.0554
Full data CPU seconds: 11.8155

To evaluate the computational efficiency of the subsampling methods, we record the computing time of each method in case C1-R1 on a PC with an Intel i5 processor and 8 GB of memory using R, where the time required to generate the data is not included.
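The cost gap in Table 3 reflects the structure of the two schemes: Unif draws indices directly, while FLopt must first make a full O(n) pass over the data to compute its sampling probabilities before drawing. A minimal sketch of this pattern, using generic norm-based importance weights as a stand-in for the paper's IMSE-optimal FLopt probabilities (all variable names and the weight choice here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, r = 100_000, 20, 5_000
X = rng.standard_normal((n, p))          # basis-expanded covariates
beta_true = rng.standard_normal(p)
y = X @ beta_true + rng.standard_normal(n)

# Uniform subsampling: draw indices directly, no preprocessing.
idx_unif = rng.choice(n, size=r, replace=True)

# Importance subsampling: an O(n) pass to build probabilities first
# (a generic stand-in for the IMSE-optimal FLopt weights).
pi = np.linalg.norm(X, axis=1)
pi /= pi.sum()
idx_opt = rng.choice(n, size=r, replace=True, p=pi)

# Inverse-probability-weighted least squares keeps the subsample
# estimator approximately unbiased for the full-data fit.
w = 1.0 / (r * pi[idx_opt])
Xw = X[idx_opt] * np.sqrt(w)[:, None]
yw = y[idx_opt] * np.sqrt(w)
beta_sub = np.linalg.lstsq(Xw, yw, rcond=None)[0]
```

This reproduces the qualitative pattern of Table 3: the weighted scheme pays an extra full-data pass for its probabilities, and both schemes solve a system with only r rows instead of n.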
We set n = 10^6, r0 = 200, and enlarge the number of knots for the spline function to K = 50. Each subsampling strategy is evaluated 100 times. The results for different r with a fixed λ for the FLopt and Unif subsampling methods are given in Table 3. It is clear that subsampling can significantly improve computational efficiency compared with using the full data, and that the FLopt method is more expensive than the Unif method, as expected.

5. Real data analysis

5.1 Tecator data

The Tecator data set is available in the fda.usc package and contains 215 meat samples. For each sample, the data consist of a 100-channel spectrum of absorbance and the contents of fat, water, and protein, measured in percent. The 100-channel spectrum, measured over the wavelength range 850–1050 nm, provides a dense measurement spaced 2 nm apart that can be treated as functional data. Figure 3 shows 50 randomly selected absorbance curves and the histogram of the protein content. In this experiment, we study the protein content with the proposed FLPRE.

[Figure 3 here: left panel, 50 absorbance curves over wavelength 850–1050 nm; right panel, histogram of protein content, roughly 10–22.5 percent.]
Figure 3: Left subfigure: A random subset of 50 spectrometric curves. Right subfigure: Histogram of the content of protein.

Table 4: MAPE and MPPE for Tecator data.
Criterion   FLS      FLAD     FLPRE
MAPE        3.5836   3.7452   3.5420
MPPE        0.0745   0.0739   0.0727

We employ the first 160 observations to fit the model and then use the remaining samples to evaluate the prediction efficiency by the mean of absolute prediction errors (MAPE) and product relative prediction errors (MPPE).
The two mean criteria are measured by

\mathrm{MAPE} = \frac{1}{55} \sum_{i=161}^{215} |y_i - \hat{y}_i|,
\qquad
\mathrm{MPPE} = \frac{1}{55} \sum_{i=161}^{215} \frac{(y_i - \hat{y}_i)^2}{y_i \hat{y}_i},

where \hat{y}_i = \exp\left( \int_0^1 x_i(t)\hat{\beta}(t)\,dt \right). All results are presented in Table 4, which shows that the proposed FLPRE outperforms FLS and FLAD for predicting the protein content.

5.2 Beijing multi-site air-quality data

This data set is available at https://archive-beta.ics.uci.edu/ml/datasets/beijing+multi+site+air+quality+data and consists of hourly air-pollutant data from 12 nationally controlled air-quality monitoring sites in Beijing, from March 1, 2013 to February 28, 2017. Our primary interest here is to predict the maximum of the daily PM10 concentrations (µg/m³) using the PM10 trajectory (24 hours) of the last day. We delete all missing values and obtain a sample of 15573 days' complete records. We take the top 80% of the sample as the training set and the rest as the test set.

Table 5: MAPE and MPPE for the air-quality data.
Criterion   FLS      FLAD     FLPRE
MAPE        7.9902   8.7580   7.6727
MPPE        0.5031   0.5223   0.4970
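Given test-set predictions ŷ_i, the two criteria above are straightforward to compute; a minimal sketch (the function names are ours, not from the paper):

```python
import numpy as np

def mape(y, y_hat):
    # mean of absolute prediction errors over the test set
    return np.mean(np.abs(y - y_hat))

def mppe(y, y_hat):
    # product relative prediction error: squared error scaled by
    # the product of the observed and predicted responses
    return np.mean((y - y_hat) ** 2 / (y * y_hat))

# e.g. for Tecator: y, y_hat are the 55 held-out responses and
# their predictions y_hat_i = exp(integral of x_i(t) beta_hat(t) dt)
print(mape(np.array([2.0, 4.0]), np.array([2.0, 5.0])))  # 0.5
```

Unlike MAPE, MPPE is scale-free: dividing the squared error by y_i ŷ_i makes it a relative criterion, which suits a positive response modeled on the exponential scale.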
The raw observations, after the square-root transformation, are first transformed into functional data using 15 Fourier basis functions. This transformation can be implemented with the Data2fd function in the fda package, as suggested in Sang and Cao (2020). A random subset of 100 curves of 24-hourly PM10 concentrations is presented in the left panel of Figure 4, where the time scale has been transformed to [0, 1]. The right panel of Figure 4 depicts the histogram of the maximal values of intraday PM10 concentrations. We assess the performance of prediction by the MAPE and MPPE criteria. Table 5 illustrates that the proposed FLPRE outperforms FLS and FLAD for predicting the PM10 concentrations. Further, we calculate the IMSE and RPSE using (4.11) and compare the FLopt method with the Unif method.

[Figure 4: Left subfigure: a random subset of 100 curves of 24-hourly PM10 concentrations (square root of PM10 against time). Right subfigure: histogram of the maximal values of intraday PM10 concentrations.]

Figure 5 shows the results for different subsampling sizes r = 1000, 1500, 2000, 2500, 3000 with r0 = 1000.
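The Fourier-basis smoothing step above can be sketched in Python as a rough analogue of the fda package's Data2fd. The simulated hourly curve and the basis construction here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def fourier_basis(t, n_basis=15, period=1.0):
    # Columns: 1, sin(2*pi*t/T), cos(2*pi*t/T), sin(4*pi*t/T), cos(4*pi*t/T), ...
    cols = [np.ones_like(t)]
    k = 1
    while len(cols) < n_basis:
        w = 2.0 * np.pi * k / period
        cols.append(np.sin(w * t))
        if len(cols) < n_basis:
            cols.append(np.cos(w * t))
        k += 1
    return np.column_stack(cols)

# 24 hourly observations of sqrt(PM10), time rescaled to [0, 1];
# the curve below is a hypothetical smooth daily profile.
t = np.linspace(0.0, 1.0, 24)
y = np.sqrt(80.0 + 30.0 * np.sin(2.0 * np.pi * t))

B = fourier_basis(t)                           # 24 x 15 design matrix
coef, *_ = np.linalg.lstsq(B, y, rcond=None)   # least-squares basis coefficients
smooth = B @ coef                              # fitted functional representation
```

The 15 coefficients, rather than the 24 raw hourly values, then serve as the functional representation of each day's trajectory in the downstream regression.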
We find that the FLopt method always has smaller IMSE and RPSE than the Unif method. Besides, all IMSE and RPSE values gradually decrease as the subsampling size r increases, showing the estimation consistency of the subsampling methods and better approximation to the results based on the full data.

Supplementary Materials

All technical proofs are included in the online Supplementary Material.
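The Unif baseline amounts to fitting the estimator on a uniformly drawn subsample and comparing it with the full-data fit. A minimal sketch for an ordinary least-squares surrogate follows; the data-generating model, sizes, and seed are illustrative assumptions, not the paper's functional estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, r = 100_000, 5, 2_000            # full-data size, dimension, subsample size

# Hypothetical linear model standing in for the functional regression.
X = rng.normal(size=(n, p))
beta = np.arange(1.0, p + 1.0)
y = X @ beta + rng.normal(size=n)

# Full-data estimate: the target the subsample-based estimate approximates.
beta_full, *_ = np.linalg.lstsq(X, y, rcond=None)

# Uniform subsampling: draw r rows without replacement and refit.
idx = rng.choice(n, size=r, replace=False)
beta_unif, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)

# Squared distance to the full-data estimate; it shrinks as r grows,
# mirroring the decreasing IMSE/RPSE curves in Figure 5.
err = float(np.sum((beta_unif - beta_full) ** 2))
```

An optimal (non-uniform) scheme such as FLopt replaces the uniform draw with probabilities chosen to minimize the asymptotic error of the subsample estimator, which is why its curves sit below the Unif curves.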
[Figure 5: IMSE and RPSE for different subsampling sizes r with r0 = 1000, over 1000 repetitions; the FLopt curves lie below the Unif curves throughout.]

Acknowledgements

This work was supported by the National Natural Science Foundation of China (No. 11671060) and the Natural Science Foundation Project of CQ CSTC (No. cstc2019jcyj-msxmX0267).

References

Ai M., Wang F., Yu J. and Zhang H. (2021a). Optimal subsampling for large-scale quantile regression. Journal of Complexity 62, 101512.

Ai M., Yu J., Zhang H. and Wang H. (2021b). Optimal subsampling algorithms for big data regression. Statistica Sinica 31, 749–772.

Atkinson A., Donev A. N. and Tobias R. D. (2007). Optimum Experimental Designs, with SAS. Oxford University Press, New York.

Cardot H., Ferraty F. and Sarda P. (2003). Spline estimators for the functional linear model. Statistica Sinica 13, 571–591.

Chen K., Guo S., Lin Y. and Ying Z. (2010). Least absolute relative error estimation. Journal of the American Statistical Association 105, 1104–1112.

Chen K., Lin Y., Wang Z. and Ying Z. (2016). Least product relative error estimation. Journal of Multivariate Analysis 144, 91–98.

Chen Y. and Liu H. (2021). A new relative error estimation for partially linear multiplicative model. Communications in Statistics – Simulation and Computation, 1–19.

Chen Y., Liu H. and Ma J. (2022). Local least product relative error estimation for single-index varying-coefficient multiplicative model with positive responses. Journal of Computational and Applied Mathematics 415, 114478.

Claeskens G., Krivobokova T. and Opsomer J. D. (2009). Asymptotic properties of penalized spline estimators. Biometrika 96, 529–544.

de Boor C. (2001). A Practical Guide to Splines. Springer-Verlag, Berlin.

Fan R., Zhang S. and Wu Y. (2022). Penalized relative error estimation of functional multiplicative regression models with locally sparse properties. Journal of the Korean Statistical Society 51, 666–691.

Fan Y., Liu Y. and Zhu L. (2021). Optimal subsampling for linear quantile regression models. Canadian Journal of Statistics 49, 1039–1057.

He S. and Yan X. (2022). Functional principal subspace sampling for large scale functional data analysis. Electronic Journal of Statistics 16, 2621–2682.

Liu H., You J. and Cao J. (2021). Functional L-optimality subsampling for massive data. arXiv preprint arXiv:2104.03446.

Ma P., Mahoney M. W. and Yu B. (2015). A statistical perspective on algorithmic leveraging. Journal of Machine Learning Research 16, 861–911.

Ma P., Zhang X., Xing X., Ma J. and Mahoney M. W. (2020). Asymptotic analysis of sampling estimators for randomized numerical linear algebra algorithms. In Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, 1026–1035.

Ming H., Liu H. and Yang H. (2022). Least product relative error estimation for identification in multiplicative additive models. Journal of Computational and Applied Mathematics 404, 113886.

Ren M., Zhao S. and Wang M. (2022). Optimal subsampling for least absolute relative error estimators with massive data. Journal of Complexity 74, 101694.

Ruppert D. (2002). Selecting the number of knots for penalized splines. Journal of Computational and Graphical Statistics 11, 735–757.

Sang P. and Cao J. (2020). Functional single-index quantile regression models. Statistics and Computing 30, 771–781.

Shao L., Song S. and Zhou Y. (2022). Optimal subsampling for large-sample quantile regression with massive data. Canadian Journal of Statistics. https://doi.org/10.1002/cjs.11697

Shao Y. and Wang L. (2021). Optimal subsampling for composite quantile regression model in massive data. Statistical Papers 63, 1139–1161.

Wang H. (2019). More efficient estimation for logistic regression with optimal subsamples. Journal of Machine Learning Research 20, 1–59.

Wang H. and Ma Y. (2021). Optimal subsampling for quantile regression in big data. Biometrika 108, 99–112.

Wang H., Zhu R. and Ma P. (2018). Optimal subsampling for large sample logistic regression. Journal of the American Statistical Association 113, 829–844.

Wang T. and Zhang H. (2022). Optimal subsampling for multiplicative regression with massive data. Statistica Neerlandica 76, 418–449.

Yao Y. and Wang H. (2019). Optimal subsampling for softmax regression. Statistical Papers 60, 585–599.

Yan Q., Li H. and Niu C. (2022). Optimal subsampling for functional quantile regression. Statistical Papers. https://doi.org/10.1007/s00362-022-01367-z

Yu J., Wang H., Ai M. and Zhang H. (2020). Optimal distributed subsampling for maximum quasi-likelihood estimators with massive data. Journal of the American Statistical Association 117, 265–276.

Yuan X., Li Y., Dong X. and Liu T. (2022). Optimal subsampling for composite quantile regression in big data. Statistical Papers 63, 1649–1676.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content=' Zhang T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content=', Huang Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content=', Zhang Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content=', Ma S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content=' and Ahmed S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content=' (2019).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content=' Penalized relative error esti- mation of a partially functional linear multiplicative model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content=' In Ahmed S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content=', Carvalho F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content=' and Puntanen S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content=' (Eds.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content=' ), Matrices, Statistics and Big Data, Springer, Cham.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content=' Zhang T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content=', Zhang Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content=' and Li N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content=' (2016).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content=' Least absolute relative error estimation for functional quadratic multiplicative model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content=' Communications in Statistics-Theory and Methods 45, 5802–5817.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content=' Qian Yan College of Mathematics and Statistics, Chongqing University, Chongqing 401331, China.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content=' E-mail: qianyan@cqu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='edu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNAzT4oBgHgl3EQfJPtx/content/2301.01076v1.pdf'} +page_content='cn Hanyu Li College of Mathematics and Statistics, Chongqing University, Chongqing 401331, China.' 
diff --git a/SNE2T4oBgHgl3EQfCAY8/content/tmp_files/2301.03608v1.pdf.txt b/SNE2T4oBgHgl3EQfCAY8/content/tmp_files/2301.03608v1.pdf.txt
new file mode 100644
Draft version January 11, 2023
Typeset using LaTeX twocolumn style in AASTeX63

The NANOGrav 12.5-year Data Set: Bayesian Limits on Gravitational Waves from Individual Supermassive Black Hole Binaries

Zaven Arzoumanian,1 Paul T. Baker,2 Laura Blecha,3 Harsha Blumer,4,5 Adam Brazier,6 Paul R. Brook,4,5 Sarah Burke-Spolaor,4,5 Bence Bécsy,7 J. Andrew Casey-Clyde,8 Maria Charisi,9 Shami Chatterjee,6 Siyuan Chen,10 James M. Cordes,6 Neil J. Cornish,11 Fronefield Crawford,12 H. Thankful Cromartie,13 Megan E. DeCesar,14 Paul B. Demorest,15 Timothy Dolch,16,17 Brendan Drachler,18,19 Justin A. Ellis,20 E. C. Ferrara,21,22,23 William Fiore,4,5 Emmanuel Fonseca,4,5 Gabriel E. Freedman,24 Nathan Garver-Daniels,4,5 Peter A. Gentile,4,5 Joseph Glaser,4,5 Deborah C. Good,25 Kayhan Gültekin,26 Jeffrey S. Hazboun,7 Ross J. Jennings,6 Aaron D. Johnson,24,27 Megan L. Jones,24 Andrew R. Kaiser,4,5 David L. Kaplan,24 Luke Zoltan Kelley,28,29 Joey Shapiro Key,30 Nima Laal,7 Michael T. Lam,18,19 William G Lamb,9 T. Joseph W. Lazio,31,27 Natalia Lewandowska,32 Tingting Liu,24 Duncan R. Lorimer,4,5 Jing Luo,33,∗ Ryan S. Lynch,34 Dustin R. Madison,4,5 Alexander McEwen,24 Maura A. McLaughlin,4,5 Chiara M. F. Mingarelli,35,8 Cherry Ng,36 David J. Nice,37 Stella Koch Ocker,6 Ken D. Olum,38 Timothy T. Pennucci,39 Nihan S. Pol,9 Scott M. Ransom,40 Paul S. Ray,41 Joseph D. Romano,42 Brent J. Shapiro-Albert,43,4,5 Xavier Siemens,7,24 Joseph Simon,31,27 Magdalena Siwek,44 Renée Spiewak,45 Ingrid H. Stairs,25 Daniel R. Stinebring,46 Kevin Stovall,15 Joseph K. Swiggum,37,† Jessica Sydnor,4,5 Stephen R. Taylor,9 Jacob E. Turner,4,5 Michele Vallisneri,31,27 Sarah J. Vigeland,24 Haley M. Wahl,4,5 Gregory Walsh,4,5 Caitlin A. Witt§,29,47,4,5 Olivia Young,18,19
The NANOGrav Collaboration

1 X-Ray Astrophysics Laboratory, NASA Goddard Space Flight Center, Code 662, Greenbelt, MD 20771, USA
2 Department of Physics and Astronomy, Widener University, One University Place, Chester, PA 19013, USA
3 Department of Physics, University of Florida, 2001 Museum Rd., Gainesville, FL 32611, USA
4 Department of Physics and Astronomy, West Virginia University, P.O. Box 6315, Morgantown, WV 26506, USA
5 Center for Gravitational Waves and Cosmology, West Virginia University, Chestnut Ridge Research Building, Morgantown, WV 26505, USA
6 Cornell Center for Astrophysics and Planetary Science and Department of Astronomy, Cornell University, Ithaca, NY 14853, USA
7 Department of Physics, Oregon State University, Corvallis, OR 97331, USA
8 Department of Physics, University of Connecticut, 196 Auditorium Road, U-3046, Storrs, CT 06269-3046, USA
9 Department of Physics and Astronomy, Vanderbilt University, 2301 Vanderbilt Place, Nashville, TN 37235, USA
10 Kavli Institute for Astronomy and Astrophysics, Peking University, Beijing, 100871 China
11 Department of Physics, Montana State University, Bozeman, MT 59717, USA
12 Department of Physics and Astronomy, Franklin & Marshall College, P.O. Box 3003, Lancaster, PA 17604, USA
13 University of Virginia, Department of Astronomy, P.O. Box 400325, Charlottesville, VA 22904, USA
14 George Mason University, Fairfax, VA 22030, resident at the Naval Research Laboratory, Washington, DC 20375, USA
15 National Radio Astronomy Observatory, 1003 Lopezville Rd., Socorro, NM 87801, USA
16 Department of Physics, Hillsdale College, 33 E. College Street, Hillsdale, MI 49242, USA
17 Eureka Scientific, 2452 Delmer Street, Suite 100, Oakland, CA 94602-3017, USA
18 School of Physics and Astronomy, Rochester Institute of Technology, Rochester, NY 14623, USA
19 Laboratory for Multiwavelength Astrophysics, Rochester Institute of Technology, Rochester, NY 14623, USA
20 Infinia ML, 202 Rigsbee Avenue, Durham, NC 27701, USA
21 Department of Astronomy, University of Maryland, College Park, MD 20742, USA
22 Center for Exploration and Space Studies (CRESST), NASA/GSFC, Greenbelt, MD 20771, USA
23 NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA
24 Center for Gravitation, Cosmology and Astrophysics, Department of Physics, University of Wisconsin-Milwaukee, P.O. Box 413, Milwaukee, WI 53201, USA
25 Department of Physics and Astronomy, University of British Columbia, 6224 Agricultural Road, Vancouver, BC V6T 1Z1, Canada
26 University of Michigan, Dept. of Astronomy, 1085 S. University Ave., Ann Arbor, MI 48104, USA
27 Theoretical AstroPhysics Including Relativity (TAPIR), MC 350-17, California Institute of Technology, Pasadena, California 91125, USA
28 Department of Astronomy, University of California at Berkeley, Berkeley, CA 94720, USA
29 Center for Interdisciplinary Exploration and Research in Astrophysics (CIERA), Northwestern University, Evanston, IL 60208
30 University of Washington Bothell, 18115 Campus Way NE, Bothell, WA 98011, USA
31 Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA 91109, USA
32 Department of Physics, State University of New York at Oswego, Oswego, NY, 13126, USA
33 Department of Astronomy & Astrophysics, University of Toronto, 50 Saint George Street, Toronto, ON M5S 3H4, Canada
34 Green Bank Observatory, P.O. Box 2, Green Bank, WV 24944, USA
35 Center for Computational Astrophysics, Flatiron Institute, 162 5th Avenue, New York, New York, 10010, USA
36 Dunlap Institute for Astronomy and Astrophysics, University of Toronto, 50 St. George St., Toronto, ON M5S 3H4, Canada
37 Department of Physics, Lafayette College, Easton, PA 18042, USA
38 Institute of Cosmology, Department of Physics and Astronomy, Tufts University, Medford, MA 02155, USA
39 Institute of Physics, Eötvös Loránd University, Pázmány P. s. 1/A, 1117 Budapest, Hungary
40 National Radio Astronomy Observatory, 520 Edgemont Road, Charlottesville, VA 22903, USA
41 Space Science Division, Naval Research Laboratory, Washington, DC 20375-5352, USA
42 Department of Physics and Astronomy, Texas Tech University, Lubbock, TX 79409-1051, USA
43 Giant Army, 915A 17th Ave, Seattle, WA 98122
44 Center for Astrophysics, Harvard University, Cambridge, MA 02138, USA
45 Jodrell Bank Centre for Astrophysics, Department of Physics and Astronomy, University of Manchester, Manchester M13 9PL, UK
46 Department of Physics and Astronomy, Oberlin College, Oberlin, OH 44074, USA
47 Adler Planetarium, 1300 S. DuSable Lake Shore Dr., Chicago, IL 60605, USA

Corresponding author: Caitlin A. Witt§
caitlin.witt@nanograv.org

arXiv:2301.03608v1 [astro-ph.GA] 9 Jan 2023

ABSTRACT
Pulsar timing array collaborations, such as the North American Nanohertz Observatory for Gravitational Waves (NANOGrav), are seeking nanohertz gravitational waves emitted by supermassive black hole binaries formed in the aftermath of galaxy mergers. We have searched for continuous waves from individual circular supermassive black hole binaries using NANOGrav's recent 12.5-year data set. We created new methods to accurately model the uncertainties on pulsar distances in our analysis, and we implemented new techniques to account for a common red noise process in pulsar timing array data sets while searching for deterministic gravitational wave signals, including continuous waves. As we found no evidence for continuous waves in our data, we placed 95% upper limits on the strain amplitude of continuous waves emitted by these sources. At our most sensitive frequency of 7.65 nanohertz, we placed a sky-averaged limit of h0 < (6.82 ± 0.35) × 10^−15, and h0 < (2.66 ± 0.15) × 10^−15 in our most sensitive sky location. Finally, we placed a multi-messenger limit of M < (1.41 ± 0.02) × 10^9 M⊙ on the chirp mass of the supermassive black hole binary candidate 3C 66B.

Keywords: Gravitational waves – Methods: data analysis – Pulsars: general
1. INTRODUCTION

Supermassive black hole binaries (SMBHBs) are expected to form in the aftermath of galaxy mergers, when the two constituent supermassive black holes eventually become gravitationally bound (Begelman et al. 1980). If they are able to reach an advanced stage of evolution, with sub-parsec orbital separations, these binaries are predicted to be among the brightest sources of low-frequency gravitational waves (GWs) in the universe, emitting at frequencies of ∼10^−9–10^−7 Hz. The GWs emitted by discrete SMBHBs are known as continuous waves (CWs) due to their minimal frequency evolution, while the dominant source of nanohertz GWs is expected to be the stochastic background of GWs (GWB) that has contributions from the entire cosmic population of SMBHBs and potentially other sources (Sesana et al. 2004; Burke-Spolaor et al. 2019).

∗ Author is deceased
† NANOGrav Physics Frontiers Center Postdoctoral Fellow

By carefully monitoring the radio pulses from stable millisecond pulsars (MSPs) over many years, pulsar timing arrays (PTAs) should be able to detect correlated fluctuations in the pulse times of arrival due to the influence of low-frequency GWs (Detweiler 1979; Foster & Backer 1990). There are multiple PTA collaborations currently operating; among them, the North American Nanohertz Observatory for Gravitational Waves (NANOGrav; McLaughlin 2013), the Parkes Pulsar Timing Array (PPTA; Hobbs 2013a,b), and the European Pulsar Timing Array (EPTA; Desvignes et al. 2016) have each produced multiple pulsar timing data sets with which to search for GWs. These groups, along with other pulsar timing projects, combine efforts as a consortium known as the International Pulsar Timing Array (IPTA; Verbiest et al. 2016a).

These PTA data sets have enabled numerous searches for GWs from SMBHBs, as well as primordial GWs (e.g., Benetti et al.
2022), cosmic strings (e.g., Arzoumanian et al. 2018), and cosmological phase transitions (Arzoumanian et al. 2021a; Xue et al. 2021). Modeling has suggested that the GWB signal from SMBHBs will be detected first (Rosado et al. 2015). While PTAs have not yet detected a GWB, they have placed steadily improving limits on such a signal (van Haasteren et al. 2011; Demorest et al. 2013; Shannon et al. 2013; Lentati et al. 2015; Shannon et al. 2015; Verbiest et al. 2016b; Arzoumanian et al. 2016, 2018) until around 2015, when published limits began to stabilize at a characteristic strain value of a few times 10^−15. In the NANOGrav 12.5-year data set (Alam et al. 2021a), PPTA second data release (Kerr et al. 2020), EPTA data release 2 (Chen et al. 2021), and IPTA data release 2 (Perera et al. 2019), not only does the upper limit no longer decrease, but a common red noise (CRN) process with characteristics similar to those predicted for a SMBHB-origin GWB was detected to high significance, albeit without evidence for the specific spatial correlation expected for the GWB (Arzoumanian et al. 2020a; Goncharov et al. 2021; Antoniadis et al. 2022; Falxa et al. in prep).

While this common red-noise process is heartening for future GWB searches, it has sparked new challenges for CW searches, as the background takes the form of a noise process, which (like any noise process underlying a signal) will work to disrupt the sensitivity of CW searches. Over the past decades, all-sky and all-frequency CW searches have improved their sensitivity by several orders of magnitude in GW strain (e.g., Yardley et al. 2010; Arzoumanian et al. 2014; Zhu et al. 2014; Babak et al. 2016; Aggarwal et al. 2019), allowing the sensitivity horizon of PTAs to expand accordingly. This has allowed the PTA horizon to include increasing numbers of specific systems of interest (e.g., Lommen & Backer 2001; Jenet et al. 2004; Aggarwal et al. 2019; Charisi et al. 2022). PTAs are likely to reach the sensitivities required to detect a CW soon after the GWB is detected (Rosado et al. 2015; Mingarelli et al. 2017; Kelley et al. 2018; Bécsy et al. 2022b), and we are working to revise and improve CW search methodologies as CW upper limits decrease.

In this paper, we present the results of an all-sky search for CWs from individual circular SMBHBs in the NANOGrav 12.5-year data set. This work is an extension of the searches performed on previous NANOGrav data sets (presented in Arzoumanian et al. 2014 and Aggarwal et al. 2019 for the 5- and 11-year data sets, respectively), and uses techniques analogous to the search for CWs in the IPTA data release 2 (Falxa et al. in prep). Our new search benefited from the use of the more sensitive 12.5-year data set. Most critically, however, in this work we needed to account for the existence of an emerging common-noise signal in this data set, and understand the impact that this signal may have on CW sensitivity.

This paper is organized as follows. In section 2, we present an overview of the data used for our analysis, details of new pulsar distance modeling methods created for CW searches, and a description of the GW signals and analysis methods used throughout this paper. In section 3, we present the results of our GW searches, and in section 4, we interpret their broader astrophysical context. For the busy reader, our main results can be summarized as follows:

• For accurate low-frequency CW searches, the CRN that has been seen in GWB searches must be accounted for in our signal modeling; otherwise, our detection metrics may report a false positive result.
• Once the CRN was taken into account, we found that no CWs were detected in the 12.5-year data set.
• With this knowledge, we placed stringent limits on the CW amplitude as a function of GW frequency.
For the most sensitive frequency of 7.65 × 10^−9 Hz, we reach strain 95% upper limits of (6.82 ± 0.35) × 10^−15, and we also placed limits on the CW amplitude at this frequency as a function of sky location.
• While our all-sky sensitivity has improved with each subsequent NANOGrav data set, we found herein that for a portion of the sky, the upper limit at the most sensitive frequency of 7.65 × 10^−9 Hz is comparable to or worse than in previous data sets. Through extensive simulations, we linked this effect to the newly-detectable CRN process in the 12.5-year data set.
• We used these limits to make inferences about the local population of SMBHBs, and limited the distance to an SMBHB emitting at 7.65 × 10^−9 Hz to be greater than 86.65 Mpc for a 10^9 M⊙ binary in the most sensitive sky location.
• We used multi-messenger techniques to update limits on the chirp mass of the SMBHB candidate 3C 66B to be less than (1.41 ± 0.02) × 10^9 M⊙ and placed new limits on the chirp mass of SMBHB candidate HS 1630+2355 to be less than (1.28 ± 0.03) × 10^10 M⊙.

In section 5, we discuss the implications of these results. In section 6, we summarize our conclusions.

2. METHODS

2.1. The 12.5-year Data Set

We analyzed the NANOGrav 12.5-year data set, originally published as Alam et al. (2021a,b), which consists of times-of-arrival (TOAs) and timing models from 47 pulsars. Two versions of the data set were created from the original observations, taken between 2004 and 2017, using independent analyses. Here, we make use of the narrowband version of the data set (Alam et al. 2021a). This adds 2 pulsars and 1.5 years of observations over the previous 11-year data set. For GW analyses, we require the pulsars to have a timing baseline of at least 3 years; therefore, we use only 45 of the 47 pulsars included in the full data set.
However, the 11-year data set included only 34 pulsars that could be used in GW analyses, so this addition, which includes a factor of ∼1.5 increase in the number of pulse TOAs, represents a significant addition of data, increasing our sensitivity. It is important to note that the 12.5-year data set is not merely an addition of TOAs to previous releases, but a full re-analysis with an updated pipeline, described in detail in Alam et al. (2021a). Thus, our search also benefited from improved timing precision for pulsars shared with previous data sets.

2.2. Signal Model

As in previous NANOGrav searches for continuous gravitational waves, we describe the effect of an individual SMBHB on a pulsar's TOAs and its timing model. A starting point is the residuals, δt, obtained after subtracting a basic timing model (which excludes noise and GW parameters) from the measured arrival times. While the methods remain nearly identical to previous iterations, slight alterations have been made to improve consistency with other work in the field, to reflect more recent data, and to include the CRN in the CW search. As such, we lay out the methods with particular focus on any instances that have changed since NANOGrav's most recent CW search (Aggarwal et al. 2019). Note that throughout this paper, we use units where G = c = 1, cosmology calculations assume H0 = 69.32 km/s/Mpc, and the GW derivations assume General Relativity.

The pulsar residuals can be separated into multiple components as

δt = Mϵ + n_white + n_red + s,    (1)

where M is the design matrix, which describes the linearized timing model, and ϵ is a vector of the timing model parameter offsets. This term allows the timing model parameters of each pulsar to be adjusted in accordance with the presence of any additional signals.
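As a toy illustration of the decomposition in Equation (1) — with entirely hypothetical dimensions, noise amplitudes, and signal parameters, not values from the actual data set — the residual vector can be assembled as:

```python
import numpy as np

rng = np.random.default_rng(42)
n_toa, n_par = 200, 3

# 12.5 years of TOA epochs (seconds); all amplitudes below are made up
t = np.linspace(0.0, 12.5 * 365.25 * 86400.0, n_toa)
M = np.vander(t / t.max(), n_par)                # toy linearized timing-model design matrix
eps = rng.normal(0.0, 1e-7, n_par)               # timing-model parameter offsets
n_white = rng.normal(0.0, 1e-7, n_toa)           # white noise
n_red = np.cumsum(rng.normal(0.0, 1e-8, n_toa))  # random walk standing in for red noise
s = 1e-7 * np.sin(2.0 * np.pi * 7.65e-9 * t)     # sinusoid standing in for a CW signal

delta_t = M @ eps + n_white + n_red + s          # Equation (1)
print(delta_t.shape)
```

The point is only the additive structure: the timing-model term Mϵ can absorb part of any injected signal, which is why the design matrix appears alongside the noise and GW terms.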
The variables n_white and n_red refer to vectors describing the pulsar white and red noise, respectively, and s is a vector of the GW-induced signal present in the residuals.

2.2.1. CW Signal

For a GW source located at right ascension α and declination δ, we define the polar angle θ = π/2 − δ and azimuthal angle φ = α. The strain of GWs emitted from such a source can be written in terms of two polarizations as

h_ab(t, Ω̂) = e^+_ab(Ω̂) h_+(t, Ω̂) + e^×_ab(Ω̂) h_×(t, Ω̂),    (2)

where Ω̂ is a unit vector pointing from the GW source to the Earth (along the direction of propagation), h_{+,×} are the polarization amplitudes, and e^{+,×}_ab are the polarization tensors. These can be written in the solar system barycenter frame as

e^+_ab = p̂_a p̂_b − q̂_a q̂_b,
e^×_ab = p̂_a q̂_b + q̂_a p̂_b,    (3)

and are constructed from the basis vectors

n̂ = (sin θ cos φ, sin θ sin φ, cos θ) = −Ω̂,
p̂ = (cos ψ cos θ cos φ − sin ψ sin φ, cos ψ cos θ sin φ + sin ψ cos φ, −cos ψ sin θ),
q̂ = (sin ψ cos θ cos φ + cos ψ sin φ, sin ψ cos θ sin φ − cos ψ cos φ, −sin ψ sin θ).    (4)

Note that this basis is different from that used in Aggarwal et al. (2019), to maintain better consistency with previous references and the standards used by other GW detectors. Differences can be reduced to a rotation of the frame by an angle equivalent to the GW polarization angle ψ. These polarization tensors are used to construct the antenna pattern function F^{+,×}(Ω̂), which describes the response of the pulsar (at unit vector û) to the GW source, as in Taylor et al. (2016), where

F^A(Ω̂) ≡ (1/2) [û_a û_b / (1 + Ω̂ · û)] e^A_ab(Ω̂).    (5)

Now, we can write the signal s induced by the GW as seen in the pulsar's residuals as

s(t, Ω̂) = F^+(Ω̂) ∆s_+(t) + F^×(Ω̂) ∆s_×(t),    (6)

where ∆s_{+,×} is the difference between the signal induced at the Earth (the "Earth term") and at the pulsar (the "pulsar term").
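The construction in Equations (3)–(5) is compact enough to sketch directly. The helper below is a hypothetical illustration (not NANOGrav's actual implementation): it builds n̂, p̂, and q̂ for a given source sky location and polarization angle, forms the polarization tensors, and contracts them against a pulsar direction û:

```python
import numpy as np

def antenna_pattern(theta, phi, psi, u):
    """Return (F+, Fx) per Equations (3)-(5) for a source at polar angle
    theta, azimuth phi, polarization angle psi, and pulsar unit vector u."""
    n = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])
    omega_hat = -n                                   # propagation direction, Eq. (4)
    p = np.array([np.cos(psi) * np.cos(theta) * np.cos(phi) - np.sin(psi) * np.sin(phi),
                  np.cos(psi) * np.cos(theta) * np.sin(phi) + np.sin(psi) * np.cos(phi),
                  -np.cos(psi) * np.sin(theta)])
    q = np.array([np.sin(psi) * np.cos(theta) * np.cos(phi) + np.cos(psi) * np.sin(phi),
                  np.sin(psi) * np.cos(theta) * np.sin(phi) - np.cos(psi) * np.cos(phi),
                  -np.sin(psi) * np.sin(theta)])
    e_plus = np.outer(p, p) - np.outer(q, q)         # Equation (3)
    e_cross = np.outer(p, q) + np.outer(q, p)
    denom = 1.0 + np.dot(omega_hat, u)
    f_plus = 0.5 * (u @ e_plus @ u) / denom          # Equation (5)
    f_cross = 0.5 * (u @ e_cross @ u) / denom
    return f_plus, f_cross
```

A quick sanity check on the rotation property noted in the text: shifting ψ by π/2 maps p̂ → −q̂ and q̂ → p̂, so both F^+ and F^× flip sign.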
This can be written as

∆s_{+,×}(t) = s_{+,×}(t_p) − s_{+,×}(t),    (7)

where t and t_p represent the time when the GW passes the Earth and the pulsar, respectively. These times can be related geometrically by

t_p = t − L(1 + Ω̂ · û),    (8)

where û is the line-of-sight vector to the pulsar and L is the distance to the pulsar (see section 2.3.4 for further discussion of this value).

For a circular binary at zeroth post-Newtonian (0-PN) order, s_{+,×} can be written as

s_+(t) = [M^{5/3} / (d_L ω(t)^{1/3})] [−sin 2Φ(t) (1 + cos² ι)],
s_×(t) = [M^{5/3} / (d_L ω(t)^{1/3})] [2 cos 2Φ(t) cos ι],    (9)

where ι is the inclination angle of the SMBHB, d_L is the luminosity distance to the source, ω(t) and Φ(t) are the time-dependent angular orbital frequency and phase, respectively, and M ≡ (m_1 m_2)^{3/5} / (m_1 + m_2)^{1/5} is a combination of the two black hole masses known as the chirp mass. Again, note that the forms of these signals have been reorganized compared to those used in Aggarwal et al. (2019); due to the rotated frame of the antenna pattern functions now in use, they are equivalent. The variables M and ω refer to the redshifted values of these quantities, which relate to the rest-frame versions M_r and ω_r as

M_r = M / (1 + z),    ω_r = ω(1 + z).    (10)

However, PTAs are currently only sensitive to individual SMBHBs in the local universe, where (1 + z) ∼ 1.

For a CW, the initial orbital angular frequency ω_0 is related to the GW frequency by ω_0 = πf_GW, where ω_0 = ω(t_0). For this search, we define the reference time t_0 as MJD 57933 (2017 June 29), the last observation date for the 12.5-year data set. The time-dependent orbital phase and frequency of the binary are given by

Φ(t) = Φ_0 + (1/32) M^{−5/3} [ω_0^{−5/3} − ω(t)^{−5/3}],
ω(t) = ω_0 [1 − (256/5) M^{5/3} ω_0^{8/3} t]^{−3/8},    (11)

where Φ_0 refers to the initial orbital phase (Arzoumanian et al. 2014).
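In the G = c = 1 units used here, one solar mass is ≈ 4.925 × 10^−6 s and one megaparsec is ≈ 1.029 × 10^14 s, so Equation (11) and the standard 0-PN strain amplitude h_0 = 2M^{5/3}(πf_GW)^{2/3}/d_L can be evaluated numerically. The sketch below (hypothetical helper names; redshift factors ignored) ties two of the quoted results together: a 10^9 M⊙ chirp-mass binary at 86.65 Mpc emitting at 7.65 nHz has h_0 ≈ 2.66 × 10^−15, the most-sensitive-sky-location upper limit.

```python
import numpy as np

TSUN = 4.92549e-6     # G * Msun / c**3: one solar mass in seconds
MPC = 1.02927e14      # one megaparsec in seconds (Mpc / c)

def omega_of_t(t, mc_sun, f_gw):
    """Orbital angular frequency at time t (s, relative to the reference
    epoch), following the 0-PN evolution of Equation (11)."""
    mc = mc_sun * TSUN
    w0 = np.pi * f_gw  # omega_0 = pi * f_GW
    return w0 * (1.0 - (256.0 / 5.0) * mc**(5.0 / 3.0) * w0**(8.0 / 3.0) * t) ** (-3.0 / 8.0)

def strain_amplitude(mc_sun, f_gw, dl_mpc):
    """h0 = 2 M^(5/3) (pi f_GW)^(2/3) / dL, everything in geometric units."""
    mc = mc_sun * TSUN
    return 2.0 * mc**(5.0 / 3.0) * (np.pi * f_gw)**(2.0 / 3.0) / (dl_mpc * MPC)

print(strain_amplitude(1e9, 7.65e-9, 86.65))   # ~2.66e-15
```

Evaluating omega_of_t over a 12.5-yr lookback also confirms why such a system is a good "continuous wave": the frequency changes by well under 0.1% over the whole baseline.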
To account for the evolution of high chirp mass binaries over our observations, rather than assuming that there is no frequency evolution, we use the full expression for ω(t), as in Aggarwal et al. (2019).

2.2.2. Noise Model

For each individual pulsar, we model both white and red noise. We use a white noise model that is identical to that used in previous NANOGrav analyses, with three parameters: EFAC, EQUAD, and ECORR. EFAC scales the template-fitting TOA uncertainties induced by finite pulse signal-to-noise ratios by a multiplicative factor, EQUAD adds white noise in quadrature, and ECORR describes white noise that is correlated across TOAs derived from data collected simultaneously (Lam et al. 2017).

For consistency with previous NANOGrav analyses, to model individual pulsar red noise, the noise spectrum is divided into 30 linearly spaced bins, ranging from 1/T_obs to 30/T_obs, where T_obs is the total observation baseline for each pulsar. Then, the power spectral density of the red noise is fit to a power-law model as in Shannon & Cordes (2010) and Lam et al. (2017), where

P(f) = [A_red² / (12π²)] (f/f_yr)^{−γ_red} yr³.    (12)

Here, f_yr ≡ 1/(1 year), A_red is the red noise amplitude, and γ_red is the power-law spectral index. The prior on A_red is log-uniform, with log₁₀ A_red in the range [−20, −11], while the prior on γ_red is uniform in the range [0, 7].

As mentioned above, for the first time, a CRN signal is now detectable in the 12.5-year data set (Arzoumanian et al. 2020a). Because of this, we included a CRN term in our signal model for a portion of our analyses. The results of searches that only model a CW necessitated this addition, and are described in detail in section 3.
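The power-law spectrum of Equation (12) is simple to evaluate on the 30 linearly spaced bins described above. A minimal sketch, with a hypothetical amplitude and a spectral index chosen purely for illustration:

```python
import numpy as np

FYR = 1.0 / (365.25 * 86400.0)   # reference frequency f_yr = 1/(1 year), in Hz

def power_law_psd(f, A, gamma):
    """P(f) = A^2 / (12 pi^2) * (f / f_yr)^(-gamma), in yr^3 (Equation 12)."""
    return A**2 / (12.0 * np.pi**2) * (f / FYR) ** (-gamma)

T_obs = 12.5 * 365.25 * 86400.0         # a 12.5-yr baseline, in seconds
freqs = np.arange(1, 31) / T_obs        # 30 linearly spaced bins: 1/T_obs .. 30/T_obs
psd = power_law_psd(freqs, A=1e-15, gamma=4.0)   # illustrative A and gamma
```

Because the spectrum is a pure power law, doubling the frequency reduces P(f) by a factor of 2^γ, which makes the steep low-frequency dominance of red noise explicit.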
The power spectral density of the CRN,

P(f) = \frac{A_{CRN}^2}{12 \pi^2} \left( \frac{f}{f_{yr}} \right)^{-\gamma_{CRN}} \, \mathrm{yr}^3,    (13)

takes the same form as that of the pulsar red noise in Equation 12, but with an amplitude A_CRN and spectral index \gamma_CRN that are common to all of the pulsars in the array.

2.3. Bayesian Methods

We utilized Bayesian inference techniques to determine the posterior distributions of GW parameters. In previous CW analyses (Arzoumanian et al. 2014; Aggarwal et al. 2019), these results were compared to a frequentist metric, the Fp statistic (Ellis et al. 2012), to confirm our key results. However, as this method does not currently account for a common process other than a CW in the data, more development will be necessary to produce reliable frequentist results on the 12.5-year data set. Therefore, in this work, we will focus solely on the Bayesian searches, and the frequentist analyses will be presented in a future work.

In each analysis, we include the BayesEphem model (Vallisneri et al. 2020) to account for the uncertainties in the Solar System ephemeris, which, as first described in Arzoumanian et al. (2018), can have large impacts on the computation of GW upper limits with PTAs. We used DE438 (Folkner & Park 2018) plus BayesEphem to transform from individual observatory reference frames to an inertial frame centered at the Solar System Barycenter.

As in previous NANOGrav CW searches, we use the enterprise (Ellis et al. 2019) package to construct the priors and evaluate the likelihood, which takes the same form as in Aggarwal et al. (2019) and Arzoumanian et al. (2014). The Markov Chain Monte Carlo (MCMC) sampler package PTMCMCSampler (Ellis & van Haasteren 2017) was used to explore the parameter space.
The CW signal model can be described by nine global parameters:

\{\theta, \phi, f_{GW}, \Phi_0, \psi, \iota, M, d_L, h_0\},    (14)

which describe the circular SMBHB's:

• position on the sky (θ, φ);
• GW frequency, related to the orbital frequency at some reference time (fGW);
• orbital phase at some reference time (Φ0);
• GW polarization angle (ψ);
• orbital inclination (ι);
• chirp mass (M);
• luminosity distance (dL);
• strain amplitude (h0), which is related to the chirp mass, GW frequency, and luminosity distance.

Since h0 can be defined as

h_0 = \frac{2 M^{5/3} (\pi f_{GW})^{2/3}}{d_L},    (15)

there is a degeneracy between h0, M, fGW, and dL, and therefore only eight of these parameters are required to fully describe the global CW signal. The following types of searches use a variety of prior setups to sample the necessary eight global parameters, and are described below and summarized in Table 1.

As in Aggarwal et al. (2019), to determine if a CW has been detected by any of our analyses, we first performed a detection analysis with the priors described in Table 1, with the key difference between this and upper limit analyses being a log-uniform prior on the strain amplitude of the CW. Then, we calculated the Bayes factor using the Savage-Dickey formula (Dickey 1971),

B_{10} \equiv \frac{\mathrm{evidence}[H_1]}{\mathrm{evidence}[H_0]} = \frac{p(h_0 = 0 \mid H_1)}{p(h_0 = 0 \mid D, H_1)}.    (16)

Here, H1 is the model with a CW, H0 is the model without one, p(h0 = 0 | H1) is the prior at h0 = 0, and p(h0 = 0 | D, H1) is the posterior at h0 = 0. Since H1 and H0 are nested models (i.e., H0 is H1 with h0 = 0), we used the Savage-Dickey formula to estimate p(h0 = 0 | D, H1) as the average fraction of samples in the lowest-amplitude bin in a histogram of h0 samples for a range of bin sizes. We then computed the one-sigma error on the Bayes factor as

\sigma = \frac{B_{10}}{\sqrt{n}},    (17)

where n is the number of samples in the lowest-amplitude bin.
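The Savage-Dickey estimate of Equations 16 and 17 amounts to comparing prior and posterior densities at h0 = 0. A minimal sketch on synthetic samples follows; it is a simplified variant that bins in log10(h0) with a single bin width, rather than averaging over h0-histogram bin sizes as the analysis does, and the prior range and fake chain are illustrative:

```python
import math
import random

def savage_dickey(log10h_samples, lo=-18.0, hi=-11.0, nbins=100):
    """Estimate B10 = p(h0=0|H1) / p(h0=0|D,H1) from log10(h0) posterior samples.

    With a uniform prior on log10(h0) over [lo, hi], the prior mass in the lowest
    of nbins equal-width bins is 1/nbins; the posterior mass is the fraction of
    samples in that bin (Eq. 16). The one-sigma error is B10/sqrt(n) (Eq. 17)."""
    width = (hi - lo) / nbins
    n = sum(1 for s in log10h_samples if s < lo + width)
    if n == 0:
        return float("inf"), float("inf")  # undefined: needs the zoom-in treatment
    b10 = (1.0 / nbins) / (n / len(log10h_samples))
    return b10, b10 / math.sqrt(n)

# Synthetic "no-detection" chain: the posterior piles up at low amplitudes,
# so the posterior density at h0 -> 0 exceeds the prior density and B10 < 1.
random.seed(0)
samples = [-18.0 + 3.0 * random.random() for _ in range(20000)]
b10, err = savage_dickey(samples)
print(b10 < 1.0)
```

The `n == 0` branch is exactly the failure mode discussed in section 3.1: if the sampler never visits the lowest-amplitude bin, the raw Savage-Dickey estimate is undefined and a zoomed follow-up run is needed.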
As with the Bayes factor values, the average error is computed for a range of histogram bin sizes.

Throughout this work, we computed 95% upper limits as the 95th percentile of the relevant strain (or chirp mass, for multi-messenger analyses) posterior distributions. For these analyses, a uniform prior on the strain amplitude is used, which translates to a linear-exponential (LinExp) prior on log10(h0). The error on the 95% upper limit, due to the finite number of samples, is calculated as

\sigma_{UL} = \frac{\sqrt{x(1 - x)/N_s}}{p\left(h_0 = h_0^{95\%} \mid D\right)},    (18)

where x = 0.95 and N_s is the number of effective samples in the MCMC chain.

2.3.1. All-Sky Searches

To search for GWs from SMBHBs located in any direction, we use uniform priors on the source sky position (cos θ, φ), as well as the cosine of the source inclination cos ι, polarization angle ψ, and GW phase Φ0. We used log-uniform priors on h0 for detection analyses, and uniform priors on h0 for upper limit analyses, so as to set the most conservative upper limit. For both analysis types, priors on log10(h0) span the range [−18, −11], which provides an over-conservative range around the sensitivity of the most recent data sets (order −15), and whose minimum is well below our sensitivity.

We performed many searches at fixed values of fGW to evaluate detection statistics and our sensitivity across the entire nanohertz GW band. The lowest frequency value was set by the time span of our data set, fGW = 1/(12.9 years) = 2.45 × 10−9 Hz. The highest frequency value is limited by the observation cadence of our data (approximately one observation per 2–4 weeks). However, SMBHBs at that frequency, in the mass range where their strains would be large enough to be detectable by PTAs, have exceedingly short inspiral timescales (a few weeks up to ∼ 3 months). Thus, they are unlikely to be detectable in our data set (Islo et al.
2019; Aggarwal et al. 2020). Therefore, we set our maximum frequency to 3.178 × 10−7 Hz (equivalent to one GW cycle every ∼ 36 days and a GW inspiral time of ∼ 34 days). This is the same high-frequency cutoff value used in Arzoumanian et al. (2014) and Aggarwal et al. (2019).

For most of the frequency band, we searched over log10(M/M⊙) with a log-uniform prior with a range of [7, 10]. However, for very high-frequency sources, we limit the maximum value of the prior to account for high-chirp-mass binaries never emitting GWs at the highest frequencies in our band, as they will have merged prior to emitting GWs at the searched frequency. This cutoff is relevant at fGW ≥ 1.913 × 10−7 Hz. Assuming binaries merge when the orbital frequency is equal to the innermost stable circular orbit (ISCO) frequency, M must satisfy

M_{max} \leq \frac{1}{6^{3/2} \pi f_{GW}} \left[ \frac{q}{(1 + q)^2} \right]^{3/5},    (19)

where q is the SMBHB mass ratio. Here, we calculated the chirp mass cutoff for q = 1.

2.3.2. Sky Map

Due to the non-uniform distribution of pulsars on the sky, the NANOGrav PTA is not equally sensitive in all directions. To analyze the differences in sensitivity, once detection analyses were completed, we placed upper limits on 768 pixels distributed isotropically across the sky using healpy (Górski et al. 2005; Zonca et al. 2019); each pixel covers an area of 53.72 square degrees. This resolution was chosen to satisfy healpy's requirements for map transformations while matching our desired resolution. We allowed the sampler to search a uniform prior across each of the 768 pixels, so as to still sample the entire sky across the entire analysis.

Due to the large computational cost required to conduct 768 independent runs, the sky map is created at only a single frequency, and only upper limits are computed. We selected 7.65 × 10−9 Hz, as it was the most sensitive frequency in the sky-averaged analysis.
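The chirp-mass cutoff of Equation 19 above can be checked numerically. Equation 19 is in geometrized units, so the bound comes out in seconds and must be converted to solar masses; the conversion constant below is the standard GM⊙/c³. Reassuringly, at the quoted threshold frequency of 1.913 × 10⁻⁷ Hz, the cutoff lands at the top of the searched prior range, log10(M/M⊙) = 10, which is why it only binds above that frequency:

```python
import math

MSUN_S = 4.925490947e-6  # G * Msun / c^3 in seconds

def mchirp_max_msun(f_gw_hz, q=1.0):
    """ISCO-based chirp-mass cutoff of Eq. 19, converted to solar masses."""
    mc_seconds = (1.0 / (6.0**1.5 * math.pi * f_gw_hz)) * (q / (1.0 + q) ** 2) ** 0.6
    return mc_seconds / MSUN_S

mc = mchirp_max_msun(1.913e-7)
print(round(math.log10(mc), 2))  # -> 10.0, the upper edge of the log10(M) prior
```

At the 3.178 × 10⁻⁷ Hz band edge the same function gives a lower cutoff, so the prior shrinks toward the high-frequency end of the search.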
As this is in the low-frequency regime where we expect the inclusion of the CRN to be significant, it is included in our signal model. All other modeling is done identically to section 2.3.1, and is summarized in Table 1.

2.3.3. Targeted Search

In addition to the two variations of searches described above, we also perform a targeted search for two known SMBHB candidates, 3C 66B and HS 1630+2355. Rather than searching for a generic SMBHB within a nearby galaxy cluster, as was done in Aggarwal et al. (2019) and Arzoumanian et al. (2021b), here we targeted these binary candidates directly. 3C 66B was the subject of Arzoumanian et al. (2020b), and was first identified because of observed orbital motion in the AGN core (Sudou et al. 2003). Here, we were able to provide an updated analysis with the addition of new data included in the 12.5-year data set. HS 1630+2355 was first identified as a periodic quasar in Graham et al. (2015), and was identified as a top PTA CW candidate in Xin et al. (2021) due to its location near our best-timed pulsars.

For the targeted search, we perform detection and upper limit analyses in the same way as in section 2.3.1, with a few differences in the model priors. Because we know the sky location and luminosity distance to 3C 66B, as well as a frequency estimate, these parameters are set to constants in this search. This allows us to place constraints directly on the (observer-frame) chirp mass of the binary, rather than its GW strain amplitude. For a detection analysis, the prior on log10(M/M⊙) is log-uniform in the range [7, 10], while for upper limit analyses, the prior is uniform over this range. The remaining priors are identical to the above analyses, and are summarized in Table 1.

2.3.4.
Pulsar Distance Priors

In this work, we adopted a data-driven approach to handle the large uncertainties on pulsar distance measurements, which, in addition to a phase at each pulsar, affect the modeling of the pulsar terms of the CW signal. As in previous searches, the pulsar distance was used as a free parameter in the search. This allowed us to marginalize over the pulsar distance, and avoid incorrect modeling of the signal at the location of the pulsar.

In previous versions of this search (e.g., Aggarwal et al. 2019; Arzoumanian et al. 2020b), the pulsar distance prior was constructed from a Gaussian scaled to the parallax distance and associated uncertainty listed in Verbiest et al. (2012); if no distance was listed, a value of 1.0 ± 0.2 kpc was assumed. While this assumption is reasonable when placing upper limits (see discussion within Arzoumanian et al. 2020b), as the PTA reaches sensitivities where a detection is nearly possible, an improvement was needed.

Table 1. CW parameter priors for each analysis.

              | All-Sky           | All-Sky          | Sky Map           | Targeted          | Targeted
Analysis Type | Detection         | Upper Limit      | Upper Limit       | Detection         | Upper Limit
--------------|-------------------|------------------|-------------------|-------------------|------------------
CRN           | Y/N               | Y/N              | Y                 | Y/N               | Y/N
log10 h       | Uniform(-18,-11)  | LinExp(-18,-11)  | LinExp(-18,-11)   | --                | --
log10 M       | Uniform(7,Mmax)   | Uniform(7,Mmax)  | Uniform(7,Mmax)   | Uniform(7,Mmax)   | LinExp(7,Mmax)
log10 dL      | --                | --               | --                | Constant          | Constant
log10 fGW     | Constant (many)   | Constant (many)  | Constant (single) | Constant          | Constant
φ             | Uniform(0,2π)     | Uniform(0,2π)    | Uniform(pixel)    | Constant          | Constant
cos θ         | Uniform(-1,1)     | Uniform(-1,1)    | Uniform(pixel)    | Constant          | Constant
ψ             | Uniform(0,π)      | Uniform(0,π)     | Uniform(0,π)      | Uniform(0,π)      | Uniform(0,π)
Φ0            | Uniform(0,2π)     | Uniform(0,2π)    | Uniform(0,2π)     | Uniform(0,2π)     | Uniform(0,2π)
cos ι         | Uniform(-1,1)     | Uniform(-1,1)    | Uniform(-1,1)     | Uniform(-1,1)     | Uniform(-1,1)

In this work, every pulsar distance prior was constructed from a measurement or estimate.
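The two prior constructions that follow, the parallax-inverted Gaussian of Equation 20 and the flat-topped DM-distance prior of Equation 21, can be sketched as unnormalized densities. The units (parallax in mas, distance in kpc, so that ϖ ≈ 1/L) and the example pulsar are illustrative assumptions:

```python
import math

def parallax_distance_prior(L_kpc, px_mas, sigma_px_mas):
    """Unnormalized distance prior from a parallax measurement (Eq. 20):
    invert the Gaussian parallax likelihood, picking up a 1/L^2 Jacobian."""
    return (1.0 / (math.sqrt(2.0 * math.pi) * sigma_px_mas * L_kpc**2)
            * math.exp(-((px_mas - 1.0 / L_kpc) ** 2) / (2.0 * sigma_px_mas**2)))

def dm_distance_prior(L, L_dm, sigma_dm):
    """Unnormalized flat-topped DM-distance prior (Eq. 21): uniform within
    +/-20% of the NE2001 estimate L_dm, with half-Gaussian shoulders of
    standard deviation 0.25 * sigma_dm outside that plateau."""
    s = 0.25 * sigma_dm
    if L < 0.8 * L_dm:
        return math.exp(-((L - 0.8 * L_dm) ** 2) / (2.0 * s**2))
    if L > 1.2 * L_dm:
        return math.exp(-((L - 1.2 * L_dm) ** 2) / (2.0 * s**2))
    return 1.0

# Illustrative pulsar at ~1 kpc with a 10% parallax measurement:
# the prior peaks near L = 1/parallax and falls off away from it,
# while the DM prior is flat across its +/-20% plateau.
print(parallax_distance_prior(2.0, 1.0, 0.1) < parallax_distance_prior(1.0, 1.0, 0.1))
print(dm_distance_prior(1.0, 1.0, 0.2) == 1.0)
```

The half-Gaussian shoulders are what let the sampler drift slightly outside the ±20% plateau, absorbing differences between electron-density models as described below.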
If a pulsar had a significant independent parallax measurement¹, such as from Very Long Baseline Interferometry (VLBI), or a timing parallax measured in the 12.5-year data set, this value was used to construct a prior on the pulsar distance (L),

p(L) = \frac{1}{\sqrt{2\pi}\, \sigma_\varpi L^2} \exp\left[ \frac{-(\varpi - L^{-1})^2}{2 \sigma_\varpi^2} \right],    (20)

which inverts the approximately Gaussian shape of a parallax prior to describe the prior for distance (Vigeland & Vallisneri 2014). Here, significance was defined by the parallax measurement (ϖ) having an associated uncertainty (σϖ) of less than 30%, so as to avoid the introduction of any errors due to the Lutz-Kelker bias (Lutz & Kelker 1973). If multiple measurements of sufficient quality existed, these values and uncertainties were combined with a weighted average before being used to construct the parallax-distance prior, which ensures that the highest-quality measurements contribute the most to the resulting prior.

¹ http://hosting.astro.cornell.edu/research/parallax/, with values compiled from Ding et al. (2020); Jennings et al. (2018); Deller et al. (2019); Guillemot et al. (2016); Stovall et al. (2014); Abdo et al. (2013); Freire et al. (2012); Verbiest et al. (2009); Lazaridis et al. (2009); Chatterjee et al. (2009); Hotan et al. (2006); Lommen et al. (2006); Jacoby et al. (2005); Splaver et al. (2005); Löhmer et al. (2004); Toscano et al. (1999); Camilo et al. (1994)

If there were no parallax measurements that could be used to calculate the pulsar's distance, the pulsar's dispersion measure (DM) was used to construct a distance estimate using NE2001 (Cordes & Lazio 2002) and, subsequently, the distance prior. Since these values are only an estimate, we constructed a broad, nearly uniform prior from the DM-distance value and a 20% uncertainty (Cordes & Lazio 2002; Jones et al. 2017; Lam et al.
2016), with the shape

p(L) =
\begin{cases}
\text{Half-Gaussian} & L < 0.8\, L_{DM} \\
\text{Uniform} & 0.8\, L_{DM} \leq L \leq 1.2\, L_{DM} \\
\text{Half-Gaussian} & L > 1.2\, L_{DM}
\end{cases}    (21)

Here, the Half-Gaussian additions have standard deviations of 0.25 times the DM-distance uncertainty. Unlike a sharp boundary, these additions allowed the sampler to move into the edges of this prior range, which accounted for any differences in distance estimates by alternative electron density models, such as Yao et al. (2017). While pulsar distance priors will still only induce minor influences on the results of an upper limit analysis (Arzoumanian et al. 2020b), by constructing new priors to accurately handle pulsar distance measurements and estimates, we have prepared our methods for a future detection of a CW, which will be more reliant on the pulsar term of the signal than upper limit evaluations. These values and the priors used are compiled in Table 2.

3. RESULTS

3.1. All-Sky Searches

For each GW frequency in our search, we performed a detection analysis on the 12.5-year data which marginalized over the source sky location. Figure 1 shows the Bayes factor for a CW at each searched GW frequency in purple.

Figure 1. Savage-Dickey Bayes factors for a CW at each GW frequency. At low frequencies, inclusion of a CRN in the model (red) is necessary to avoid a false CW detection as in the CW-only model (unfilled purple). Square markers indicate a frequency where the initial analysis returned an undefined Savage-Dickey Bayes factor, meaning the zoom-in analysis was necessary to calculate an accurate Bayes factor. With these methods, we found that no CWs are detected in the 12.5-year data set.

It is important to note the large Bayes factor for fGW = 2.45 × 10−9 Hz (the lowest frequency analyzed), with a steady decrease in the following four frequency bins.
Ordinarily, this would be a first indication of the detection of a CW. However, given the strong evidence for the existence of a CRN process in the 12.5-year data set (Arzoumanian et al. 2020a), it is clear that this signal appears to be of similar form; that is, what we have detected is bright at low frequencies and declines toward higher frequency. Once a common red-noise process is added to the model, with the log10(A_CRN) and γ_CRN parameters fixed to the maximum-likelihood values (−15.80 and 6.08, respectively) found by a search analogous to Arzoumanian et al. (2020a), the Bayes factors for a CW at low fGW return to < 1 (leftmost red points in the figure). Therefore, throughout this paper, we will present the results of many analyses with a fixed CRN included in our model.

We note that a few frequencies above fGW = 1 × 10−7 Hz have B10 values that are returned as undefined. However, upon inspection, this is due to poor sampling in a few frequency bins, where the sampler does not explore low strain values, rather than a detection of a CW. This occurs in areas of parameter space where the likelihood is particularly complex and difficult to explore in a finite run-time due to the numerous noise sources at fGW > 1 × 10−7 Hz, such as covariances between the CW likelihood and pulsar binary orbits, and potential unmodeled red noise above the 30-frequency power-law cutoff (Chalumeau et al. 2022). Therefore, a few elevated Bayes factors are not unexpected. To mitigate this effect, we adapt the methodology described in Chatziioannou et al. (2014), using a second MCMC analysis to "zoom in" on the low end of the strain prior range by limiting the prior to the 10th percentile of the original posterior.
Therefore, the posterior height at h0 = 0 becomes

p(h_0 = 0 \mid D, H_1) = \frac{n_2}{N_2} \frac{n_1}{N_1} \frac{1}{dh},    (22)

with fractional uncertainty

\sqrt{\frac{1}{n_1} + \frac{1}{n_2}},    (23)

where N1 is the number of samples in the initial run and n1 is the number of samples in the focused region (defined as the 10th percentile of the initial run). Then, N2 is the number of samples in the focused run, with n2 of those samples located in the lowest-amplitude bin of width dh. After this procedure, all frequencies have Bayes factor values of B10 ≲ 10.

The only frequency that needed this treatment for both the CW and CW+CRN models is 1.763 × 10−7 Hz, which resulted in a Bayes factor of 15.43 in the CW+CRN case, and 7.79 in the CW-only case. While we inspected our analyses at this frequency with extra care, these Bayes factors are still relatively low compared to those required to claim a detection, especially since binaries at these high frequencies are expected to be quite rare (Kelley et al. 2018; Bécsy et al. 2022b). For comparison, evidence in favor of a given model is generally not considered strong for Bayes factors ≲ 100 (Kass & Raftery 1995). Therefore, we will monitor this frequency in future data sets, but currently, our analyses indicate that no CWs are detected in the 12.5-year data set.

As we found no strong evidence for a GW from an individual SMBHB in the 12.5-year data set, we proceeded to place all-sky upper limits on the GW strain, with results shown in Figure 2. We again conduct this analysis using two different models, one which includes only a CW (purple) and one which includes both a CW and a CRN process (red). While in both cases the most sensitive frequency (that with the lowest strain upper limit) is 7.65 × 10−9 Hz, the strain upper limits are lower when the CRN is included in the model.
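The two-stage density estimate of Equations 22 and 23 is simple to reproduce. The sample counts below are illustrative stand-ins for real MCMC output, chosen only to show the bookkeeping:

```python
import math

def zoomed_posterior_density(n1, N1, n2, N2, dh):
    """Posterior density at h0 = 0 from a zoom-in run (Eq. 22): the fraction of
    initial samples below the zoom boundary (n1/N1) times the density estimated
    within the focused run (n2/N2 per bin of width dh)."""
    return (n2 / N2) * (n1 / N1) * (1.0 / dh)

def fractional_uncertainty(n1, n2):
    """Fractional uncertainty on that density estimate (Eq. 23)."""
    return math.sqrt(1.0 / n1 + 1.0 / n2)

# Illustrative counts: 10% of the initial chain falls in the focused region,
# and the zoomed run places 400 samples in the lowest-amplitude bin.
dens = zoomed_posterior_density(n1=5000, N1=50000, n2=400, N2=20000, dh=1e-16)
err = fractional_uncertainty(5000, 400)
print(err < 0.06)  # roughly 5% fractional error on the density
```

Note that the uncertainty is dominated by the smaller of the two counts, which is why the focused run is what rescues frequencies where the initial chain leaves the lowest bin empty.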
In this case, we can limit the strain to h0 < (6.82 ± 0.35) × 10−15, while when the CRN is neglected, the best limit we can place on the CW strain is h0 < (9.11 ± 0.10) × 10−15. This trend of the CW+CRN model resulting in lower upper limits than a CW-only model continues until frequencies of approximately 1 × 10−8 Hz, above which, where the effect of the power-law CRN is minimal, the upper limit values are nearly equal. Therefore, throughout the remainder of this work, we opted to include the CRN in analyses which are too computationally expensive to be completed with both models, such as the sky map analyses described in section 3.2.

Figure 2. All-sky CW strain 95% upper limits and associated error regions, with (red) and without (purple) a CRN included in the model. At low frequencies, modeling the CRN is necessary to avoid over-estimating our strain upper limits. We are the least sensitive to CWs at fGW = 1/(1 year) due to the Earth's orbit, creating the large feature seen in this and other figures.

Figure 3. The upper limits on CW strain are continuing to decrease. The 12.5-year data set (red curve and error region) is more sensitive than the 11-year, 9-year, and 5-year (blue, orange, and blue curves, respectively) at high frequencies. At the most sensitive frequency of fGW = 7.65 × 10−9 Hz, the CRN is impeding further sensitivity improvements, and upper limits are comparable between the 12.5-year and 11-year data sets. At frequencies greater than fyr, NANOGrav's sensitivity has improved by a factor of 1.40 since the 11-year data set.

In Figure 3, we compare this result to those of previous NANOGrav searches for CWs (Aggarwal et al. 2019).
While analyses have shown a factor of ∼ 2 improvement between the previous three data sets, we see only a modest sensitivity improvement between the 11-year and 12.5-year data, with only a factor of 1.07 between the two lowest strain limits. In addition to the smaller fractional increase in observing baseline between the 11- and 12.5-year data sets as compared to previous data sets, this is likely due to the presence of the CRN, which, while it is no longer causing a false positive in the CW search if included in the model, does represent a significant noise process that will limit our sensitivity to low-frequency CWs over the years to come (Hazboun et al. 2019b).

To confirm this hypothesis, we calculated the sensitivity curves of the 9-, 11-, and 12.5-year data sets using each pulsar's red and white noise contributions and timing model with hasasia (Hazboun et al. 2019a,b) and calculated the relative improvement in sensitivity between each data set at high frequencies (> fyr), where red noise has little effect. We observed that on average, the hasasia-calculated sensitivity at these frequencies improved by a factor of 1.28 between the 9- and 11-year data sets, and 1.24 between the 11- and 12.5-year data sets. In our full Bayesian analysis, our upper limits at frequencies above fyr improved by a factor of 1.52 between the 9- and 11-year data sets, and 1.40 between the 11- and 12.5-year data sets. These improvements are even greater than the hasasia-calculated expectations, so we are able to conclude that NANOGrav's sensitivity to CWs is improving as expected at high frequencies where red noise is not dominant.

3.2. Sky Map

In Figure 4, we show the GW strain upper limits for a model including a CRN at the most sensitive CW frequency fGW = 7.65 × 10−9 Hz as a function of sky location. As expected, the portion of the sky that is the least sensitive to CWs is that which contains the fewest pulsars. At the most sensitive pixel, the strain upper limit is h0 < (2.66 ± 0.15) × 10−15, while at the least sensitive pixel, h0 < (1.12 ± 0.05) × 10−14, a range of sensitivities that varies by a factor of ∼ 4.

Figure 4. Map of CW strain 95% upper limits at fGW = 7.65 × 10−9 Hz, the most sensitive frequency searched, for the 12.5-year data set. Pulsar locations are shown as white stars, with new pulsars added in the 12.5-year data set outlined in red. The most sensitive pixel is marked with a red dot, and is located at an RA of 19h07m30s and a Dec of −30°00′00″. In this region, where our best-timed pulsars lie, our upper limits are nearly an order of magnitude more sensitive than at the least sensitive pixel.

In Figure 5, we compare the 12.5-year CW strain map to that constructed in Aggarwal et al. (2019) for the 11-year data set by plotting ∆h95 = h95,12.5 − h95,11. While a portion of the sky shows a significant reduction in strain upper limits, many pixels show an increase in strain upper limit, indicating a loss of sensitivity in the newest data set for much of the sky at our most sensitive frequency, including in the most sensitive area of the sky.

To investigate the cause of this apparent sensitivity loss, we conducted an analysis of the simulated data utilized in Pol et al. (2021). We selected portions of the data set with included pulsars and observation baselines corresponding to the 11- and 12.5-year data sets that also included a CRN corresponding to that found in Arzoumanian et al. (2018). Then, we conducted identical upper limit analyses for an equatorial slice of sky pixels (i.e., for the pixels with θ ∼ π/2).
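The 768-pixel grid used for these maps corresponds to a HEALPix resolution of nside = 8 (since the pixel count is 12 × nside²), and the per-pixel area quoted in section 2.3.2 follows from simple arithmetic, without needing the healpy package itself:

```python
import math

def healpix_npix(nside):
    """Number of equal-area HEALPix pixels at a given nside."""
    return 12 * nside**2

def healpix_pixel_area_deg2(nside):
    """Area per pixel in square degrees; the full sky is 4*pi steradians."""
    full_sky_deg2 = 4.0 * math.pi * (180.0 / math.pi) ** 2  # ~41252.96 deg^2
    return full_sky_deg2 / healpix_npix(nside)

print(healpix_npix(8))                       # -> 768 pixels, as used in the sky map
print(round(healpix_pixel_area_deg2(8), 2))  # -> 53.71 deg^2, matching the quoted
                                             #    53.72 deg^2 to rounding
```

HEALPix requires nside to be a power of two for its map transformations, which is the constraint that fixes the pixel count at 768 for this resolution.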
When plotted against φ in Figure 6, the patterns in ∆h95 in the real data are well within the range represented by the same analysis in the 10 simulated data sets, each containing a different realization of the CRN. The mean value of ∆h95 across each included pixel is nearly identical for the real data and the simulations. Together, this allows us to confidently state that this apparent pattern in our evolving sensitivity across the sky is due to the emerging CRN.

Figure 5. Difference in strain 95% upper limits for the 12.5-year data set versus the 11-year data set at our most sensitive frequency. Blue pixels indicate a decrease in upper limit, while red pixels indicate an increase. The overall increase in upper limit across much of the sky at the most sensitive frequency was found to be due to the presence of the CRN, and is consistent with the all-sky limit shown in Figure 3.

Figure 6. The difference in strain upper limits for an equatorial slice of the sky map shown in Figure 5, plotted against φ (or RA). The results for the real data (red points) are well within the range of values encompassed by the ten realizations simulated (blue), with near-identical mean values of ∆h95 (horizontal red and blue lines). Therefore, we conclude that the overall increase in upper limit across much of the sky at our most sensitive frequency is due to the 12.5-year data set's sensitivity to the CRN.

4. ASTROPHYSICAL LIMITATIONS OF NEARBY SMBHBS

In recent years, numerous studies have modeled the SMBHB population in the nearby universe (Simon et al. 2014; Rosado & Sesana 2014; Mingarelli et al. 2017; Arzoumanian et al. 2021b) and multiple SMBHB candidates have been discovered with electromagnetic techniques (Sudou et al.
2003; Graham et al. 2015; Hu et al. 2020; Lehto & Valtonen 1996; Charisi et al. 2016; Liu et al. 2019). Even without a CW detection, our limits can add crucial insights into SMBHB populations, including limiting the distance to nearby SMBHBs and placing multi-messenger mass constraints on SMBHB candidates.

4.1. Distance Limits

Our limits on CW strain can be transformed using Equation 15 to calculate the 95% lower limit on the luminosity distance to a source of a given chirp mass. The distance limits for an SMBHB with M = 10^9 M⊙ are shown in Figure 7. For the most sensitive frequency of fGW = 7.65 × 10−9 Hz, we can limit the distance to an SMBHB with M = 10^9 M⊙ to dL > 33.85 Mpc. These limits may be scaled to larger or smaller SMBHBs directly using Equation 15 as

D_{95,M} = D_{95,10^9 M_\odot} \times \left( \frac{M}{10^9 M_\odot} \right)^{5/3}.    (24)

However, it is important to note that while this frequency produces the lowest strain upper limit, it does not produce the farthest luminosity distance lower limit. This value is dL > 34.99 Mpc at fGW = 3.817 × 10−8 Hz.

Figure 7. The 95% lower limits on the luminosity distance to an individual SMBHB. While we can limit SMBHBs emitting GWs at the most sensitive value of fGW = 7.65 × 10−9 Hz to dL > 33.85 Mpc, at fGW = 3.817 × 10−8 Hz they can be limited to farther away, at dL > 34.99 Mpc.

This technique can be applied to the strain upper limit sky map as well, to calculate the 95% luminosity distance lower limit for an SMBHB emitting CWs at fGW = 7.65 × 10−9 Hz as a function of sky location. The results of this transformation are shown in Figure 8. At the most sensitive sky location, we can limit the minimum distance to an M = 10^9 M⊙ SMBHB to be dL > 86.65 Mpc, and that to an M = 10^10 M⊙ SMBHB to dL > 4.02 Gpc. In the least sensitive sky location, we can limit the minimum distance to an M = 10^9 M⊙ SMBHB to be dL > 20.50 Mpc, and that to an M = 10^10 M⊙ SMBHB to dL > 0.95 Gpc. These values vary by over a factor of 4 between the most and least sensitive parts of the sky.

Figure 8. Map of the 95% lower limit on the distance to individual SMBHBs with M = 10^9 M⊙ and fGW = 7.65 × 10−9 Hz. White diamonds indicate the positions of known SMBHB candidates and large galaxy clusters that could contain an SMBHB. As PTA sensitivities improve, these candidates may come into reach.

4.2. SMBHB Number Density Limits

Figure 9. Number density limits of SMBHBs per comoving Mpc−3 with chirp masses of 10^8 (blue), 10^8.5 (orange), 10^9 (green), and 10^9.5 (red) solar masses. We placed significantly more stringent upper limits on the largest SMBHBs than the smallest ones in the local universe.

Using our limits on the luminosity distance to an SMBHB, we can also place limits on the local number density of SMBHBs of a given binary configuration. After placing a lower limit on the effective comoving distance dc to sources of given binary parameters, we can say the local density is less than nc = 1/Vc = [(4/3)π dc³]⁻¹. However, to consider this as a limit on the average density in some volume that is relatively local but larger than the explicitly measured volume, there should be some additional pre-factor to account for the confidence of having a source within this volume, based on Poisson distributions of sources. For a number of events Λ = nc Vc, the likelihood of no detections is P0(Λ) = e^{−Λ}.
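As a consistency check, the chirp-mass scaling of Equation 24 reproduces the sky-map values quoted in section 4.1; the inputs below are the quoted M = 10⁹ M⊙ limits at the most and least sensitive pixels:

```python
def scale_distance_limit(d95_mpc, mchirp_msun, ref_msun=1e9):
    """Scale a 95% luminosity-distance lower limit from a reference chirp mass
    (Eq. 24): at fixed strain sensitivity, D95 scales as (M / M_ref)^(5/3)."""
    return d95_mpc * (mchirp_msun / ref_msun) ** (5.0 / 3.0)

# Quoted limits for M = 1e9 Msun, rescaled to M = 1e10 Msun:
print(round(scale_distance_limit(86.65, 1e10) / 1000.0, 2))  # -> 4.02 (Gpc), as quoted
print(round(scale_distance_limit(20.50, 1e10) / 1000.0, 2))  # -> 0.95 (Gpc), as quoted
```

A factor of 10 in chirp mass thus moves the distance horizon by 10^(5/3) ≈ 46, which is why the heaviest binaries are constrained out to gigaparsec scales while 10⁸ M⊙ systems are only limited within tens of megaparsecs.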
To find an upper limit on the occurrence rate, Λ_UL, we must integrate from that limit to infinity, such that the result matches our desired confidence level p0. Therefore, F_{UL}(Λ_{UL}) = \int_{Λ_{UL}}^{\infty} e^{-Λ}\, dΛ = 1 - p_0 is solved as

n_{ul} = \frac{-\ln(1 - p_0)}{V_c}.    (25)

Here, our desired confidence level is p0 = 0.95. To calculate the comoving distance dc, we transform our luminosity distance limits (shown in Figure 7) as dc = dL/(1 + z), where z is calculated for the relevant luminosity distance values using astropy.

The results of this calculation are shown for various SMBHB chirp masses in Figure 9. As can be expected, we find that we can place more constraining upper limits on large SMBHBs (M = 10^9.5 M⊙) than smaller ones (M = 10^8 M⊙) in the local universe.

4.3. Multi-Messenger Analyses

Using the methodology described in section 2.3.3, we conducted a multi-messenger search for GWs from the SMBHB candidate 3C 66B to provide an update to the results of Arzoumanian et al. (2020b). The detection analyses result in nearly identical Savage-Dickey Bayes factors, whether the CRN was included or not. This is to be expected, as the CRN is very weak at frequencies as high as that of 3C 66B (fGW = 6.04 × 10−8 Hz). The Bayes factors for the CW-only analysis and the CW+CRN analysis are 0.70 ± 0.02 and 0.67 ± 0.01, respectively.

Figure 10. Posterior distributions for a targeted upper limit analysis of the SMBHB candidate 3C 66B. While the 95% upper limits (red and purple lines) are lower than in the 11-year data set (blue line), they cannot rule out the model from Iguchi et al. (2010) (orange region).
Both of these values are very near 1, meaning that the data do not indicate the presence of a CW corresponding to a binary within 3C 66B.

Because no GW was detected, we constrain the chirp mass of a potential binary with an upper limit analysis, again performed with and without a CRN to confirm consistency. The posteriors from these two searches are plotted in Figure 10, with resulting 95% upper limits of (1.41 ± 0.02) × 10^9 M⊙ when a CRN is included, and (1.34 ± 0.01) × 10^9 M⊙ when only CWs are included in the signal. For comparison, the 95% chirp mass upper limit for 3C 66B from the 11-year data set was 1.65 × 10^9 M⊙. This represents an improvement of 2.4 × 10^8 M⊙, or a factor of 1.2 smaller; by adding pulsars, extending timing baselines, and improving timing and searching methods, the PTA's sensitivity has clearly improved. These upper limits are nearer to the value of the upper bound of the Iguchi et al. (2010) chirp mass estimate. In subsequent data sets, or by using more sophisticated analyses such as advanced noise modeling (Simon & Hazboun in prep), this error region may soon be within reach.

In Arzoumanian et al. (2020b), it was shown that a targeted search, like this analysis, results in a factor of ∼2 reduction in upper limits compared to those of an all-sky search at a corresponding GW frequency. When converted to strain amplitudes rather than chirp masses, the 95% upper limits are 1.90 × 10^−14 and 1.74 × 10^−14 for the searches with and without a CRN, respectively. In comparison, the all-sky analysis in section 3.1 returned strain upper limits of 3.56 × 10^−14 and 3.82 × 10^−14 at 6.01 × 10^−8 Hz, the nearest frequency to that of 3C 66B at 6.04 × 10^−8 Hz. These all-sky strain upper limits are a factor of 1.88 and 2.20 larger, very similar to the value for the 11-year data set.
Therefore, the improvement in upper limits gained by using this multi-messenger technique has stayed stable across the addition of new pulsars, more data, and the emergence of the CRN.

Additionally, we performed a new search for the electromagnetic SMBHB candidate HS 1630+2355. First identified as a periodic quasar in Graham et al. (2015), this candidate is identified as a top PTA CW candidate in Xin et al. (2021) with a gravitational wave frequency of 1.13 × 10^−8 Hz and a luminosity distance of 5.26 Gpc. In the 12.5-year data set, we do not detect any CWs from HS 1630+2355; in a CW+CRN analysis (necessary due to the low GW frequency), we calculate a Bayes factor of 0.74 ± 0.02. We are then able to set an upper limit of (1.28 ± 0.03) × 10^10 M⊙ on the chirp mass of an SMBHB within HS 1630+2355, which corresponds to a strain of 4.03 × 10^−15. For comparison, the all-sky upper limit at the nearest frequency of 1.10 × 10^−8 Hz is 1.07 × 10^−14, a factor of 2.66 larger than the targeted upper limit. Due to this candidate's favorable position near the PTA's most sensitive sky location, we are able to overcome the much larger source distance to set a constraining upper limit. However, this limit is still approximately 4 times larger than the estimated chirp mass of 3.15 × 10^9 M⊙ (Xin et al. 2021), meaning that more data are needed to rule out or detect an SMBHB within HS 1630+2355.

4.4. Local Detection Prospects

At the most sensitive sky pixel, we conducted a final upper limit analysis across the entire frequency band, with results plotted in Figure 11. Here we observed that for all frequencies, the PTA is dramatically more sensitive to CWs from sources at this sky location than across the entire sky on average. Mingarelli et al. (2017) carried out a comprehensive study of the detection prospects of SMBHBs within a 225 Mpc volume, the completeness limit for their chosen K-band luminosity in 2MASS.
Using these new 12.5-year upper limit curves, we assess our level of surprise at our current non-detection of CWs. Figure 11 shows an example realization of the local SMBHB population created with nanohertz gws (Mingarelli 2017).

Figure 11. The 95% strain upper limit curve for the all-sky (solid red) CW search compared with the 95% strain upper limit curve in the most sensitive sky location (red dashed). The non-detection of a nearby SMBHB is unsurprising – there was at best a 0.5% chance of making such a detection. Here we show one of the 398 realizations of the local Universe from Mingarelli et al. (2017) that shows a detectable SMBHB together with our 95% upper limit curves for both sky-averaged and best sky locations. In this realization there are 87 local SMBHBs (all within 225 Mpc); none of them lie above the sky-averaged upper limit curve, but one could be detected if it were at the most-sensitive sky location.

It is one out of 75,000 Monte Carlo realizations Mingarelli et al. (2017) carried out, where they varied black hole masses via the scatter in various M − M_bulge relations, mass ratios, and more. While the chosen realization shows what a detectable SMBHB would look like, on average we found only 398 realizations out of the 75,000 contained detectable SMBHB systems at the best sky location. We therefore only had a 0.5% chance of making a detection of such a local source with the 12.5-year data set. Furthermore, when we consider the entire sky, we found an order of magnitude fewer SMBHBs were detectable – only 43 realizations contained detectable binaries.

It is interesting to compare this result to that of our previous upper limit (Aggarwal et al. 2019). With the NANOGrav 11-year all-sky upper limits, we found 34 detectable SMBHBs and here we find 43 — an overall improvement.
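The realization-counting logic above reduces to a simple operation: a realization counts as "detectable" if any of its binaries lies above the strain upper limit curve at its GW frequency. A toy sketch with made-up numbers (the real analysis uses the 75,000 nanohertz gws realizations and the curves in Figure 11):

```python
def detectable_fraction(realizations, upper_limit):
    """Fraction of Monte Carlo realizations containing at least one source
    whose strain exceeds the 95% upper limit at its GW frequency.

    realizations: list of realizations, each a list of (f_gw, h0) tuples.
    upper_limit:  callable mapping frequency (Hz) -> strain upper limit.
    """
    hits = sum(
        1 for sources in realizations
        if any(h0 > upper_limit(f) for f, h0 in sources)
    )
    return hits / len(realizations)

# Four made-up realizations against a flat 1e-14 strain limit:
toy = [
    [(1e-8, 5e-15), (2e-8, 8e-15)],   # all sources below the limit
    [(1e-8, 2e-14)],                  # one source above -> detectable
    [(3e-8, 9e-15), (1e-8, 3e-15)],   # all below
    [(2e-8, 1.5e-14)],                # above -> detectable
]
print(detectable_fraction(toy, lambda f: 1e-14))  # → 0.5
```

Passing the best-sky-location curve instead of the sky-averaged one as `upper_limit` is what changes the count from 43 to 398 detectable realizations in the analysis described above.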
However, the upper limit at our best sky location has deteriorated due to the CRN, which has in turn decreased the number of detectable binaries by a factor of ∼2, from a 1.2% chance of detection to 0.5%. As was the case in previous sections, we note that this deterioration is happening primarily at low frequencies where the CRN is manifesting in the data, and the most sensitive sky location is heavily affected (Figure 5 and Figure 6). Xin et al. (2021) show that at higher GW frequencies the effect of the GWB, or any equivalent CRN, is very small, so the detection prospects for local SMBHBs are unaffected.

Figure 12. The SMBHB mass function (φ_BHB) derived from astrophysical models shows the modeled number density of SMBHBs (color-bar) across log chirp mass (log10 M/M⊙) and redshift (z). Side panels show φ_BHB in one dimension integrated across each respective variable. Regions that are inconsistent with our 12.5-year CW search are shown in white, with the all-sky (average) and most-sensitive (best) sky location upper limits shown under the solid and dash-dotted white curves, respectively. Created using methods from Casey-Clyde et al. (in prep).

4.5. Binary Population Model Consistency

Finally, it was also useful to assess whether our current non-detection of CWs is consistent with expectations from SMBHB population models. In Figure 12 we compared an astrophysically-motivated SMBHB model to GW upper limits set with the 12.5-year CW search. The SMBHB model was derived from theoretical galaxy major merger rates (Chen et al. 2019), which are themselves based on observed galaxy pair fractions (Mundy et al. 2017) and theoretical galaxy merger timescales.
It is related to the GWB via (Phinney 2001; Sesana 2013)

$h_c^2(f) = \frac{4}{3\pi} \frac{1}{f^{4/3}} \iint \phi_{\rm BHB}(\mathcal{M}, z)\, \frac{\mathcal{M}^{5/3}}{(1 + z)^{1/3}}\, d\mathcal{M}\, dz, \qquad (26)$

where h_c is the characteristic strain of the GWB and M is the chirp mass in the observer frame. This was fit to the results of the NANOGrav search for the GWB in the 12.5-year data set (Arzoumanian et al. 2020a), and assumes the CRN is due to a GWB, comparable to the fit in Middleton et al. (2021).

The GW limits in Figure 12 were calculated using the most sensitive frequency of both the all-sky and most-sensitive sky location analyses. Figure 12 thus shows what regions of z–M parameter space were accessible to the 12.5-year CW search. Since no CWs were detected, we are able to rule out the high-mass and low-z region across the entire sky and at the most sensitive sky location for the PTA's most sensitive frequency.

We calculate the expected number of detectable SMBHBs by relating the differential SMBHB mass function φ_BHB to the differential number of binaries per chirp mass, frequency, and redshift (Sesana et al. 2008) as

$\frac{d^3 N}{d\log\mathcal{M}\, dz\, df} = \frac{d^2 \phi_{\rm BHB}}{d\log\mathcal{M}\, dz}\, \frac{dV}{dz}\, \frac{dz}{dt_r}\, \frac{dt_r}{df_r}\, \frac{df_r}{df}, \qquad (27)$

and integrating across the relevant region of z–M space, while also considering the entire strain sensitivity curve in frequency space. Here, t_r and f_r are the proper time and binary gravitational wave frequency in the SMBHB's rest frame, respectively. We find in both cases that the expected number of SMBHBs is ≪ 1. At the all-sky sensitivity, the calculated number is 0.6^{+1.1}_{−0.4} × 10^−4, while at the most sensitive sky location, the calculated number is 8.6^{+12.9}_{−5.5} × 10^−4. Our non-detection of a CW signal is thus consistent with theoretical models of the SMBHB population, which predict that the most massive, and therefore loudest, SMBHBs are exceedingly rare.

5.
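Equation (26) can be checked numerically with a Riemann sum over a mass–redshift grid. A minimal sketch (the power-law φ_BHB below is an arbitrary placeholder, not the Chen et al. (2019) model, and units are left schematic):

```python
import math

def hc_squared(f, phi, m_grid, z_grid):
    """Riemann-sum evaluation of eq. (26):
    h_c^2(f) = (4 / 3pi) f^(-4/3) * double-integral of
               phi(M, z) * M^(5/3) / (1 + z)^(1/3) dM dz
    over uniformly spaced m_grid and z_grid."""
    dm = m_grid[1] - m_grid[0]
    dz = z_grid[1] - z_grid[0]
    total = sum(
        phi(m, z) * m ** (5.0 / 3.0) / (1.0 + z) ** (1.0 / 3.0)
        for m in m_grid
        for z in z_grid
    )
    return 4.0 / (3.0 * math.pi) / f ** (4.0 / 3.0) * total * dm * dz

# Placeholder mass function: steeply falling in mass, decaying in redshift.
phi = lambda m, z: (m / 1e8) ** -3.0 * math.exp(-z)
m_grid = [1e8 + 1e7 * i for i in range(100)]   # chirp masses (schematic units)
z_grid = [0.01 + 0.02 * i for i in range(50)]  # redshifts

hc = math.sqrt(hc_squared(1e-8, phi, m_grid, z_grid))
```

Whatever φ_BHB is, the f^(−4/3) prefactor fixes h_c ∝ f^(−2/3), the familiar SMBHB background slope; the astrophysics enters only through the overall amplitude.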
DISCUSSION AND FUTURE PROSPECTS

While the NANOGrav PTA is continuing to increase our sensitivity to GWs by adding data from ongoing observations and adding new pulsars to the PTA, our limits on CW strains across the nanohertz GW frequency band and the sky have not improved as steadily as in previous data sets. This is due to the CRN first detected in the 12.5-year data set in Arzoumanian et al. (2020a), which has impacted the PTA's ability to distinguish a CW source. While adding a CRN to the search model that is fixed to the maximum-likelihood values from a dedicated search avoids confusion in detection analyses, this adds a significant source of noise to the PTA, and therefore limits our sensitivity to CWs at frequencies below 10 nHz.

We have entered an interesting era where surprising results will continue to be uncovered. In future data sets, the CRN will likely be even more apparent in the data, and may eventually resolve to be due to a stochastic GWB from SMBHBs (Pol et al. 2021). In any case, due to the multi-frequency nature of the GWB, this will continue to impact CW searches, and significant efforts will be needed to continue development on methods that allow for efficient detection of both types of nanohertz GW signals, such as in Bécsy & Cornish (2020), as well as extensive simulations that evaluate detection possibilities, as in Pol et al. (2021), that include multiple types of GW signal in the simulated data sets. Additionally, significant effort will be needed to improve sampling methods that can efficiently explore the complex CW parameter space (Bécsy et al. 2022a), particularly at high GW frequencies or if full eccentricity modeling is desired (Taylor et al. 2016), complexities which will only be exacerbated as data sets expand.
One promising path forward is targeted searches of quasars, which may be much more likely to host SMBHBs than random galaxies (Casey-Clyde et al. in prep). Since multi-messenger analyses can improve upper limits by a factor of 2 (Arzoumanian et al. 2020b), improve detection prospects (Liu & Vigeland 2021; Charisi et al. 2022), and can be made drastically more efficient than traditional all-sky searches (Charisi et al. 2022), further development of these methods is also crucial: with more data, electromagnetic SMBHB candidates may soon be detectable (Xin et al. 2021), and many more will be identified in upcoming surveys (Charisi et al. 2022; Witt et al. 2022). By balancing these efforts, a CW signal may soon come into reach.

6. CONCLUSIONS

With extensive Bayesian analyses, we have searched the NANOGrav 12.5-year data set for CWs from individual SMBHBs. In our detection analyses, we showed that no CWs were detected to a high degree of confidence. We then placed all-sky upper limits on the strain amplitude for all CWs emitting between 2.45 × 10^−9 Hz and 3.19 × 10^−7 Hz, as well as upper limits as a function of sky location for the 12.5-year data set's most sensitive frequency of 7.65 × 10^−9 Hz.

This analysis also included the development of new methods to accurately reflect the realistic distribution of possible values of pulsar distances from updated measurements. The way we treat these values in search pipelines has a significant impact on our ability to detect the pulsar term of a CW signal, and these methods will be critical as we proceed towards PTA sensitivities that enable a CW detection.

Unlike previous data sets, the 12.5-year data set contains a significant CRN. Therefore, for the first time, we included the CRN in our Bayesian searches by fixing the model parameters to those recovered in Arzoumanian et al. (2020a).
This had a significant effect on the results of many of our analyses, and proved critical to avoid a false detection of a CW at 2.45 × 10^−9 Hz. This process also significantly impeded the reduction of our upper limits between the 11-year and 12.5-year NANOGrav searches at the most sensitive frequency of 7.65 × 10^−9 Hz in most areas of the sky.

Despite these new necessities, we are able to place significant astrophysical constraints on the local SMBHB population. In our most sensitive sky location, we can rule out the existence of any SMBHB with a mass of at least 10^9 M⊙ emitting at 7.65 × 10^−9 Hz within 86.65 Mpc. Furthermore, we demonstrate that significant improvements to chirp mass upper limits of SMBHB candidates can be made through multi-messenger analysis techniques, and limit the chirp mass of 3C 66B to (1.34 ± 0.01) × 10^9 M⊙. With the inclusion of more data, we will soon be able to rule out or confirm this source and other binary candidates, as well as those that are yet undiscovered.

7. ACKNOWLEDGEMENTS

Author contributions: An alphabetical-order author list was used for this paper in recognition of the fact that a large, decade-timescale project such as NANOGrav is necessarily the result of the work of many people. All authors contributed to the activities of the NANOGrav collaboration leading to the work presented here, and reviewed the manuscript, text, and figures prior to the paper's submission. Additional specific contributions to this paper are as follows. ZA, HB, PRB, HTC, MED, PBD, TD, JAE, RDF, ECF, EF, NG-D, PAG, DCG, MLJ, MTL, DRL, RSL, JL, MAM, CN, DJN, TTP, NSP, SMR, KS, IHS, RS, JKS, RS and SJV developed the 12.5-year data set through a combination of observations, arrival time calculations, data checks and refinements, and timing model development and analysis; additional specific contributions to the data set are summarized in Alam et al. (2021a).
CAW coordinated the writing of the paper and led the search. BB, ARK, NSP, JSy, GW, and CAW performed analyses for the project, including exploratory runs. JS and CAW developed methods to include the CRN in the search model. AB, NG-D, JG, KG, SRT, SJV, and CAW proposed for the necessary XSEDE resources to complete these analyses. NSP and CAW performed the sky map simulations. AC-C, LZK, CMFM, and CAW developed the astrophysical interpretations. ADJ provided updates to red noise empirical distributions. GEF, XS, and SJV explored the frequentist analyses. SC, DJN, MAM, and CAW updated the pulsar distance priors. SBS, CMFM, and CAW wrote the manuscript and produced the figures. We thank BB, SC, JMC, NJC, WF, KG, JSH, DLK, LZK, MTL, TJWL, MAM, DJN, KDO, JDR, SRT, and SJV for their thoughtful comments on the manuscript.

Acknowledgements. This work has been carried out by the NANOGrav collaboration, which receives support from National Science Foundation (NSF) Physics Frontiers Center award numbers 1430284 and 2020265. The Arecibo Observatory is a facility of the NSF operated under cooperative agreement (No. AST-1744119) by the University of Central Florida (UCF) in alliance with Universidad Ana G. Méndez (UAGM) and Yang Enterprises (YEI), Inc. The Green Bank Observatory is a facility of the NSF operated under cooperative agreement by Associated Universities, Inc. The National Radio Astronomy Observatory is a facility of the NSF operated under cooperative agreement by Associated Universities, Inc. SBS and CAW were supported in this work by NSF award grant Nos. 1458952 and 1815664. CAW acknowledges support from West Virginia University through a STEM Completion Grant, and acknowledges support from CIERA, the Adler Planetarium, and the Brinson Foundation through a CIERA-Adler postdoctoral fellowship.
SBS is a CIFAR Azrieli Global Scholar in the Gravity and the Extreme Universe program. MC and SRT acknowledge support from NSF grant No. AST-2007993. SRT also acknowledges support from an NSF CAREER Award PHY-2146016, and a Vanderbilt University College of Arts & Science Dean's Faculty Fellowship. CMFM was supported in part by the National Science Foundation under Grants NSF PHY-2020265 and AST-2106552. The Flatiron Institute is supported by the Simons Foundation. Part of this research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. Portions of this work performed at NRL were supported by Office of Naval Research 6.1 funding. Pulsar research at UBC is supported by an NSERC Discovery Grant and by the Canadian Institute for Advanced Research. JS and MV acknowledge support from the JPL RTD program. KDO was supported in part by the National Science Foundation under grant No. 2207267. ECF is supported by NASA under award number 80GSFC21M0002. TD and MTL are supported by an NSF Astronomy and Astrophysics Grant (AAG) award number 2009468. LZK was supported by a Cottrell Fellowships Award (No. 27985) from the Research Corporation for Science Advancement made possible by the National Science Foundation grant No. CHE2125978. MED acknowledges support from the Naval Research Laboratory by NASA under contract S-15633Y. We acknowledge the use of Thorny Flat at WVU, which is funded in part by the National Science Foundation Major Research Instrumentation Program (MRI) award No. 1726534 and WVU. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1548562.
Specifically, it used the Bridges-2 system, which is supported by NSF award number ACI-1928147, at the Pittsburgh Supercomputing Center (PSC) (Towns et al. 2014).

Facilities: Arecibo, GBT

Software: enterprise (Ellis et al. 2019), enterprise_extensions (Taylor et al. 2021), PTMCMCSampler (Ellis & van Haasteren 2017), hasasia (Hazboun et al. 2019a), libstempo (Vallisneri 2020), tempo (Nice et al. 2015), tempo2 (Hobbs et al. 2006), PINT (Luo et al. 2019), matplotlib (Hunter 2007), astropy (Price-Whelan et al. 2018; Astropy Collaboration et al. 2013), healpy (Zonca et al. 2019), HEALPix (Górski et al. 2005), nanohertz gws (Mingarelli 2017)

APPENDIX

A. PULSAR DISTANCE VALUES

REFERENCES

Abdo, A. A., Ajello, M., Allafort, A., et al. 2013, ApJS, 208, 17, doi: 10.1088/0067-0049/208/2/17
Aggarwal, K., Arzoumanian, Z., Baker, P. T., et al. 2019, ApJ, 880, 116, doi: 10.3847/1538-4357/ab2236
—. 2020, ApJ, 889, 38, doi: 10.3847/1538-4357/ab6083
Alam, M. F., Arzoumanian, Z., Baker, P. T., et al. 2021a, ApJS, 252, 4, doi: 10.3847/1538-4365/abc6a0
—. 2021b, ApJS, 252, 5, doi: 10.3847/1538-4365/abc6a1
Antoniadis, J., Arzoumanian, Z., Babak, S., et al. 2022, MNRAS, 510, 4873, doi: 10.1093/mnras/stab3418
Arzoumanian, Z., Brazier, A., Burke-Spolaor, S., et al. 2014, ApJ, 794, 141, doi: 10.1088/0004-637X/794/2/141
—. 2016, ApJ, 821, 13, doi: 10.3847/0004-637X/821/1/13
Arzoumanian, Z., Baker, P. T., Brazier, A., et al. 2018, ApJ, 859, 47, doi: 10.3847/1538-4357/aabd3b
Arzoumanian, Z., Baker, P. T., Blumer, H., et al. 2020a, ApJL, 905, L34, doi: 10.3847/2041-8213/abd401
Arzoumanian, Z., Baker, P. T., Brazier, A., et al. 2020b, ApJ, 900, 102, doi: 10.3847/1538-4357/ababa1

Table 2. Compiled pulsar distance values and uncertainties for each pulsar used in the 12.5-year CW analysis, along with the parallax (PX) or DM prior identifier. Values compiled using measurements from Ding et al.
(2020); Jennings et al. (2018); Deller et al. (2019); Guillemot et al. (2016); Stovall et al. (2014); Abdo et al. (2013); Freire et al. (2012); Verbiest et al. (2009); Lazaridis et al. (2009); Chatterjee et al. (2009); Hotan et al. (2006); Lommen et al. (2006); Jacoby et al. (2005); Splaver et al. (2005); Löhmer et al. (2004); Toscano et al. (1999); Camilo et al. (1994) and Alam et al. (2021a).

Pulsar       Prior   Distance (kpc)   Error (kpc)
B1855+09     PX      1.4              0.24
B1937+21     PX      3.55             0.64
B1953+29     DM      4.64             0.93
J0023+0923   PX      1.82             0.41
J0030+0451   PX      0.32             0.01
J0340+4130   DM      1.71             0.34
J0613-0200   PX      1.06             0.13
J0636+5128   PX      0.73             0.12
J0645+5158   PX      1.11             0.19
J0740+6620   DM      0.68             0.14
J0931-1902   DM      1.88             0.38
J1012+5307   PX      0.83             0.05
J1024-0719   PX      1.08             0.14
J1125+7819   DM      0.65             0.13
J1453+1902   DM      1.15             0.23
J1455-3330   PX      1.01             0.22
J1600-3053   PX      1.96             0.31
J1614-2230   PX      0.69             0.03
J1640+2224   DM      1.14             0.23
J1643-1224   PX      0.45             0.08
J1713+0747   PX      1.11             0.02
J1738+0333   PX      1.47             0.11
J1741+1351   PX      2.36             0.62
J1744-1134   PX      0.42             0.01
J1747-4036   DM      3.5              0.7
J1832-0836   PX      2.1              0.57
J1853+1303   DM      2.08             0.42
J1903+0327   DM      6.49             1.3
J1909-3744   PX      1.17             0.02
J1910+1256   DM      2.35             0.47
J1911+1347   DM      2.08             0.42
J1918-0642   PX      1.17             0.15
J1923+2515   DM      1.63             0.33
J1944+0907   DM      1.8              0.36
J2010-1323   PX      2.45             0.71
J2017+0603   DM      1.57             0.31
J2033+1734   DM      1.99             0.4
J2043+1711   PX      1.39             0.12
J2145-0750   PX      0.64             0.02
J2214+3000   DM      1.54             0.31
J2229+2643   DM      1.43             0.29
J2234+0611   PX      1.19             0.15
J2234+0944   DM      1.0              0.2
J2302+4442   DM      1.18             0.24
J2317+1439   PX      1.62             0.21

Arzoumanian, Z., Baker, P. T., Blumer, H., et al. 2021a, PhRvL, 127, 251302, doi: 10.1103/PhysRevLett.127.251302
Arzoumanian, Z., Baker, P. T., Brazier, A., et al.
2021b, ApJ, 914, 121, doi: 10.3847/1538-4357/abfcd3
Astropy Collaboration, Robitaille, T. P., Tollerud, E. J., et al. 2013, A&A, 558, A33, doi: 10.1051/0004-6361/201322068
Babak, S., Petiteau, A., Sesana, A., et al. 2016, MNRAS, 455, 1665, doi: 10.1093/mnras/stv2092
Bécsy, B., & Cornish, N. J. 2020, Classical and Quantum Gravity, 37, 135011, doi: 10.1088/1361-6382/ab8bbd
Bécsy, B., Cornish, N. J., & Digman, M. C. 2022a, PhRvD, 105, 122003, doi: 10.1103/PhysRevD.105.122003
Bécsy, B., Cornish, N. J., & Kelley, L. Z. 2022b, arXiv e-prints, arXiv:2207.01607. https://arxiv.org/abs/2207.01607
Begelman, M. C., Blandford, R. D., & Rees, M. J. 1980, Nature, 287, 307, doi: 10.1038/287307a0
Benetti, M., Graef, L. L., & Vagnozzi, S. 2022, PhRvD, 105, 043520, doi: 10.1103/PhysRevD.105.043520
Burke-Spolaor, S., Taylor, S. R., Charisi, M., et al. 2019, A&A Rv, 27, 5, doi: 10.1007/s00159-019-0115-7
Camilo, F., Foster, R. S., & Wolszczan, A. 1994, ApJL, 437, L39, doi: 10.1086/187677
Casey-Clyde, J. A., Mingarelli, C., & Trump, J. in prep
Chalumeau, A., Babak, S., Petiteau, A., et al. 2022, MNRAS, 509, 5538, doi: 10.1093/mnras/stab3283
Charisi, M., Bartos, I., Haiman, Z., et al. 2016, MNRAS, 463, 2145, doi: 10.1093/mnras/stw1838
Charisi, M., Taylor, S. R., Runnoe, J., Bogdanovic, T., & Trump, J. R. 2022, MNRAS, 510, 5929, doi: 10.1093/mnras/stab3713
Chatterjee, S., Brisken, W. F., Vlemmings, W. H. T., et al. 2009, ApJ, 698, 250, doi: 10.1088/0004-637X/698/1/250
Chatziioannou, K., Cornish, N., Klein, A., & Yunes, N. 2014, PhRvD, 89, 104023, doi: 10.1103/PhysRevD.89.104023
Chen, S., Sesana, A., & Conselice, C. J. 2019, Monthly Notices of the Royal Astronomical Society, 488, 401, doi: 10.1093/mnras/stz1722
Chen, S., Caballero, R. N., Guo, Y. J., et al. 2021, MNRAS, 508, 4970, doi: 10.1093/mnras/stab2833
Cordes, J. M., & Lazio, T. J. W. 2002, arXiv e-prints, astro.
https://arxiv.org/abs/astro-ph/0207156
Deller, A. T., Goss, W. M., Brisken, W. F., et al. 2019, ApJ, 875, 100, doi: 10.3847/1538-4357/ab11c7
Demorest, P. B., Ferdman, R. D., Gonzalez, M. E., et al. 2013, ApJ, 762, 94, doi: 10.1088/0004-637X/762/2/94
Desvignes, G., Caballero, R. N., Lentati, L., et al. 2016, MNRAS, 458, 3341, doi: 10.1093/mnras/stw483
Detweiler, S. 1979, ApJ, 234, 1100, doi: 10.1086/157593
Dickey, J. M. 1971, The Annals of Mathematical Statistics, 42, 204. http://www.jstor.org/stable/2958475
Ding, H., Deller, A. T., Freire, P., et al. 2020, ApJ, 896, 85, doi: 10.3847/1538-4357/ab8f27
Ellis, J., & van Haasteren, R. 2017, doi: 10.5281/zenodo.1037579
Ellis, J. A., Siemens, X., & Creighton, J. D. E. 2012, ApJ, 756, 175, doi: 10.1088/0004-637X/756/2/175
Ellis, J. A., Vallisneri, M., Taylor, S. R., & Baker, P. T. 2019, ENTERPRISE: Enhanced Numerical Toolbox Enabling a Robust PulsaR Inference SuitE. http://ascl.net/1912.015
Falxa, M., Babak, S., & Chalumeau, A. in prep
Folkner, W. M., & Park, R. S. 2018, Planetary ephemeris DE438 for Juno, Tech. Rep. IOM 392R-18-004, Jet Propulsion Laboratory, Pasadena, CA
Foster, R. S., & Backer, D. C. 1990, ApJ, 361, 300, doi: 10.1086/169195
Freire, P. C. C., Wex, N., Esposito-Farèse, G., et al. 2012, MNRAS, 423, 3328, doi: 10.1111/j.1365-2966.2012.21253.x
Goncharov, B., Shannon, R. M., Reardon, D. J., et al. 2021, ApJL, 917, L19, doi: 10.3847/2041-8213/ac17f4
Górski, K. M., Hivon, E., Banday, A. J., et al. 2005, ApJ, 622, 759, doi: 10.1086/427976
Graham, M. J., Djorgovski, S. G., Stern, D., et al. 2015, MNRAS, 453, 1562, doi: 10.1093/mnras/stv1726
Guillemot, L., Smith, D. A., Laffon, H., et al. 2016, A&A, 587, A109, doi: 10.1051/0004-6361/201527847
Hazboun, J., Romano, J., & Smith, T. 2019a, The Journal of Open Source Software, 4, 1775, doi: 10.21105/joss.01775
Hazboun, J. S., Romano, J. D., & Smith, T. L. 2019b, PhRvD, 100, 104028, doi: 10.1103/PhysRevD.100.104028
Hobbs, G.
2013a, Classical and Quantum Gravity, 30, 224007, doi: 10.1088/0264-9381/30/22/224007
—. 2013b, Classical and Quantum Gravity, 30, 224007, doi: 10.1088/0264-9381/30/22/224007
Hobbs, G. B., Edwards, R. T., & Manchester, R. N. 2006, MNRAS, 369, 655, doi: 10.1111/j.1365-2966.2006.10302.x
Hotan, A. W., Bailes, M., & Ord, S. M. 2006, MNRAS, 369, 1502, doi: 10.1111/j.1365-2966.2006.10394.x
Hu, B. X., D'Orazio, D. J., Haiman, Z., et al. 2020, MNRAS, 495, 4061, doi: 10.1093/mnras/staa1312
Hunter, J. D. 2007, Computing in Science & Engineering, 9, 90, doi: 10.1109/MCSE.2007.55
Iguchi, S., Okuda, T., & Sudou, H. 2010, ApJL, 724, L166, doi: 10.1088/2041-8205/724/2/L166
Islo, K., Simon, J., Burke-Spolaor, S., & Siemens, X. 2019, arXiv e-prints, arXiv:1906.11936. https://arxiv.org/abs/1906.11936
Jacoby, B. A., Hotan, A., Bailes, M., Ord, S., & Kulkarni, S. R. 2005, ApJL, 629, L113, doi: 10.1086/449311
Jenet, F. A., Lommen, A., Larson, S. L., & Wen, L. 2004, ApJ, 606, 799, doi: 10.1086/383020
Jennings, R. J., Kaplan, D. L., Chatterjee, S., Cordes, J. M., & Deller, A. T. 2018, ApJ, 864, 26, doi: 10.3847/1538-4357/aad084
Jones, M. L., McLaughlin, M. A., Lam, M. T., et al. 2017, ApJ, 841, 125, doi: 10.3847/1538-4357/aa73df
Kass, R. E., & Raftery, A. E. 1995, Journal of the American Statistical Association, 90, 773, doi: 10.1080/01621459.1995.10476572
Kelley, L. Z., Blecha, L., Hernquist, L., Sesana, A., & Taylor, S. R. 2018, MNRAS, 477, 964, doi: 10.1093/mnras/sty689
Kerr, M., Reardon, D. J., Hobbs, G., et al. 2020, PASA, 37, e020, doi: 10.1017/pasa.2020.11
Lam, M. T., Cordes, J. M., Chatterjee, S., et al. 2016, ApJ, 821, 66, doi: 10.3847/0004-637X/821/1/66
—. 2017, ApJ, 834, 35, doi: 10.3847/1538-4357/834/1/35
Lazaridis, K., Wex, N., Jessner, A., et al. 2009, MNRAS, 400, 805, doi: 10.1111/j.1365-2966.2009.15481.x
Lehto, H. J., & Valtonen, M. J. 1996, ApJ, 460, 207, doi: 10.1086/176962
Lentati, L., Taylor, S. R., Mingarelli, C. M. F., et al.
2015, MNRAS, 453, 2576, doi: 10.1093/mnras/stv1538
Liu, T., & Vigeland, S. J. 2021, ApJ, 921, 178, doi: 10.3847/1538-4357/ac1da9
Liu, T., Gezari, S., Ayers, M., et al. 2019, ApJ, 884, 36, doi: 10.3847/1538-4357/ab40cb
Löhmer, O., Kramer, M., Driebe, T., et al. 2004, A&A, 426, 631, doi: 10.1051/0004-6361:20041031
Lommen, A. N., & Backer, D. C. 2001, ApJ, 562, 297, doi: 10.1086/323491
Lommen, A. N., Kipphorn, R. A., Nice, D. J., et al. 2006, ApJ, 642, 1012, doi: 10.1086/501067
Luo, J., Ransom, S., Demorest, P., et al. 2019, PINT: High-precision pulsar timing analysis package, Astrophysics Source Code Library, record ascl:1902.007. http://ascl.net/1902.007
Lutz, T. E., & Kelker, D. H. 1973, PASP, 85, 573, doi: 10.1086/129506
McLaughlin, M. A. 2013, Classical and Quantum Gravity, 30, 224008, doi: 10.1088/0264-9381/30/22/224008
Middleton, H., Sesana, A., Chen, S., et al. 2021, MNRAS, 502, L99, doi: 10.1093/mnrasl/slab008
Mingarelli, C. 2017, ChiaraMingarelli/nanohertz_GWs: First release!, v1.0, Zenodo, doi: 10.5281/zenodo.838712
Mingarelli, C. M. F., Lazio, T. J. W., Sesana, A., et al. 2017, Nature Astronomy, 1, 886, doi: 10.1038/s41550-017-0299-6
Mundy, C. J., Conselice, C. J., Duncan, K. J., et al. 2017, MNRAS, 470, 3507, doi: 10.1093/mnras/stx1238
Nice, D., Demorest, P., Stairs, I., et al. 2015, Tempo: Pulsar timing data analysis, Astrophysics Source Code Library, record ascl:1509.002. http://ascl.net/1509.002
Perera, B. B. P., DeCesar, M. E., Demorest, P. B., et al. 2019, MNRAS, 490, 4666, doi: 10.1093/mnras/stz2857
Phinney, E. S. 2001, arXiv e-prints, astro. https://arxiv.org/abs/astro-ph/0108028
Pol, N. S., Taylor, S. R., Kelley, L. Z., et al. 2021, ApJL, 911, L34, doi: 10.3847/2041-8213/abf2c9
Price-Whelan, A. M., Sipőcz, B. M., Günther, H. M., et al. 2018, AJ, 156, 123, doi: 10.3847/1538-3881/aabc4f
Rosado, P. A., & Sesana, A.
2014, MNRAS, 439, 3986, doi: 10.1093/mnras/stu254
Rosado, P. A., Sesana, A., & Gair, J. 2015, MNRAS, 451, 2417, doi: 10.1093/mnras/stv1098
Sesana, A. 2013, MNRAS, 433, L1, doi: 10.1093/mnrasl/slt034
Sesana, A., Haardt, F., Madau, P., & Volonteri, M. 2004, ApJ, 611, 623, doi: 10.1086/422185
Sesana, A., Vecchio, A., & Colacino, C. N. 2008, MNRAS, 390, 192, doi: 10.1111/j.1365-2966.2008.13682.x
Shannon, R. M., & Cordes, J. M. 2010, ApJ, 725, 1607, doi: 10.1088/0004-637X/725/2/1607
Shannon, R. M., Ravi, V., Coles, W. A., et al. 2013, Science, 342, 334, doi: 10.1126/science.1238012
Shannon, R. M., Ravi, V., Lentati, L. T., et al. 2015, Science, 349, 1522, doi: 10.1126/science.aab1910
Simon, J., & Hazboun, J. in prep
Simon, J., Polin, A., Lommen, A., et al. 2014, ApJ, 784, 60, doi: 10.1088/0004-637X/784/1/60
Splaver, E. M., Nice, D. J., Stairs, I. H., Lommen, A. N., & Backer, D. C. 2005, ApJ, 620, 405, doi: 10.1086/426804
Stovall, K., Lynch, R. S., Ransom, S. M., et al. 2014, ApJ, 791, 67, doi: 10.1088/0004-637X/791/1/67
Sudou, H., Iguchi, S., Murata, Y., & Taniguchi, Y. 2003, Science, 300, 1263, doi: 10.1126/science.1082817
Taylor, S. R., Baker, P. T., Hazboun, J. S., Simon, J., & Vigeland, S. J. 2021, enterprise_extensions. https://github.com/nanograv/enterprise_extensions
Taylor, S. R., Huerta, E. A., Gair, J. R., & McWilliams, S. T. 2016, ApJ, 817, 70, doi: 10.3847/0004-637X/817/1/70
Toscano, M., Britton, M. C., Manchester, R. N., et al. 1999, ApJL, 523, L171, doi: 10.1086/312276
Towns, J., Cockerill, T., Dahan, M., et al. 2014, Computing in Science & Engineering, 16, 62, doi: 10.1109/MCSE.2014.80
Vallisneri, M. 2020, libstempo: Python wrapper for Tempo2, Astrophysics Source Code Library, record ascl:2002.017. http://ascl.net/2002.017
Vallisneri, M., Taylor, S. R., Simon, J., et al. 2020, ApJ, 893, 112, doi: 10.3847/1538-4357/ab7b67
van Haasteren, R., Levin, Y., Janssen, G. H., et al.
+2011, MNRAS, 414, 3117, doi: 10.1111/j.1365-2966.2011.18613.x
+Verbiest, J. P. W., Weisberg, J. M., Chael, A. A., Lee, K. J., & Lorimer, D. R. 2012, ApJ, 755, 39, doi: 10.1088/0004-637X/755/1/39
+Verbiest, J. P. W., Bailes, M., Coles, W. A., et al. 2009, MNRAS, 400, 951, doi: 10.1111/j.1365-2966.2009.15508.x
+Verbiest, J. P. W., Lentati, L., Hobbs, G., et al. 2016a, MNRAS, 458, 1267, doi: 10.1093/mnras/stw347
+—. 2016b, MNRAS, 458, 1267, doi: 10.1093/mnras/stw347
+Vigeland, S. J., & Vallisneri, M. 2014, MNRAS, 440, 1446, doi: 10.1093/mnras/stu312
+Witt, C. A., Charisi, M., Taylor, S. R., & Burke-Spolaor, S. 2022, ApJ, 936, 89, doi: 10.3847/1538-4357/ac8356
+Xin, C., Mingarelli, C. M. F., & Hazboun, J. S. 2021, ApJ, 915, 97, doi: 10.3847/1538-4357/ac01c5
+Xue, X., Bian, L., Shu, J., et al. 2021, PhRvL, 127, 251303, doi: 10.1103/PhysRevLett.127.251303
+Yao, J. M., Manchester, R. N., & Wang, N. 2017, ApJ, 835, 29, doi: 10.3847/1538-4357/835/1/29
+Yardley, D. R. B., Hobbs, G. B., Jenet, F. A., et al. 2010, MNRAS, 407, 669, doi: 10.1111/j.1365-2966.2010.16949.x
+Zhu, X.-J., Hobbs, G., Wen, L., et al. 2014, MNRAS, 444, 3709, doi: 10.1093/mnras/stu1717
+Zonca, A., Singer, L., Lenz, D., et al. 2019, Journal of Open Source Software, 4, 1298, doi: 10.21105/joss.01298

diff --git a/SNE2T4oBgHgl3EQfCAY8/content/tmp_files/load_file.txt b/SNE2T4oBgHgl3EQfCAY8/content/tmp_files/load_file.txt
new file mode 100644
index 0000000000000000000000000000000000000000..ae97266f3bb378e9960099e2e102a395ef7ffb6d
--- /dev/null
+++ b/SNE2T4oBgHgl3EQfCAY8/content/tmp_files/load_file.txt
@@ -0,0 +1,2057 @@
+filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf, len=2056
+Draft version January 11, 2023
+Typeset using LaTeX twocolumn style in AASTeX63
+The NANOGrav 12.5-year Data Set: Bayesian Limits on Gravitational Waves from Individual Supermassive Black Hole Binaries
+Zaven Arzoumanian,1 Paul T. Baker,2 Laura Blecha,3 Harsha Blumer,4,5 Adam Brazier,6 Paul R. Brook,4,5 Sarah Burke-Spolaor,4,5 Bence Bécsy,7 J. Andrew Casey-Clyde,8 Maria Charisi,9 Shami Chatterjee,6 Siyuan Chen,10 James M. Cordes,6 Neil J. Cornish,11 Fronefield Crawford,12 H. Thankful Cromartie,13 Megan E. DeCesar,14 Paul B. Demorest,15 Timothy Dolch,16,17 Brendan Drachler,18,19 Justin A. Ellis,20 E. C. Ferrara,21,22,23 William Fiore,4,5 Emmanuel Fonseca,4,5 Gabriel E. Freedman,24 Nathan Garver-Daniels,4,5 Peter A. Gentile,4,5 Joseph Glaser,4,5 Deborah C. Good,25 Kayhan Gültekin,26 Jeffrey S. Hazboun,7 Ross J. Jennings,6 Aaron D. Johnson,24,27 Megan L. Jones,24 Andrew R. Kaiser,4,5 David L. Kaplan,24 Luke Zoltan Kelley,28,29 Joey Shapiro Key,30 Nima Laal,7 Michael T. Lam,18,19 William G. Lamb,9 T. Joseph W. Lazio,31,27 Natalia Lewandowska,32 Tingting Liu,24 Duncan R. Lorimer,4,5 Jing Luo,33,∗ Ryan S. Lynch,34 Dustin R. Madison,4,5 Alexander McEwen,24 Maura A. McLaughlin,4,5 Chiara M. F. Mingarelli,35,8 Cherry Ng,36 David J. Nice,37 Stella Koch Ocker,6 Ken D. Olum,38 Timothy T. Pennucci,39 Nihan S. Pol,9 Scott M. Ransom,40 Paul S. Ray,41 Joseph D. Romano,42 Brent J. Shapiro-Albert,43,4,5 Xavier Siemens,7,24 Joseph Simon,31,27 Magdalena Siwek,44 Renée Spiewak,45 Ingrid H. Stairs,25 Daniel R. Stinebring,46 Kevin Stovall,15 Joseph K. Swiggum,37,† Jessica Sydnor,4,5 Stephen R. Taylor,9 Jacob E. Turner,4,5 Michele Vallisneri,31,27 Sarah J. Vigeland,24 Haley M. Wahl,4,5 Gregory Walsh,4,5 Caitlin A. Witt,§,29,47,4,5 Olivia Young,18,19
+(The NANOGrav Collaboration)
+1X-Ray Astrophysics Laboratory, NASA Goddard Space Flight Center, Code 662, Greenbelt, MD 20771, USA
+2Department of Physics and Astronomy, Widener University, One University Place, Chester, PA 19013, USA
+3Department of Physics, University of Florida, 2001 Museum Rd., Gainesville, FL 32611, USA
+4Department of Physics and Astronomy, West Virginia University, P.O. Box 6315, Morgantown, WV 26506, USA
+5Center for Gravitational Waves and Cosmology, West Virginia University, Chestnut Ridge Research Building, Morgantown, WV 26505, USA
+6Cornell Center for Astrophysics and Planetary Science and Department of Astronomy, Cornell University, Ithaca, NY 14853, USA
+7Department of Physics, Oregon State University, Corvallis, OR 97331, USA
+8Department of Physics, University of Connecticut, 196 Auditorium Road, U-3046, Storrs, CT 06269-3046, USA
+9Department of Physics and Astronomy, Vanderbilt University, 2301 Vanderbilt Place, Nashville, TN 37235, USA
+10Kavli Institute for Astronomy and Astrophysics, Peking University, Beijing, 100871 China
+11Department of Physics, Montana State University, Bozeman, MT 59717, USA
+12Department of Physics and Astronomy, Franklin & Marshall College, P.O. Box 3003, Lancaster, PA 17604, USA
+13University of Virginia, Department of Astronomy, P.O. Box 400325, Charlottesville, VA 22904, USA
+14George Mason University, Fairfax, VA 22030, resident at the Naval Research Laboratory, Washington, DC 20375, USA
+15National Radio Astronomy Observatory, 1003 Lopezville Rd., Socorro, NM 87801, USA
+16Department of Physics, Hillsdale College, 33 E. College Street, Hillsdale, MI 49242, USA
+17Eureka Scientific, 2452 Delmer Street, Suite 100, Oakland, CA 94602-3017, USA
+18School of Physics and Astronomy, Rochester Institute of Technology, Rochester, NY 14623, USA
+19Laboratory for Multiwavelength Astrophysics, Rochester Institute of Technology, Rochester, NY 14623, USA
+20Infinia ML, 202 Rigsbee Avenue, Durham NC, 27701
+21Department of Astronomy, University of Maryland, College Park, MD, 20742, USA
+22Center for Exploration and Space Studies (CRESST), NASA/GSFC, Greenbelt, MD 20771, USA
+23NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA
+24Center for Gravitation, Cosmology and Astrophysics, Department of Physics, University of Wisconsin-Milwaukee, P.O. Box 413, Milwaukee, WI 53201, USA
+25Department of Physics and Astronomy, University of British Columbia, 6224 Agricultural Road, Vancouver, BC V6T 1Z1, Canada
+26University of Michigan, Dept. of Astronomy, 1085 S. University Ave., Ann Arbor, MI, 48104, USA
+27Theoretical AstroPhysics Including Relativity (TAPIR), MC 350-17, California Institute of Technology, Pasadena, California 91125, USA
+28Department of Astronomy, University of California at Berkeley, Berkeley, CA 94720, USA
+29Center for Interdisciplinary Exploration and Research in Astrophysics (CIERA), Northwestern University, Evanston, IL 60208
+30University of Washington Bothell, 18115 Campus Way NE, Bothell, WA 98011, USA
+31Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA 91109, USA
+32Department of Physics, State University of New York at Oswego, Oswego, NY, 13126, USA
+33Department of Astronomy & Astrophysics, University of Toronto, 50 Saint George Street, Toronto, ON M5S 3H4, Canada
+34Green Bank Observatory, P.O. Box 2, Green Bank, WV 24944, USA
+35Center for Computational Astrophysics, Flatiron Institute, 162 5th Avenue, New York, New York, 10010, USA
+36Dunlap Institute for Astronomy and Astrophysics, University of Toronto, 50 St. George St., Toronto, ON M5S 3H4, Canada
+37Department of Physics, Lafayette College, Easton, PA 18042, USA
+38Institute of Cosmology, Department of Physics and Astronomy, Tufts University, Medford, MA 02155, USA
+39Institute of Physics, Eötvös Loránd University, Pázmány P. s. 1/A, 1117 Budapest, Hungary
+40National Radio Astronomy Observatory, 520 Edgemont Road, Charlottesville, VA 22903, USA
+41Space Science Division, Naval Research Laboratory, Washington, DC 20375-5352, USA
+42Department of Physics and Astronomy, Texas Tech University, Lubbock, TX 79409-1051, USA
+43Giant Army, 915A 17th Ave, Seattle WA 98122
+44Center for Astrophysics, Harvard University, Cambridge, MA 02138, USA
+45Jodrell Bank Centre for Astrophysics, Department of Physics and Astronomy, University of Manchester, Manchester M13 9PL, UK
+46Department of Physics and Astronomy, Oberlin College, Oberlin, OH 44074, USA
+47Adler Planetarium, 1300 S. DuSable Lake Shore Dr., Chicago, IL 60605, USA
+Corresponding author: Caitlin A. Witt§, caitlin.witt@nanograv.org
+arXiv:2301.03608v1 [astro-ph.GA] 9 Jan 2023
+ABSTRACT
+Pulsar timing array collaborations, such as the North American Nanohertz Observatory for Gravitational Waves (NANOGrav), are seeking nanohertz gravitational waves emitted by supermassive black hole binaries formed in the aftermath of galaxy mergers. We have searched for continuous waves from individual circular supermassive black hole binaries using NANOGrav's recent 12.5-year data set.
We created new methods to accurately model the uncertainties on pulsar distances in our analysis, and we implemented new techniques to account for a common red noise process in pulsar timing array data sets while searching for deterministic gravitational wave signals, including continuous waves. As we found no evidence for continuous waves in our data, we placed 95% upper limits on the strain amplitude of continuous waves emitted by these sources. At our most sensitive frequency of 7.65 nanohertz, we placed a sky-averaged limit of h0 < (6.82 ± 0.35) × 10^-15, and h0 < (2.66 ± 0.15) × 10^-15 in our most sensitive sky location. Finally, we placed a multi-messenger limit of M < (1.41 ± 0.02) × 10^9 M⊙ on the chirp mass of the supermassive black hole binary candidate 3C 66B.

Keywords: Gravitational waves – Methods: data analysis – Pulsars: general

∗ Author is deceased
† NANOGrav Physics Frontiers Center Postdoctoral Fellow

1. INTRODUCTION

Supermassive black hole binaries (SMBHBs) are expected to form in the aftermath of galaxy mergers, when the two constituent supermassive black holes eventually become gravitationally bound (Begelman et al. 1980). If they are able to reach an advanced stage of evolution, with sub-parsec orbital separations, these binaries are predicted to be among the brightest sources of low-frequency gravitational waves (GWs) in the universe, emitting at frequencies of ∼ 10^-9 – 10^-7 Hz.
The GWs emitted by discrete SMBHBs are known as continuous waves (CWs) due to their minimal frequency evolution, while the dominant source of nanohertz GWs is expected to be the stochastic background of GWs (GWB) that has contributions from the entire cosmic population of SMBHBs and potentially other sources (Sesana et al. 2004; Burke-Spolaor et al. 2019).

By carefully monitoring the radio pulses from stable millisecond pulsars (MSPs) over many years, pulsar timing arrays (PTAs) should be able to detect correlated fluctuations in the pulse times of arrival due to the influence of low-frequency GWs (Detweiler 1979; Foster & Backer 1990). There are multiple PTA collaborations currently operating; among them, the North American Nanohertz Observatory for Gravitational Waves (NANOGrav; McLaughlin 2013), the Parkes Pulsar Timing Array (PPTA; Hobbs 2013a,b), and the European Pulsar Timing Array (EPTA; Desvignes et al. 2016) have each produced multiple pulsar timing data sets with which to search for GWs. These groups, along with other pulsar timing projects, combine efforts as a consortium known as the International Pulsar Timing Array (IPTA; Verbiest et al. 2016a). These PTA data sets have enabled numerous searches for GWs from SMBHBs, as well as primordial GWs (e.g., Benetti et al. 2022), cosmic strings (e.g., Arzoumanian et al. 2018), and cosmological phase transitions (Arzoumanian et al. 2021a; Xue et al. 2021).

Modeling has suggested that the GWB signal from SMBHBs will be detected first (Rosado et al. 2015). While PTAs have not yet detected a GWB, they have placed steadily improving limits on such a signal (van Haasteren et al. 2011; Demorest et al. 2013; Shannon et al. 2013; Lentati et al. 2015; Shannon et al. 2015; Verbiest et al. 2016b; Arzoumanian et al. 2016, 2018) until around 2015, when published limits began to stabilize at a characteristic strain value of a few times 10^-15. In the NANOGrav 12.5-year data set (Alam et al. 2021a), PPTA second data release (Kerr et al. 2020), EPTA data release 2 (Chen et al. 2021), and IPTA data release 2 (Perera et al. 2019), not only does the upper limit no longer decrease, but a common red noise (CRN) process with characteristics similar to those predicted for a SMBHB-origin GWB was detected to high significance, albeit without evidence for the specific spatial correlation assumed for the GWB (Arzoumanian et al. 2020a; Goncharov et al. 2021; Antoniadis et al. 2022; Falxa et al. in prep).

While this common red-noise process is heartening for future GWB searches, it has sparked new challenges for CW searches, as the background takes the form of a noise process, which (like any noise process underlying a signal) will work to disrupt the sensitivity of CW searches. Over the past decades, all-sky and all-frequency CW searches have improved their sensitivity by several orders of magnitude in GW strain (e.g., Yardley et al. 2010; Arzoumanian et al. 2014; Zhu et al. 2014; Babak et al. 2016; Aggarwal et al. 2019), allowing the sensitivity horizon of PTAs to expand by several orders of magnitude. This has allowed the PTA horizon to include increasing numbers of specific systems of interest (e.g., Lommen & Backer 2001; Jenet et al. 2004; Aggarwal et al. 2019; Charisi et al. 2022). PTAs are likely to reach the sensitivities required to detect a CW soon after the GWB is detected (Rosado et al. 2015; Mingarelli et al. 2017; Kelley et al. 2018; Bécsy et al. 2022b), and we are working to revise and improve CW search methodologies as CW upper limits decrease.

In this paper, we present the results of an all-sky search for CWs from individual circular SMBHBs in the NANOGrav 12.5-year data set. This work is an extension of the searches performed in previous NANOGrav data sets (presented in Arzoumanian et al. 2014 and Aggarwal et al. 2019 for the 5- and 11-year data sets, respectively), and uses analogous techniques to the search for CWs in the IPTA data release 2 (Falxa et al. in prep). Our new search benefited from the use of the more sensitive 12.5-year data set.
Most critically, however, in this work we needed to account for the existence of an emerging common-noise signal in this data set, and understand the impact that this signal may have on CW sensitivity.

This paper is organized as follows. In section 2, we present an overview of the data used for our analysis, details of new pulsar distance modeling methods created for CW searches, and a description of the GW signals and analysis methods used throughout this paper. In section 3, we present the results of our GW searches, and in section 4, interpret their broader astrophysical context. For the busy reader, our main results can be summarized as follows:

- For accurate low-frequency CW searches, the CRN that has been seen in GWB searches must be accounted for in our signal modeling; otherwise, our detection metrics may report a false positive result.
- Once the CRN was taken into account, we found that no CWs were detected in the 12.5-year data set. With this knowledge, we placed stringent limits on the CW amplitude as a function of GW frequency. For the most sensitive frequency of 7.65 × 10^-9 Hz, we reach strain 95% upper limits of (6.82 ± 0.35) × 10^-15, and we also placed limits on the CW amplitude at this frequency as a function of sky location.
- While our all-sky sensitivity has improved with each subsequent NANOGrav data set, we found herein that for a portion of the sky, the upper limit at the most sensitive frequency of 7.65 × 10^-9 Hz is comparable to or worse than in previous data sets. Through extensive simulations, we linked this effect to the newly-detectable CRN process in the 12.5-year data set.
- We used these limits to make inferences about the local population of SMBHBs, and limited the distance to an SMBHB emitting at 7.65 × 10^-9 Hz to be greater than 86.65 Mpc for a 10^9 M⊙ binary in the most sensitive sky location.
- We used multi-messenger techniques to update limits on the chirp mass of the SMBHB candidate 3C 66B to be less than (1.41 ± 0.02) × 10^9 M⊙ and placed new limits on the chirp mass of SMBHB candidate HS 1630+2355 to be less than (1.28 ± 0.03) × 10^10 M⊙.

In section 5, we discuss the implications of these results.
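The distance limit in the summary above can be sanity-checked against the standard strain amplitude of a circular binary, h0 = 2 (G Mc)^(5/3) (π f)^(2/3) / (c^4 d), inverted for distance. The sketch below is only an order-of-magnitude check using the quoted numbers, not the paper's full analysis; the constants and the closed-form inversion are our own additions.

```python
import math

# Standard physical constants (SI), included for self-containment.
G = 6.674e-11      # m^3 kg^-1 s^-2
C = 2.998e8        # m s^-1
M_SUN = 1.989e30   # kg
MPC = 3.086e22     # m

def strain_amplitude(chirp_mass_kg, f_gw_hz, distance_m):
    """h0 of a circular SMBHB: 2 (G Mc)^(5/3) (pi f)^(2/3) / (c^4 d)."""
    return (2.0 * (G * chirp_mass_kg) ** (5 / 3)
            * (math.pi * f_gw_hz) ** (2 / 3) / (C ** 4 * distance_m))

# Invert the best-sky-location limit h0 < 2.66e-15 at 7.65 nHz for a
# 1e9 Msun chirp mass; the implied minimum distance comes out close to
# the 86.65 Mpc quoted in the text.
h0_limit = 2.66e-15
mc = 1e9 * M_SUN
f_gw = 7.65e-9
d_min_m = (2.0 * (G * mc) ** (5 / 3) * (math.pi * f_gw) ** (2 / 3)
           / (C ** 4 * h0_limit))
print(d_min_m / MPC)  # ~87 Mpc
```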
In section 6, we summarize our conclusions.

2. METHODS

2.1. The 12.5-year Data Set

We analyzed the NANOGrav 12.5-year data set, originally published as Alam et al. (2021a,b), which consists of times-of-arrival (TOAs) and timing models from 47 pulsars. Two versions of the data set were created from the original observations, taken between 2004 and 2017, using independent analyses. Here, we make use of the narrowband version of the data set (Alam et al. 2021a).
This adds 2 pulsars and 1.5 years of observations over the previous 11-year data set. For GW analyses, we require the pulsars to have a timing baseline of at least 3 years; therefore, we use only 45 of the 47 pulsars included in the full data set. However, the 11-year data set included only 34 pulsars that could be used in GW analyses, so this addition, which includes a factor of ∼ 1.5 increase in the number of pulse TOAs, represents a significant addition of data, increasing our sensitivity. It is important to note that the 12.5-year data set is not merely an addition of TOAs to previous releases, but a full re-analysis with an updated pipeline, described in detail in Alam et al. (2021a).
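The timing-baseline requirement described above amounts to a simple selection filter. A minimal sketch, using hypothetical pulsar names and baselines standing in for the 47-pulsar catalog:

```python
# Hypothetical (name, timing-baseline-in-years) pairs; the real data set
# has 47 pulsars, of which 45 pass the 3-year cut.
pulsars = {"J0000+0000": 12.5, "J0001+0001": 2.1, "J0002+0002": 8.0}

MIN_BASELINE_YR = 3.0  # minimum baseline required for GW analyses
usable = {name for name, yr in pulsars.items() if yr >= MIN_BASELINE_YR}
print(sorted(usable))
```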
Thus, our search also benefited from improved timing precision for pulsars shared with previous data sets.

2.2. Signal Model

As in previous NANOGrav searches for continuous gravitational waves, we will describe the effect of an individual SMBHB on a pulsar's TOAs and its timing model. A starting point is the residuals, δt, obtained after subtracting a basic timing model (which excludes noise and GW parameters) from the measured arrival times. While the methods remain nearly identical to previous iterations, slight alterations have been made to improve consistency with other work in the field, to reflect more recent data, and to include the CRN in the CW search. As such, we will lay out the methods with particular focus on any instances that have changed since NANOGrav's most recent CW search (Aggarwal et al. 2019).
Note that throughout this paper, we use units where G = c = 1, cosmology calculations assume H0 = 69.32 km s⁻¹ Mpc⁻¹, and the GW derivations assume General Relativity. The pulsar residuals can be separated into multiple components as

    \delta t = M\epsilon + n_{\mathrm{white}} + n_{\mathrm{red}} + s,    (1)

where M is the design matrix, which describes the linearized timing model, and ϵ is a vector of the timing model parameter offsets. This term allows the timing model parameters of each pulsar to be adjusted in accordance with the presence of any additional signals. The variables n_white and n_red refer to vectors describing the pulsar white and red noise, respectively, and s is a vector of the GW-induced signal present in the residuals.

2.2.1. CW Signal

For a GW source located at right ascension α and declination δ, we define the polar angle θ = π/2 − δ and azimuthal angle φ = α. The strain of GWs emitted from such a source can be written in terms of two polarizations as

    h_{ab}(t, \hat\Omega) = e^{+}_{ab}(\hat\Omega)\, h_{+}(t, \hat\Omega) + e^{\times}_{ab}(\hat\Omega)\, h_{\times}(t, \hat\Omega),    (2)

where Ω̂ is a unit vector pointing from the GW source to the Earth (along the direction of propagation), h_{+,×} are the polarization amplitudes, and e^{+,×}_{ab} are the polarization tensors. These can be written in the solar system barycenter frame as

    e^{+}_{ab} = \hat p_a \hat p_b - \hat q_a \hat q_b,
    e^{\times}_{ab} = \hat p_a \hat q_b + \hat q_a \hat p_b,    (3)

and are constructed from the basis vectors

    \hat n = (\sin\theta\cos\varphi,\ \sin\theta\sin\varphi,\ \cos\theta) = -\hat\Omega,
    \hat p = (\cos\psi\cos\theta\cos\varphi - \sin\psi\sin\varphi,\ \cos\psi\cos\theta\sin\varphi + \sin\psi\cos\varphi,\ -\cos\psi\sin\theta),
    \hat q = (\sin\psi\cos\theta\cos\varphi + \cos\psi\sin\varphi,\ \sin\psi\cos\theta\sin\varphi - \cos\psi\cos\varphi,\ -\sin\psi\sin\theta).    (4)

Note that this basis is different than that used in Aggarwal et al. (2019), to maintain better consistency with previous references and the standards used by other GW detectors. Differences can be reduced to a rotation of the frame by an angle equivalent to the GW polarization angle ψ. These polarization tensors are used to construct the antenna pattern function F^{+,×}(Ω̂), which describes the response of the pulsar (at unit vector û) to the GW source, as in Taylor et al. (2016), where

    F^{A}(\hat\Omega) \equiv \frac{1}{2}\,\frac{\hat u^a \hat u^b}{1 + \hat\Omega \cdot \hat u}\, e^{A}_{ab}(\hat\Omega).    (5)

Now, we can write the signal s induced by the GW as seen in the pulsar's residuals as

    s(t, \hat\Omega) = F^{+}(\hat\Omega)\,\Delta s_{+}(t) + F^{\times}(\hat\Omega)\,\Delta s_{\times}(t),    (6)

where Δs_{+,×} is the difference between the signal induced at the Earth (the "Earth term") and at the pulsar (the "pulsar term"). This can be written as

    \Delta s_{+,\times}(t) = s_{+,\times}(t_p) - s_{+,\times}(t),    (7)

where t and t_p represent the time when the GW passes the Earth and the pulsar, respectively. These times can be related geometrically by

    t_p = t - L\,(1 + \hat\Omega \cdot \hat u),    (8)

where û is the line-of-sight vector to the pulsar and L is the distance to the pulsar (see section 2.3.4 for further discussion of this value). For a circular binary at zeroth post-Newtonian (0-PN) order, s_{+,×} can be written as

    s_{+}(t) = \frac{\mathcal{M}^{5/3}}{d_L\,\omega(t)^{1/3}}\left[-\sin 2\Phi(t)\left(1 + \cos^2\iota\right)\right],
    s_{\times}(t) = \frac{\mathcal{M}^{5/3}}{d_L\,\omega(t)^{1/3}}\left[2\cos 2\Phi(t)\cos\iota\right],    (9)

where ι is the inclination angle of the SMBHB, d_L is the luminosity distance to the source, ω(t) and Φ(t) are the time-dependent angular orbital frequency and phase, respectively, and \mathcal{M} \equiv (m_1 m_2)^{3/5} / (m_1 + m_2)^{1/5} is a combination of the two black hole masses known as the chirp mass. Again, note that the forms of these signals have been reorganized compared to those used in Aggarwal et al. (2019); due to the rotated frame of the antenna pattern functions now in use, they are equivalent. The variables \mathcal{M} and ω refer to the redshifted values of these quantities, which relate to the rest-frame versions \mathcal{M}_r and ω_r as

    \mathcal{M}_r = \frac{\mathcal{M}}{1 + z}, \qquad \omega_r = \omega\,(1 + z).    (10)

However, PTAs are currently only sensitive to individual SMBHBs in the local universe, where (1 + z) ∼ 1. For a CW, the initial orbital angular frequency ω₀ is related to the GW frequency by ω₀ = π f_GW, where ω₀ = ω(t₀). For this search, we define the reference time t₀ as MJD 57933 (2017 June 29), the last observation date for the 12.5-year data set. The time-dependent orbital phase and frequency of the binary are given by

    \Phi(t) = \Phi_0 + \frac{1}{32}\,\mathcal{M}^{-5/3}\left[\omega_0^{-5/3} - \omega(t)^{-5/3}\right],
    \omega(t) = \omega_0\left(1 - \frac{256}{5}\,\mathcal{M}^{5/3}\,\omega_0^{8/3}\, t\right)^{-3/8},    (11)

where Φ₀ refers to the initial orbital phase (Arzoumanian et al. 2014). To account for the evolution of high-chirp-mass binaries over our observations, rather than assuming that there is no frequency evolution, we use the full expression for ω(t) as in Aggarwal et al. (2019).
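The frequency and phase evolution of Equation (11) can be evaluated directly in geometric units (G = c = 1, so masses and distances are measured in seconds). The sketch below is illustrative only: the solar-mass-in-seconds constant and the example binary parameters (10⁹ M⊙ chirp mass, 10 nHz GW frequency) are our own choices, not values from the paper.

```python
import math

MSUN_S = 4.92549e-6  # one solar mass in seconds (G = c = 1 geometric units)

def omega_t(t, omega0, mchirp_s):
    """Orbital angular frequency omega(t) of Eq. (11); t in seconds from t0."""
    return omega0 * (1.0 - (256.0 / 5.0) * mchirp_s**(5.0 / 3.0)
                     * omega0**(8.0 / 3.0) * t) ** (-3.0 / 8.0)

def phi_t(t, phi0, omega0, mchirp_s):
    """Orbital phase Phi(t) of Eq. (11)."""
    return phi0 + (1.0 / 32.0) * mchirp_s**(-5.0 / 3.0) * (
        omega0**(-5.0 / 3.0) - omega_t(t, omega0, mchirp_s)**(-5.0 / 3.0))

# Example: a 10^9 Msun chirp-mass binary emitting at f_GW = 10 nHz (omega0 = pi * f_GW)
mchirp = 1e9 * MSUN_S
omega0 = math.pi * 1e-8
year = 365.25 * 86400.0

# Over a 12.9-year baseline the orbital frequency drifts upward only slightly,
# which is why low-chirp-mass sources look nearly monochromatic to a PTA
print(omega_t(12.9 * year, omega0, mchirp) / omega0)
```

At t = 0 the expressions reduce to ω(0) = ω₀ and Φ(0) = Φ₀, which is a quick sanity check on any implementation.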
2.2.2. Noise Model

For each individual pulsar, we model both white and red noise. We use a white noise model that is identical to that used in previous NANOGrav analyses, using three parameters: EFAC, EQUAD, and ECORR. EFAC scales the template-fitting TOA uncertainties induced by finite pulse signal-to-noise ratios by a multiplicative factor, EQUAD adds white noise in quadrature, and ECORR describes white noise that is correlated across TOAs derived from data collected simultaneously (Lam et al. 2017). For consistency with previous NANOGrav analyses, to model individual pulsar red noise, the noise spectrum is divided into 30 linearly spaced bins, ranging from 1/T_obs to 30/T_obs, where T_obs is the total observation baseline for each pulsar.
Then, the power spectral density of the red noise is fit to a power-law model as in Shannon & Cordes (2010); Lam et al. (2017), where

    P(f) = \frac{A_{\mathrm{red}}^2}{12\pi^2}\left(\frac{f}{f_{\mathrm{yr}}}\right)^{-\gamma_{\mathrm{red}}} \mathrm{yr}^3.    (12)

Here, f_yr ≡ 1/(1 year), A_red is the red noise amplitude, and γ_red is the power-law spectral index. The prior on A_red is log-uniform in the range [−20, −11], while the prior on γ_red is uniform in the range [0, 7]. As mentioned above, for the first time, a CRN signal is now detectable in the 12.5-year data set (Arzoumanian et al. 2020a). Because of this, we included a CRN term in our signal model for a portion of our analyses.
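The power-law model of Equation (12) is straightforward to evaluate numerically. A minimal sketch, with an illustrative amplitude and spectral index of our own choosing (log₁₀A = −14, γ = 4, and a 12.9-yr baseline), shows how steep spectra concentrate power at the lowest frequency bins:

```python
import math

FYR = 1.0 / (365.25 * 86400.0)  # reference frequency f_yr = 1/(1 year), in Hz

def red_noise_psd(f, log10_A, gamma):
    """Power-law red-noise PSD of Eq. (12); result carries the yr^3 unit factor."""
    A = 10.0 ** log10_A
    return (A**2 / (12.0 * math.pi**2)) * (f / FYR) ** (-gamma)

# Evaluate at the lowest (1/T_obs) and highest (30/T_obs) of the 30 bins
# for an illustrative T_obs = 12.9 yr
low = red_noise_psd(1.0 * FYR / 12.9, -14.0, 4.0)
high = red_noise_psd(30.0 * FYR / 12.9, -14.0, 4.0)
print(low / high)  # (30)^gamma = 8.1e5 for gamma = 4
```

The ratio between the lowest and highest bins scales as 30^γ, so for γ near the value expected from SMBHB backgrounds (γ = 13/3) nearly all of the red-noise power sits in the first few bins.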
The results of searches that only model a CW necessitated this addition, and are described in detail in section 3. The power spectral density of the CRN,

    P(f) = \frac{A_{\mathrm{CRN}}^2}{12\pi^2}\left(\frac{f}{f_{\mathrm{yr}}}\right)^{-\gamma_{\mathrm{CRN}}} \mathrm{yr}^3,    (13)

takes the same form as that of the pulsar red noise in Equation 12, but with an amplitude A_CRN and spectral index γ_CRN that are common to all of the pulsars in the array.

2.3. Bayesian Methods

We utilized Bayesian inference techniques to determine the posterior distributions of GW parameters. In previous CW analyses (Arzoumanian et al. 2014; Aggarwal et al. 2019), these results were compared to a frequentist metric, the Fp statistic (Ellis et al. 2012), to confirm our key results. However, as this method does not currently account for a common process other than a CW in the data, more development will be necessary to produce reliable frequentist results on the 12.5-year data set. Therefore, in this work, we will focus solely on the Bayesian searches, and the frequentist analyses will be presented in a future work. In each analysis, we include the BayesEphem model (Vallisneri et al. 2020) to account for the uncertainties in the Solar System ephemeris, which, as first described in Arzoumanian et al. (2018), can have large impacts on the computation of GW upper limits with PTAs. We used DE438 (Folkner & Park 2018) plus BayesEphem to transform from individual observatory reference frames to an inertial frame centered at the Solar System Barycenter.
As in previous NANOGrav CW searches, we use the enterprise (Ellis et al. 2019) package to construct the priors and evaluate the likelihood, which takes the same form as in Aggarwal et al. (2019) and Arzoumanian et al. (2014). The Markov Chain Monte Carlo (MCMC) sampler package PTMCMCSampler (Ellis & van Haasteren 2017) was used to explore the parameter space.
The CW signal model can be described by nine global parameters,

    \{\theta, \varphi, f_{\mathrm{GW}}, \Phi_0, \psi, \iota, \mathcal{M}, d_L, h_0\},    (14)

which describe the circular SMBHB's: position on the sky (θ, φ); GW frequency, related to the orbital frequency at some reference time (f_GW); orbital phase at some reference time (Φ₀); GW polarization angle (ψ); orbital inclination (ι); chirp mass (\mathcal{M}); luminosity distance (d_L); and strain amplitude (h₀), which is related to the chirp mass, GW frequency, and luminosity distance. Since h₀ can be defined as

    h_0 = \frac{2\,\mathcal{M}^{5/3}\,(\pi f_{\mathrm{GW}})^{2/3}}{d_L},    (15)

there is a degeneracy between h₀, \mathcal{M}, f_GW, and d_L, and therefore only eight of these parameters are required to fully describe the global CW signal.
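Equation (15) can be checked numerically in geometric units, where a chirp mass in solar masses and a distance in Mpc are both converted to seconds. The example source below (10⁹ M⊙ at 100 Mpc, emitting at 8 nHz) is our own illustrative choice; the conversion constants are standard values.

```python
import math

MSUN_S = 4.92549e-6   # one solar mass in seconds (G = c = 1)
MPC_S = 1.02927e14    # one megaparsec in seconds (light-travel time)

def strain_amplitude(mchirp_msun, f_gw_hz, d_l_mpc):
    """Strain amplitude h0 of Eq. (15), evaluated in geometric units."""
    mchirp = mchirp_msun * MSUN_S
    d_l = d_l_mpc * MPC_S
    return 2.0 * mchirp**(5.0 / 3.0) * (math.pi * f_gw_hz)**(2.0 / 3.0) / d_l

# A 10^9 Msun chirp-mass binary at 100 Mpc emitting at 8 nHz yields h0 of
# order 1e-15, consistent with the "order -15" sensitivity quoted later
h0 = strain_amplitude(1e9, 8e-9, 100.0)
print(h0)
```

The degeneracy noted in the text is visible here: doubling \mathcal{M}^{5/3} while doubling d_L leaves h₀ unchanged, so only the combination in Equation (15) is constrained by the strain.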
The following types of searches use a variety of prior setups to sample the necessary eight global parameters, and are described below and summarized in Table 1. As in Aggarwal et al. (2019), to determine if a CW has been detected by any of our analyses, we first performed a detection analysis with the priors described in Table 1, with the key difference between this and upper-limit analyses being a log-uniform prior on the strain amplitude of the CW. Then, we calculated the Bayes factor using the Savage-Dickey formula (Dickey 1971),

    B_{10} \equiv \frac{\mathrm{evidence}[H_1]}{\mathrm{evidence}[H_0]} = \frac{p(h_0 = 0 \mid H_1)}{p(h_0 = 0 \mid D, H_1)}.    (16)

Here, H₁ is the model with a CW, H₀ is the model without one, p(h₀ = 0 | H₁) is the prior at h₀ = 0, and p(h₀ = 0 | D, H₁) is the posterior at h₀ = 0. Since H₁ and H₀ are nested models (i.e., H₀ is H₁ : h₀ = 0), we used the Savage-Dickey formula to estimate p(h₀ = 0 | D, H₁) as the average fraction of samples in the lowest-amplitude bin in a histogram of h₀ samples for a range of bin sizes. We then computed the one-sigma error on the Bayes factor as

    \sigma = \frac{B_{10}}{\sqrt{n}},    (17)

where n is the number of samples in the lowest-amplitude bin. As with the Bayes factor values, the average error is computed for a range of histogram bin sizes. Throughout this work, we computed 95% upper limits as the 95th percentile of relevant strain (or chirp mass, for multi-messenger analyses) posterior distributions. For these analyses, a uniform prior on the strain amplitude is used, which translates to a linear-exponential (LinExp) prior on log₁₀ h₀. The error on the 95% upper limit, due to the finite number of samples, is calculated as

    \sigma_{\mathrm{UL}} = \frac{\sqrt{x(1 - x)/N_s}}{p\left(h_0 = h_0^{95\%} \mid D\right)},    (18)

where x = 0.95 and N_s is the number of effective samples in the MCMC chain.
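A minimal numeric sketch of the Savage-Dickey estimate of Equations (16)-(17): the posterior density at h₀ = 0 is approximated by the fraction of MCMC samples in the lowest-amplitude histogram bin. The paper averages over a range of bin sizes, which we omit here; the toy posterior (uniform samples, matching uniform prior) is our own construction chosen so the Bayes factor should come out near 1.

```python
import math
import random

def savage_dickey(h0_samples, prior_density_at_zero, nbins):
    """Estimate B10 = p(h0=0|H1) / p(h0=0|D,H1) from posterior samples.

    The posterior density at h0 = 0 is approximated by the fraction of
    samples in the lowest-amplitude bin, divided by the bin width."""
    width = max(h0_samples) / nbins
    n = sum(1 for h in h0_samples if h < width)      # samples in the lowest bin
    posterior_at_zero = n / (len(h0_samples) * width)
    b10 = prior_density_at_zero / posterior_at_zero
    sigma = b10 / math.sqrt(n)                        # one-sigma error, Eq. (17)
    return b10, sigma

random.seed(0)
# Toy posterior: uniform on [0, 1e-15] (no preference for nonzero h0), with a
# matching uniform prior density of 1/1e-15, so B10 should be close to 1
samples = [random.uniform(0.0, 1e-15) for _ in range(100_000)]
b10, sigma = savage_dickey(samples, prior_density_at_zero=1.0 / 1e-15, nbins=50)
print(b10, sigma)
```

A posterior that piles up away from zero would empty the lowest bin, driving B₁₀ (and the evidence for a CW) up; a posterior railed against zero drives it below 1.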
2.3.1. All-Sky Searches

To search for GWs from SMBHBs located in any direction, we use uniform priors on the source sky position (cos θ, φ), as well as on the cosine of the source inclination cos ι, the polarization angle ψ, and the GW phase Φ₀. We used log-uniform priors on h₀ for detection analyses, and uniform priors on h₀ for upper-limit analyses, so as to set the most conservative upper limit. For both analysis types, priors on log₁₀(h₀) span the range [−18, −11], which accounts for an over-conservative range around the sensitivity of the most recent data sets (order −15), and the minimum of which is well below our sensitivity. We performed many searches at fixed values of f_GW, to evaluate detection statistics and our sensitivity across the entire nanohertz GW band.
The lowest frequency value was set by the time span of our data set, f_GW = 1/(12.9 years) = 2.45 × 10⁻⁹ Hz. The highest frequency value is limited by the observation cadence of our data (approximately one observation per 2–4 weeks). However, SMBHBs at that frequency, in the mass range where their strains would be large enough to be detectable by PTAs, have exceedingly short inspiral timescales (a few weeks up to ∼3 months). Thus, they are unlikely to be detectable in our data set (Islo et al. 2019; Aggarwal et al. 2020). Therefore, we set our maximum frequency to 3.178 × 10⁻⁷ Hz (equivalent to one GW cycle every ∼36 days and a GW inspiral time of ∼34 days). This is the same high-frequency cutoff value used in Arzoumanian et al. (2014); Aggarwal et al. (2019). For most of the frequency band, we searched over log₁₀(\mathcal{M}/M_⊙) with a log-uniform prior with a range of [7, 10]. However, for very high-frequency sources, we limit the maximum value of the prior to account for high-chirp-mass binaries never emitting GWs at the highest frequencies in our band, as they will have merged prior to emitting GWs at the searched frequency. This cutoff is relevant at f_GW ≥ 1.913 × 10⁻⁷ Hz.
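The band edges and the chirp-mass cutoff frequency can all be reproduced from the quantities quoted above. The sketch below uses the ISCO-based cutoff formula with q = 1, taking 6^{3/2}πf_GW as the inverse of the maximum chirp mass in geometric units; the solar-mass constant is a standard value, not from the paper.

```python
import math

YEAR_S = 365.25 * 86400.0
DAY_S = 86400.0
MSUN_S = 4.92549e-6  # one solar mass in seconds (G = c = 1)

# Lowest searched frequency: one cycle over the 12.9-yr data span
f_low = 1.0 / (12.9 * YEAR_S)
print(f_low)                  # ~2.45e-9 Hz

# Highest searched frequency corresponds to one GW cycle every ~36 days
f_high = 3.178e-7
print(1.0 / f_high / DAY_S)   # ~36.4 days per cycle

def mchirp_max_msun(f_gw, q=1.0):
    """ISCO-based chirp-mass cutoff, converted to solar masses."""
    m_s = (1.0 / (6.0**1.5 * math.pi * f_gw)) * (q / (1.0 + q)**2) ** (3.0 / 5.0)
    return m_s / MSUN_S

# The cutoff meets the prior upper bound log10(M/Msun) = 10 at f_GW = 1.913e-7 Hz,
# which is why the prior is only truncated above that frequency
print(math.log10(mchirp_max_msun(1.913e-7)))  # ~10.0
```

Below 1.913 × 10⁻⁷ Hz the ISCO cutoff exceeds 10¹⁰ M⊙ and the full [7, 10] prior range survives, so no truncation is needed there.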
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' Assum- ing binaries merge when the orbital frequency is equal to the innermost stable circular orbit (ISCO) frequency, M must satisfy Mmax ≤ 1 63/2πfGW � q (1 + q)2 �3/5 , (19) where q is the SMBHB mass ratio.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' Here, we calculated the chirp mass cutoff for q = 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' Sky Map Due to the non-uniform distribution of pulsars on the sky, the NANOGrav PTA is not equally sensitive in all directions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' To analyze the differences in sensitivity, once detection analyses were completed, we placed upper lim- its on 768 pixels distributed isotropically across the sky using healpy (G´orski et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' 2005;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' Zonca et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' 2019);' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' each pixel covers an area of 53.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='72 square degrees.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' This value is chosen to optimize healpy’s requirements for map transformations with our desired resolution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' We allowed the sampler to search a uniform prior across each of the 768 pixels, so as to still sample the entire sky across the entire analysis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' Due to the large computational cost required to con- duct 768 independent runs, the sky map is created at only a single frequency, and only upper limits are com- puted.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' We selected 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='65 × 10−9 Hz, as it was the most sensitive in the sky-averaged analysis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' As this is in the low-frequency regime where we expect the inclusion of the CRN to be significant, it is included in our signal model.' 
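The chirp-mass cutoff of Eq. (19) and the pixelization above can be checked numerically. This is a minimal sketch (function names are ours, not from the paper): the bound is evaluated in geometrized units and converted to solar masses, and the quoted 53.72 deg² per pixel follows from dividing the full sky among 768 equal-area pixels (a HEALPix tiling with nside = 8, since npix = 12 nside²).

```python
import math

T_SUN = 4.925490947e-6  # GM_sun / c^3 in seconds: one solar mass in geometrized units


def chirp_mass_cutoff(f_gw, q=1.0):
    """Upper bound on the chirp mass (solar masses) from Eq. (19):
    heavier binaries merge (orbital frequency reaches ISCO) before
    emitting GWs at the searched frequency f_gw (Hz)."""
    m_sec = (q / (1.0 + q) ** 2) ** 0.6 / (6.0 ** 1.5 * math.pi * f_gw)
    return m_sec / T_SUN


def pixel_area_deg2(npix=768):
    """Area per pixel (deg^2) for an equal-area tiling of the full sky."""
    return 4.0 * math.pi * (180.0 / math.pi) ** 2 / npix
```

At f_gw = 1.913 × 10−7 Hz and q = 1, log10(M_max/M⊙) ≈ 10.0, matching the frequency above which the cutoff begins to trim the [7, 10] prior; pixel_area_deg2(768) ≈ 53.7 deg², consistent with the quoted value.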
All other modeling is done identically to section 2.3.1, and is summarized in Table 1.

2.3.3. Targeted Search

In addition to the two variations of searches described above, we also perform a targeted search for two known SMBHB candidates, 3C 66B and HS 1630+2355. Rather than a search for a generic SMBHB within a nearby galaxy cluster, as was done in Aggarwal et al. (2019) and Arzoumanian et al. (2021b), here we targeted these binary candidates directly. 3C 66B was the subject of Arzoumanian et al. (2020b), and was first identified because of observed orbital motion in the AGN core (Sudou et al. 2003). Here, we were able to provide an updated analysis with the addition of new data included in the 12.5-year data set. HS 1630+2355 was first identified as a periodic quasar in Graham et al. (2015), and was identified as a top PTA CW candidate in Xin et al. (2021) due to its location near our best-timed pulsars. For the targeted search, we perform detection and upper-limit analyses in the same way as in section 2.3.1, with a few differences in the model priors. Because we know the sky location and luminosity distance to 3C 66B, as well as a frequency estimate, these parameters are set to constants in this search. This allows us to place constraints directly on the (observer-frame) chirp mass of the binary, rather than its GW strain amplitude. For a detection analysis, the prior on log10(M/M⊙) is log-uniform in the range [7, 10], while for upper-limit analyses, the prior is uniform over this range. The remaining priors are identical to the above analyses, and are summarized in Table 1.

Table 1. CW parameter priors for each analysis.

Parameter  | All-Sky Detection | All-Sky Upper Limit | Sky Map Upper Limit | Targeted Detection | Targeted Upper Limit
CRN        | Y/N               | Y/N                 | Y                   | Y/N                | Y/N
log10 h    | Uniform(–18,–11)  | LinExp(–18,–11)     | LinExp(–18,–11)     | –                  | –
log10 M    | Uniform(7,Mmax)   | Uniform(7,Mmax)     | Uniform(7,Mmax)     | Uniform(7,Mmax)    | LinExp(7,Mmax)
log10 dL   | –                 | –                   | –                   | Constant           | Constant
log10 fGW  | Constant (many)   | Constant (many)     | Constant (single)   | Constant           | Constant
φ          | Uniform(0,2π)     | Uniform(0,2π)       | Uniform(pixel)      | Constant           | Constant
cos θ      | Uniform(–1,1)     | Uniform(–1,1)       | Uniform(pixel)      | Constant           | Constant
ψ          | Uniform(0,π)      | Uniform(0,π)        | Uniform(0,π)        | Uniform(0,π)       | Uniform(0,π)
Φ0         | Uniform(0,2π)     | Uniform(0,2π)       | Uniform(0,2π)       | Uniform(0,2π)      | Uniform(0,2π)
cos ι      | Uniform(–1,1)     | Uniform(–1,1)       | Uniform(–1,1)       | Uniform(–1,1)      | Uniform(–1,1)

2.3.4. Pulsar Distance Priors

In this work, we adopted a data-driven approach to handle the large uncertainties on pulsar distance measurements, which, in addition to a phase at each pulsar, affect the modeling of the pulsar terms of the CW signal. As in previous searches, the pulsar distance was used as a free parameter in the search. This allowed us to marginalize over the pulsar distance and avoid incorrect modeling of the signal at the location of the pulsar. In previous versions of this search (e.g. Aggarwal et al. 2019; Arzoumanian et al. 2020b), the pulsar distance prior was constructed from a Gaussian scaled to the parallax distance and associated uncertainty listed in Verbiest et al. (2012); if no distance was listed, a value of 1.0 ± 0.2 kpc was assumed. While this assumption is reasonable while placing upper limits (see discussion within Arzoumanian et al. 2020b), as the PTA reaches sensitivities where a detection is nearly possible, an improvement was needed. In this work, every pulsar distance prior was constructed from a measurement or estimate. If a pulsar had a significant independent parallax measurement¹, such as from Very Long Baseline Interferometry (VLBI), or a timing parallax measured in the 12.5-year data set, this value was used to construct a prior on the pulsar distance (L):

p(L) = \frac{1}{\sqrt{2\pi}\,\sigma_\varpi L^2} \exp\left[-\frac{(\varpi - L^{-1})^2}{2\sigma_\varpi^2}\right],   (20)

which inverts the approximately Gaussian shape of a parallax prior to describe the prior for distance (Vigeland & Vallisneri 2014). Here, significance was defined by the parallax measurement (ϖ) having an associated uncertainty (σϖ) of less than 30%, so as to avoid the introduction of any errors due to the Lutz-Kelker bias (Lutz & Kelker 1973).
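A sketch of Eq. (20) and of the measurement combination described below it, with assumptions flagged: the prior is left unnormalized, and the "weighted average" is taken to be the standard inverse-variance combination, which the text does not specify.

```python
import math


def parallax_distance_prior(L, varpi, sigma_varpi):
    """Unnormalized p(L) of Eq. (20): a Gaussian in parallax (varpi ~ 1/L),
    mapped to distance L. Units must match, e.g. varpi in kpc^-1 and L in kpc."""
    return math.exp(-(varpi - 1.0 / L) ** 2 / (2.0 * sigma_varpi ** 2)) / (
        math.sqrt(2.0 * math.pi) * sigma_varpi * L ** 2
    )


def combine_parallaxes(values, sigmas):
    """Inverse-variance weighted average of several parallax measurements
    (an assumption: the paper only says 'weighted average'). High-quality
    (small-sigma) measurements dominate the combined value."""
    weights = [1.0 / s ** 2 for s in sigmas]
    varpi = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    sigma = math.sqrt(1.0 / sum(weights))
    return varpi, sigma
```

The combined uncertainty is always smaller than the best individual one, which is why the highest-quality measurements contribute the most to the resulting prior.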
If multiple measurements of sufficient quality existed, these values and uncertainties were combined with a weighted average before being used to construct the parallax-distance prior, which ensures that the highest-quality measurements contribute the most to the resulting prior.

¹ http://hosting.astro.cornell.edu/research/parallax/, with values compiled from Ding et al. (2020); Jennings et al. (2018); Deller et al. (2019); Guillemot et al. (2016); Stovall et al. (2014); Abdo et al. (2013); Freire et al. (2012); Verbiest et al. (2009); Lazaridis et al. (2009); Chatterjee et al. (2009); Hotan et al. (2006); Lommen et al. (2006); Jacoby et al. (2005); Splaver et al. (2005); Löhmer et al. (2004); Toscano et al. (1999); Camilo et al. (1994).

If there are no parallax measurements that could be used to calculate the pulsar's distance, the pulsar's dispersion measure (DM) was used to construct a distance estimate using NE2001 (Cordes & Lazio 2002) and, subsequently, the distance prior. Since these values are only an estimate, we constructed a broad, nearly uniform prior around the DM-distance value with a 20% uncertainty (Cordes & Lazio 2002; Jones et al. 2017; Lam et al. 2016), with the shape

p(L) = \begin{cases} \text{Half-Gaussian} & \text{if } L < 0.8\,L_{\rm DM} \\ \text{Uniform} & \text{if } 0.8\,L_{\rm DM} \leq L \leq 1.2\,L_{\rm DM} \\ \text{Half-Gaussian} & \text{if } L > 1.2\,L_{\rm DM} \end{cases}   (21)

Here, the Half-Gaussian additions have standard deviations of 0.25× the DM-distance uncertainty. Unlike a sharp boundary, these additions allowed the sampler to move into the edges of this prior range, which accounted for any differences in distance estimates from alternative electron density models, such as Yao et al. (2017). While pulsar distance priors will still only induce minor influences on the results of an upper-limit analysis (Arzoumanian et al. 2020b), by constructing new priors to accurately handle pulsar distance measurements and estimates, we have prepared our methods for a future detection of a CW, which will be more reliant on the pulsar term of the signal than upper-limit evaluations. These values and the priors used are compiled in Table 2.
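The piecewise prior of Eq. (21) can be sketched as follows (a minimal, unnormalized version; the shoulder width is 0.25× the 20% DM-distance uncertainty, as stated above):

```python
import math


def dm_distance_prior(L, L_dm):
    """Unnormalized p(L) of Eq. (21): uniform within +/-20% of the NE2001
    DM distance L_dm, with Half-Gaussian shoulders whose standard deviation
    is 0.25x the DM-distance uncertainty (0.2 * L_dm)."""
    sigma = 0.25 * (0.2 * L_dm)
    if L < 0.8 * L_dm:
        return math.exp(-((L - 0.8 * L_dm) ** 2) / (2.0 * sigma ** 2))
    if L > 1.2 * L_dm:
        return math.exp(-((L - 1.2 * L_dm) ** 2) / (2.0 * sigma ** 2))
    return 1.0
```

Because each Half-Gaussian equals the plateau value at its boundary, the density is continuous, letting the sampler drift into the edges of the range rather than hitting a hard wall.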
3. RESULTS

3.1. All-Sky Searches

For each GW frequency in our search, we performed a detection analysis on the 12.5-year data which marginalized over the source sky location. Figure 1 shows the Bayes factor for a CW at each searched GW frequency in purple.

[Figure 1: Savage-Dickey Bayes factors B10 for a CW at each GW frequency fGW (Hz), for the CW+CRN and CW-only models. At low frequencies, inclusion of a CRN in the model (red) is necessary to avoid a false CW detection as in the CW-only model (unfilled purple). Square markers indicate a frequency where the initial analysis returned an undefined Savage-Dickey Bayes factor, meaning the zoom-in analysis was necessary to calculate an accurate Bayes factor. With these methods, we found that no CWs are detected in the 12.5-year data set.]

It is important to note the large Bayes factor for fGW = 2.45 × 10−9 Hz (the lowest frequency analyzed), with a steady decrease in the following four frequency bins. Ordinarily, this would be a first indication for the detection of a CW. However, given the strong evidence for the existence of a CRN process in the 12.5-year data set (Arzoumanian et al. 2020a), it is clear that this signal appears to be of similar form; that is, what we have detected is bright at low frequencies and declines toward higher frequency. Once a common red-noise process is added to the model, with the log10 ACRN and γCRN parameters fixed to the maximum-likelihood values (−15.80 and 6.08, respectively) found by a search analogous to Arzoumanian et al. (2020a), the Bayes factors for a CW at low fGW return to < 1 (leftmost red points in the figure). Therefore, throughout this paper, we will present the results of many analyses with a fixed CRN included in our model. We note that a few frequencies above fGW = 1 × 10−7 Hz have B10 values that are returned as undefined.
However, upon inspection, this is due to poor sampling in a few frequency bins, where the sampler does not explore low strain values, rather than a detection of a CW. This occurs in areas of parameter space where the likelihood is particularly complex and difficult to explore in a finite run-time due to the numerous noise sources at fGW > 1 × 10−7 Hz, such as covariances between the CW likelihood and pulsar binary orbits, and potential unmodeled red noise above the 30-frequency power-law cutoff (Chalumeau et al. 2022). Therefore, a few elevated Bayes factors are not unexpected. To mitigate this effect, we adapt the methodology described in Chatziioannou et al. (2014) to use a second MCMC analysis to "zoom in" on the low end of the strain prior range by limiting the prior to the 10th percentile of the original posterior. Therefore, the posterior height at h0 = 0 becomes

p(h_0 = 0 \mid D, H_1) = \frac{n_2}{N_2}\,\frac{n_1}{N_1}\,\frac{1}{dh},   (22)

with fractional uncertainty

\sqrt{\frac{1}{n_1} + \frac{1}{n_2}},   (23)

where N1 is the number of samples in the initial run and n1 is the number of samples in the focused region (defined as the 10th percentile of the initial run).
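Equations (22) and (23) amount to a simple counting estimate chained over the two runs; a minimal sketch:

```python
import math


def zoomed_posterior_height(n1, N1, n2, N2, dh):
    """Posterior density at h0 = 0 from the two-stage 'zoom-in' analysis.
    Eq. (22): the fraction of initial samples in the focused region (n1/N1),
    times the fraction of focused samples in the lowest-amplitude bin (n2/N2),
    divided by the bin width dh. Eq. (23): its fractional counting uncertainty."""
    height = (n2 / N2) * (n1 / N1) / dh
    frac_err = math.sqrt(1.0 / n1 + 1.0 / n2)
    return height, frac_err
```

The Savage-Dickey Bayes factor then follows as the prior density at h0 = 0 divided by this posterior height, so better-resolved low-amplitude sampling directly sharpens the Bayes factor estimate.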
Then, N2 is the number of samples in the focused run, with n2 of those samples located in the lowest-amplitude bin of width dh. After this procedure, all frequencies have Bayes factor values of B10 ≲ 10. The only frequency that needed this treatment for both the CW and CW+CRN models is 1.763 × 10⁻⁷ Hz, which resulted in a Bayes factor of 15.43 in the CW+CRN case, and 7.79 in the CW-only case. While we inspected our analyses at this frequency with extra care, these Bayes factors are still relatively low compared to those required to claim a detection, especially since binaries at these high frequencies are expected to be quite rare (Kelley et al. 2018; Bécsy et al. 2022b). For comparison, evidence in favor of a given model is generally not considered strong for Bayes factors ≲ 100 (Kass & Raftery 1995). Therefore, we will monitor this frequency in future data sets, but currently, our analyses indicate that no CWs are detected in the 12.5-year data set.

As we found no strong evidence for a GW from an individual SMBHB in the 12.5-year data set, we proceeded to place all-sky upper limits on the GW strain, with results shown in Figure 2. We again conduct this analysis using two different models, one which includes only a CW (purple) and one which includes both a CW and a CRN process (red). While in both cases the most sensitive frequency (that with the lowest strain upper limit) is 7.65 × 10⁻⁹ Hz, the strain upper limits are lower when the CRN is included in the model. In this case, we can limit the strain to h0 < (6.82 ± 0.35) × 10⁻¹⁵, while when the CRN is neglected, the best limit we can place on CW strain is h0 < (9.11 ± 0.10) × 10⁻¹⁵. This trend of the CW+CRN model resulting in lower upper limits than a CW-only model continues until frequencies of approximately 1 × 10⁻⁸ Hz, above which, where the effect of the power-law CRN is minimal, the upper limit values are nearly equal. Therefore, throughout the remainder of this work, we opted to include the CRN in analyses which are too computationally expensive to be completed with both models, such as the sky map analyses described in section 3.2.

The NANOGrav Collaboration

Figure 2. All-sky CW strain 95% upper limits and associated error regions, with (red) and without (purple) a CRN included in the model. At low frequencies, modeling the CRN is necessary to avoid over-estimating our strain upper limits. We are the least sensitive to CWs at fGW = 1/(1 year) due to the Earth's orbit, creating the large feature seen in this and other figures.

Figure 3. The upper limits on CW strain are continuing to decrease. The 12.5-year data set (red curve and error region) is more sensitive than the 11-year, 9-year, and 5-year (green, orange, and blue curves, respectively) at high frequencies. At the most sensitive frequency of fGW = 7.65 × 10⁻⁹ Hz, the CRN is impeding further sensitivity improvements, and upper limits are comparable between the 12.5-year and 11-year data sets. At frequencies greater than fyr, NANOGrav's sensitivity has improved by a factor of 1.40 since the 11-year data set.

In Figure 3, we compare this result to those of previous NANOGrav searches for CWs (Aggarwal et al. 2019).
While analyses have shown a factor of ∼2 improvement between the previous three data sets, we see only a modest sensitivity improvement between the 11-year and 12.5-year data, with only a factor of 1.07 between the two lowest strain limits. In addition to the smaller fractional increase in observing baseline between the 11- and 12.5-year data sets as compared to previous data sets, this is likely due to the presence of the CRN, which, while it is no longer causing a false positive in the CW search if included in the model, does represent a significant noise process that will limit our sensitivity to low-frequency CWs over the years to come (Hazboun et al. 2019b). To confirm this hypothesis, we calculated the sensitivity curves of the 9-, 11-, and 12.5-year data sets using each pulsar's red and white noise contributions and timing model with hasasia (Hazboun et al. 2019a,b) and calculated the relative improvement in sensitivity between each data set at high frequencies (> fyr), where red noise has little effect. We observed that on average, the hasasia-calculated sensitivity at these frequencies improved by a factor of 1.28 between the 9- and 11-year data sets, and 1.24 between the 11- and 12.5-year data sets. In our full Bayesian analysis, our upper limits at frequencies above fyr improved by a factor of 1.52 between the 9- and 11-year data sets, and 1.40 between the 11- and 12.5-year data sets. These proportionalities are even greater than our calculated improvements, so we are able to conclude that NANOGrav's sensitivity to CWs is improving as expected at high frequencies where red noise is not dominant.

3.2. Sky Map

In Figure 4, we show the GW strain upper limits for a model including a CRN at the most sensitive CW frequency fGW = 7.65 × 10⁻⁹ Hz as a function of sky location. As expected, the portion of the sky that is the least sensitive to CWs is that which contains the fewest pulsars. At the most sensitive pixel, the strain upper limit is h0 < (2.66 ± 0.15) × 10⁻¹⁵, while at the least sensitive pixel, h0 < (1.12 ± 0.05) × 10⁻¹⁴, a range of sensitivities that varies by a factor of ∼4.

NANOGrav 12.5-year Continuous Wave Limits

Figure 4. Map of CW strain 95% upper limits at fGW = 7.65 × 10⁻⁹ Hz, the most sensitive frequency searched, for the 12.5-year data set. Pulsar locations are shown as white stars, with new pulsars added from the 12.5-year data set outlined in red. The most sensitive pixel is marked with a red dot, and is located at an RA of 19h07m30s and a Dec of −30°00′00″. In this region, where our best-timed pulsars lie, our upper limits are nearly an order of magnitude more sensitive than the least sensitive pixel.

In Figure 5, we compare the 12.5-year CW strain map to that constructed in Aggarwal et al. (2019) for the 11-year data set by plotting ∆h95 = h95,12.5 − h95,11.
While a portion of the sky shows a significant reduction in strain upper limits, many pixels show an increase in strain upper limit, indicating a loss of sensitivity in the newest data set for much of the sky at our most sensitive frequency, including in the most sensitive area of the sky. To investigate the cause of this apparent sensitivity loss, we conducted an analysis of the simulated data utilized in Pol et al. (2021). We selected portions of the data set with included pulsars and observation baselines corresponding to the 11- and 12.5-year data sets that also included a CRN corresponding to that found in Arzoumanian et al. (2018). Then, we conducted identical upper limit analyses for an equatorial slice of sky pixels (i.e., for the pixels with θ ∼ π/2).
When plotted against φ in Figure 6, the patterns in ∆h95 in the real data are well within the range represented by the same analysis in the 10 simulated data sets, each containing a different realization of the CRN. The mean value of ∆h95 across each included pixel is nearly identical for the real data and the simulations. Together, this allows us to confidently state that this apparent pattern in our evolving sensitivity across the sky is due to the emerging CRN.

Figure 5. Difference in strain 95% upper limits for the 12.5-year data set versus the 11-year data set at our most sensitive frequency. Blue pixels indicate a decrease in upper limit, while red pixels indicate an increase. The overall increase in upper limit across much of the sky at the most sensitive frequency was found to be due to the presence of the CRN, and is consistent with the all-sky limit shown in Figure 3.
Figure 6. The difference in strain upper limits for an equatorial slice of the sky map shown in Figure 5 plotted against φ (or RA). The results for the real data (red points) are well within the range of values encompassed by the ten realizations simulated (blue), with near-identical mean values of ∆h95 (horizontal red and blue lines). Therefore, we conclude that the overall increase in upper limit across much of the sky at our most sensitive frequency is due to the 12.5-year data set's sensitivity to the CRN.

4. ASTROPHYSICAL LIMITATIONS OF NEARBY SMBHBS

In recent years, numerous studies have modeled the SMBHB population in the nearby universe (Simon et al. 2014; Rosado & Sesana 2014; Mingarelli et al. 2017; Arzoumanian et al. 2021b) and multiple SMBHB candidates have been discovered with electromagnetic techniques (Sudou et al. 2003; Graham et al. 2015; Hu et al. 2020; Lehto & Valtonen 1996; Charisi et al. 2016; Liu et al. 2019). Even without a CW detection, our limits can add crucial insights into SMBHB populations, including limiting the distance to nearby SMBHBs and placing multi-messenger mass constraints on SMBHB candidates.

4.1. Distance Limits

Our limits on CW strain can be transformed using Equation 15 to calculate the 95% lower limit on the luminosity distance to a source of a given chirp mass. The distance limits for an SMBHB with M = 10⁹ M⊙ are shown in Figure 7.
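Equation 15 itself is not reproduced in this excerpt; the sketch below instead uses the standard strain amplitude for a circular binary, h0 = 2(GM)^(5/3)(πfGW)^(2/3)/(c⁴dL), inverted so that a strain upper limit becomes a distance lower limit. The constants and averaging convention are assumptions rather than taken from the paper, though the example reproduces the dL > 33.85 Mpc limit quoted in the text:

```python
import math

G = 6.6743e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 299792458.0     # speed of light, m/s
M_SUN = 1.989e30    # solar mass, kg
MPC = 3.0857e22     # megaparsec, m

def distance_lower_limit_mpc(h95, f_gw, chirp_mass_msun):
    """Invert h0 = 2 (G*Mc)^(5/3) (pi*f)^(2/3) / (c^4 * dL):
    a 95% strain upper limit maps to a 95% distance lower limit."""
    gmc = G * chirp_mass_msun * M_SUN
    d_m = 2.0 * gmc ** (5.0 / 3.0) * (math.pi * f_gw) ** (2.0 / 3.0) / (C ** 4 * h95)
    return d_m / MPC

# All-sky limit from the text: h0 < 6.82e-15 at f_GW = 7.65e-9 Hz for Mc = 1e9 Msun
d95 = distance_lower_limit_mpc(6.82e-15, 7.65e-9, 1e9)  # ~33.85 Mpc, matching the quoted limit
```

The Mc^(5/3) dependence of this relation is also what drives the mass scaling in Equation (24) below.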
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' For the most sensitive frequency of fGW = 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='65 × 10−9 Hz, we can limit the distance to an SMBHB with M = 109M⊙ to dL > 33.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='85 Mpc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' These limits may be scaled to larger or smaller SMBHBs directly using Equation 15 as D95,M = D95,109M⊙ × � M 109M⊙ �5/3 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' (24) However, it is important to note that while this fre- quency produces the lowest strain upper limit, it does not produce the farthest luminosity distance lower limit.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' This value is dL > 34.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='99 Mpc at fGW = 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='817 × 10−8 Hz.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' This technique can be applied to the strain upper limit sky map as well, to calculate the 95% luminos- 10−8 10−7 fGW (Hz) 10−1 100 101 � M 109M⊙ �5/3 × D95 (Mpc) 5-year 9-year 11-year UL 12.' 
Figure 7. The 95% lower limits on the luminosity distance to an individual SMBHB. While we can limit SMBHBs emitting GWs at the most sensitive value of fGW = 7.65 × 10⁻⁹ Hz to dL > 33.85 Mpc, at fGW = 3.817 × 10⁻⁸ Hz they can be limited to farther away, at dL > 34.99 Mpc.

Figure 8. Map of the 95% lower limit on the distance to individual SMBHBs with M = 10⁹ M⊙ and fGW = 7.65 × 10⁻⁹ Hz; the color scale shows (M/10⁹ M⊙)^(5/3) × D95 in Mpc. White diamonds indicate the positions of known SMBHB candidates and large galaxy clusters that could contain an SMBHB. As PTA sensitivities improve, these candidates may come into reach.

luminosity distance lower limit for an SMBHB emitting CWs at fGW = 7.65 × 10⁻⁹ Hz as a function of sky location. The results of this transformation are shown in Figure 8. At the most sensitive sky location, we can limit the minimum distance to an M = 10⁹ M⊙ SMBHB to be dL > 86.65 Mpc, and that to an M = 10¹⁰ M⊙ SMBHB to dL > 4.02 Gpc. In the least sensitive sky location, we can limit the minimum distance to an M = 10⁹ M⊙ SMBHB to be dL > 20.50 Mpc, and that to an M = 10¹⁰ M⊙ SMBHB to dL > 0.95 Gpc. These values vary by over a factor of 4 between the most and least sensitive parts of the sky.

4.2. SMBHB Number Density Limits

NANOGrav 12.5-year Continuous Wave Limits 13

Figure 9. Number density limits of SMBHBs per comoving Mpc⁻³ as a function of fGW, for chirp masses of 10⁸ (blue), 10⁸·⁵ (orange), 10⁹ (green), and 10⁹·⁵ (red) solar masses. We placed significantly more stringent upper limits on the largest SMBHBs than the smallest ones in the local universe.

Using our limits on the luminosity distance to an SMBHB, we can also place limits on the local number density of SMBHBs of a given binary configuration. After placing a lower limit on the effective comoving distance dc to sources of given binary parameters, we can say the local density is less than nc = 1/Vc = [(4/3)π dc³]⁻¹. However, to consider this as a limit on the average density in some volume that is relatively local but larger than the explicitly measured volume, there should be some additional pre-factor to account for the confidence of having a source within this volume, based on Poisson distributions of sources.
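The density bound just described, including the Poisson confidence pre-factor derived in the text as −ln(1 − p₀), can be sketched numerically. This is only an illustration; the distance value used below is the best-sky dc scale quoted above, inserted as an example input:

```python
import math

def density_upper_limit(d_c_mpc: float, p0: float = 0.95) -> float:
    """Upper limit (in Mpc^-3) on the local SMBHB number density, given a
    lower limit d_c on the comoving distance to such a source.
    The -ln(1 - p0) factor is the standard Poisson zero-detection limit."""
    v_c = (4.0 / 3.0) * math.pi * d_c_mpc ** 3   # comoving volume probed, Mpc^3
    return -math.log(1.0 - p0) / v_c

# Example: d_c ~ 86.65 Mpc, roughly the best-sky limit for M = 1e9 Msun
n_ul = density_upper_limit(86.65)
```

Without the pre-factor the naive bound would be 1/Vc; the factor −ln(1 − 0.95) ≈ 3.0 loosens it so that a population of density n_ul would have produced at least one source inside Vc with 95% probability.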
For a number of events Λ = nc Vc, the likelihood of no detections is P₀(Λ) = e^{−Λ}. To find an upper limit on the occurrence rate, Λ_UL, we must integrate from that limit to infinity, such that the result matches our desired confidence level p₀. Therefore,

F_UL(Λ_UL) = ∫_{Λ_UL}^{∞} e^{−Λ} dΛ = 1 − p₀

is solved as

n_UL = −ln(1 − p₀) / Vc .  (25)

Here, our desired confidence level is p₀ = 0.95. To calculate the comoving distance dc, we transform our luminosity distance limits (shown in Figure 7) as dc = dL/(1 + z), where z is calculated for the relevant luminosity distance values using astropy. The results of this calculation are shown for various SMBHB chirp masses in Figure 9. As can be expected, we find that we can place more constraining upper limits on large SMBHBs (M = 10⁹·⁵ M⊙) than smaller ones (M = 10⁸ M⊙) in the local universe.

4.3. Multi-Messenger Analyses

Using the methodology described in section 2.3.3, we conducted a multi-messenger search for GWs from the

Figure 10. Posterior distributions for a targeted upper limit analysis of the SMBHB candidate 3C 66B.
While 95% upper limits (red and purple lines) are lower than in the 11-year data set (blue line), they cannot rule out the model from Iguchi et al. (2010) (orange region).

SMBHB candidate 3C 66B to provide an update to the results of Arzoumanian et al. (2020b). The detection analyses result in nearly identical Savage-Dickey Bayes factors, whether the CRN was included or not. This is to be expected, as the CRN is very weak at frequencies as high as that of 3C 66B (fGW = 6.04 × 10⁻⁸ Hz). The Bayes factors for the CW-only analysis and the CW+CRN analysis are 0.70 ± 0.02 and 0.67 ± 0.01, respectively. Both of these values are very near 1, meaning that the data do not indicate the presence of a CW corresponding to a binary within 3C 66B. Because no GW was detected, we constrain the chirp mass of a potential binary with an upper limit analysis, again performed with and without a CRN to confirm consistency. The posteriors from these two searches are plotted in Figure 10, with resulting 95% upper limits of (1.41 ± 0.02) × 10⁹ M⊙ when a CRN is included, and (1.34 ± 0.01) × 10⁹ M⊙ when only CWs are included in the signal. For comparison, the 95% chirp mass upper limit for 3C 66B from the 11-year data set was 1.65 × 10⁹ M⊙. This represents an improvement of 2.4 × 10⁸ M⊙, or a factor of 1.2 smaller; by adding pulsars, extending timing baselines, and improving timing and searching methods, the PTA's sensitivity has clearly improved. These upper limits are nearer to the value of the upper bound of the Iguchi et al. (2010) chirp mass estimate. In subsequent data sets, or by using more sophisticated analyses such as advanced noise modeling (Simon & Hazboun in prep), this error region may soon be within reach.

In Arzoumanian et al. (2020b), it was shown that a targeted search, like this analysis, results in a factor of ∼2 reduction in upper limits compared to those of an all-sky search at a corresponding GW frequency. When converted to strain amplitudes rather than chirp masses, the 95% upper limits are 1.90 × 10⁻¹⁴ and 1.74 × 10⁻¹⁴ for the searches with and without a CRN, respectively. In comparison, the all-sky analysis in section 3.1 returned strain upper limits of 3.56 × 10⁻¹⁴ and 3.82 × 10⁻¹⁴ at 6.01 × 10⁻⁸ Hz, the nearest frequency to that of 3C 66B at 6.04 × 10⁻⁸ Hz.
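The factor-of-∼2 gain of the targeted search over the all-sky search follows directly from these strain values; a quick arithmetic check:

```python
# 95% strain upper limits near 3C 66B's frequency, taken from the text.
all_sky  = {"with_crn": 3.56e-14, "no_crn": 3.82e-14}  # section 3.1, at 6.01e-8 Hz
targeted = {"with_crn": 1.90e-14, "no_crn": 1.74e-14}  # this targeted analysis

# Improvement factor of the targeted search in each noise model.
gains = {k: all_sky[k] / targeted[k] for k in all_sky}
# gains["with_crn"] is about 1.87 and gains["no_crn"] about 2.20
```

The unrounded ratios reproduce the improvement factors quoted in the following sentence to within rounding of the published strain values.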
These all-sky strain upper limits are a factor of 1.88 and 2.20 larger, very similar to the value for the 11-year data set. Therefore, the improvement in upper limits gained by using this multi-messenger technique has stayed stable across the addition of new pulsars, more data, and the emergence of the CRN.

Additionally, we performed a new search for the electromagnetic SMBHB candidate HS 1630+2355. First identified as a periodic quasar in Graham et al. (2015), this candidate is identified as a top PTA CW candidate in Xin et al. (2021), with a gravitational wave frequency of 1.13 × 10⁻⁸ Hz and a luminosity distance of 5.26 Gpc. In the 12.5-year data set, we do not detect any CWs from HS 1630+2355; in a CW+CRN analysis (necessary due to the low GW frequency), we calculate a Bayes factor of 0.74 ± 0.02. Then, we are able to set an upper limit of (1.28 ± 0.03) × 10¹⁰ M⊙ on the chirp mass of an SMBHB within HS 1630+2355, which corresponds to a strain of 4.03 × 10⁻¹⁵. For comparison, the all-sky upper limit at the nearest frequency of 1.10 × 10⁻⁸ Hz is 1.07 × 10⁻¹⁴, a factor of 2.66 larger than the targeted upper limit. Due to this candidate's favorable position near the PTA's most sensitive sky location, we are able to overcome the much larger source distance to set a constraining upper limit. However, this limit is still approximately 4 times larger than the estimated chirp mass of 3.15 × 10⁹ M⊙ (Xin et al. 2021), meaning that more data are needed to rule out or detect an SMBHB within HS 1630+2355.

4.4. Local Detection Prospects

At the most sensitive sky pixel, we conducted a final upper limit analysis across the entire frequency band, with results plotted in Figure 11.
Here we observed that for all frequencies, the PTA is dramatically more sensitive to CWs from sources at this sky location than across the entire sky on average. Mingarelli et al. (2017) carried out a comprehensive study of the detection prospects of SMBHBs within a 225 Mpc volume, the completeness limit for their chosen K-band luminosity in 2MASS. Using these new 12.5-year upper limit curves, we assess our level of surprise at our current non-detection of CWs. Figure 11 shows an example realization of the local SMBHB population created with nanohertz gws (Mingarelli 2017). It is one out of 75,000 Monte Carlo realizations Mingarelli et al. (2017) carried out, where they varied black hole masses via the scatter in various M − Mbulge relations, mass ratios, and more.

Figure 11. The 95% strain upper limit curve for the all-sky (solid red) CW search compared with the 95% strain upper limit curve in the most sensitive sky location (red dashed). The non-detection of a nearby SMBHB is unsurprising: there was at best a 0.5% chance of making such a detection. Here we show one of the 398 realizations of the local Universe from Mingarelli et al. (2017) that contains a detectable SMBHB, together with our 95% upper limit curves for both sky-averaged and best sky locations. In this realization there are 87 local SMBHBs (all within 225 Mpc); none of them lie above the sky-averaged upper limit curve, but one could be detected if it were at the most-sensitive sky location.

While the chosen realization shows what a detectable SMBHB would look like, on average we found only 398 realizations out of the 75,000 contained detectable SMBHB systems at the best sky location. We therefore only had a 0.5% chance of making a detection of such a local source with the 12.5-year data set. Furthermore, when we consider the entire sky, we found an order of magnitude fewer SMBHBs were detectable: only 43 realizations contained detectable binaries. It is interesting to compare this result to that of our previous upper limit (Aggarwal et al. 2019). With the NANOGrav 11-year all-sky upper limits, we found 34 detectable SMBHBs, and here we find 43, an overall improvement.
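The quoted detection chances are simple ratios of detectable realizations to the total number of Monte Carlo draws:

```python
# Fractions of Mingarelli et al. (2017) realizations with a detectable SMBHB
# under the 12.5-year upper limit curves, using the counts quoted above.
total = 75_000
best_sky = 398   # detectable at the most sensitive sky location
all_sky = 43     # detectable under the sky-averaged limit

p_best = best_sky / total   # about 0.0053, i.e. the ~0.5% chance quoted
p_all = all_sky / total     # about 0.0006, an order of magnitude smaller
```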
However, the upper limit at our best sky location has deteriorated due to the CRN, which has in turn decreased the number of detectable binaries by a factor of ∼2, from a 1.2% chance of detection to 0.5%. As was the case in previous sections, we note that this deterioration is happening primarily at low frequencies where the CRN is manifesting in the data, and the most sensitive sky location is heavily affected (Figure 5 and Figure 6). Xin et al. (2021) show that at higher GW frequencies the effect of the GWB, or any equivalent CRN, is very small, so the detection prospects for local SMBHBs are unaffected.

NANOGrav 12.5-year Continuous Wave Limits

[Figure 12: color map of the modeled SMBHB number density φ_BHB over log chirp mass log10(M/M⊙) and redshift z, with one-dimensional side panels and "All Sky" / "Best" upper-limit curves.]

Figure 12. The SMBHB mass function (φ_BHB) derived from astrophysical models shows the modeled number density of SMBHBs (color bar) across log chirp mass (log10 M/M⊙) and redshift (z). Side panels show φ_BHB in one dimension integrated across each respective variable. Regions that are inconsistent with our 12.5-year CW search are shown in white, with the all-sky (average) and most-sensitive (best) sky location upper limits shown under the solid and dash-dotted white curves, respectively. Created using methods from Casey-Clyde et al. (in prep).

4.5. Binary Population Model Consistency

Finally, it was also useful to assess whether our current non-detection of CWs is consistent with expectations from SMBHB population models. In Figure 12 we compared an astrophysically motivated SMBHB model to GW upper limits set with the 12.5-year CW search. The SMBHB model was derived from theoretical galaxy major merger rates (Chen et al. 2019), which are themselves based on observed galaxy pair fractions (Mundy et al. 2017) and theoretical galaxy merger timescales. It is related to the GWB via (Phinney 2001; Sesana 2013)

h_c^2(f) = \frac{4}{3\pi} \frac{1}{f^{4/3}} \iint \phi_{\rm BHB}(\mathcal{M}, z)\, \frac{\mathcal{M}^{5/3}}{(1+z)^{1/3}}\, d\mathcal{M}\, dz,   (26)

where h_c is the characteristic strain of the GWB and \mathcal{M} is the chirp mass in the observer frame. This was fit to the results of the NANOGrav search for the GWB in the 12.5-year data set (Arzoumanian et al. 2020a), and assumes the CRN is due to a GWB, comparable to the fit in Middleton et al. (2021). The GW limits in Figure 12 were calculated using the most sensitive frequency of both the all-sky and most-sensitive sky location analyses. Figure 12 thus shows what regions of z–M parameter space were accessible to the 12.5-year CW search.
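The double integral in Equation (26) can be evaluated numerically once a mass function is specified. The sketch below uses a hypothetical log-normal φ_BHB with an arbitrary normalization and toy units; it is an illustration of the integral's structure, not the paper's fitted model.

```python
import numpy as np

# Toy SMBHB differential mass function phi_BHB(M, z) (per unit chirp mass and
# redshift). The log-normal shape, normalization, and grids are illustrative
# stand-ins, NOT the paper's fitted model.
def phi_bhb(m, z):
    return 1e-12 * np.exp(-0.5 * ((np.log10(m) - 9.0) / 0.5) ** 2) / m * np.exp(-z)

def hc_squared(f, m_grid, z_grid):
    """Eq. (26): hc^2(f) = 4/(3 pi f^(4/3)) * integral of
    phi_BHB(M, z) * M^(5/3) / (1+z)^(1/3) dM dz (toy units)."""
    mm, zz = np.meshgrid(m_grid, z_grid, indexing="ij")
    integrand = phi_bhb(mm, zz) * mm ** (5.0 / 3.0) / (1.0 + zz) ** (1.0 / 3.0)
    dm = np.gradient(m_grid)      # cell widths of the log-spaced mass grid
    dz = z_grid[1] - z_grid[0]    # uniform redshift spacing
    total = np.sum(integrand * dm[:, None] * dz)
    return 4.0 / (3.0 * np.pi) * f ** (-4.0 / 3.0) * total

m_grid = np.logspace(7, 10, 400)    # chirp mass grid [Msun]
z_grid = np.linspace(0.0, 1.5, 200)
h2 = hc_squared(7.65e-9, m_grid, z_grid)
# Since f factors out of the integral, hc^2 scales exactly as f^(-4/3),
# i.e. hc ~ f^(-2/3), the standard GWB spectral slope.
ratio = hc_squared(2 * 7.65e-9, m_grid, z_grid) / h2
```

Whatever mass function is plugged in, the frequency dependence stays h_c ∝ f^{-2/3}; only the overall amplitude carries the astrophysics.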
Since no CWs were detected, we are able to rule out the high-mass and low-z region across the entire sky and at the most sensitive sky location for the PTA's most sensitive frequency. We calculate the expected number of detectable SMBHBs by relating the differential SMBHB mass function φ_BHB to the differential number of binaries per chirp mass, frequency, and redshift (Sesana et al. 2008) as

\frac{d^3 N}{d\log\mathcal{M}\, dz\, df} = \frac{d^2\phi_{\rm BHB}}{d\log\mathcal{M}\, dz}\, \frac{dV}{dz}\, \frac{dz}{dt_r}\, \frac{dt_r}{df_r}\, \frac{df_r}{df},   (27)

and integrating across the relevant region of z–M space, while also considering the entire strain sensitivity curve in frequency space. Here, t_r and f_r are the proper time and binary gravitational wave frequency in the SMBHB's rest frame, respectively. We find in both cases that the expected number of SMBHBs is ≪ 1. At the all-sky sensitivity, the calculated number is 0.6^{+1.1}_{-0.4} × 10^{-4}, while at the most sensitive sky location, the calculated number is 8.6^{+12.9}_{-5.5} × 10^{-4}. Our non-detection of a CW signal is thus consistent with theoretical models of the SMBHB population, which predict that the most massive, and therefore loudest, SMBHBs are exceedingly rare.

5. DISCUSSION AND FUTURE PROSPECTS

While the NANOGrav PTA is continuing to increase our sensitivity to GWs by adding data from ongoing observations and adding new pulsars to the PTA, our limits on CW strains across the nanohertz GW frequency band and the sky have not improved as steadily as in previous data sets. This is due to the CRN first detected in the 12.5-year data set in Arzoumanian et al. (2020a), which has impacted the PTA's ability to distinguish a CW source. While adding a CRN to the search model that is fixed to the maximum-likelihood values from a dedicated search avoids confusion in detection analyses, this adds a significant source of noise to the PTA, and therefore limits our sensitivity to CWs at frequencies below 10 nHz. We have entered an interesting era where surprising results will continue to be uncovered. In future data sets, the CRN will likely be even more apparent in the data, and may eventually resolve to be due to a stochastic GWB from SMBHBs (Pol et al. 2021). In any case, due to the multi-frequency nature of the GWB, this will continue to impact CW searches, and significant efforts will be needed to continue development of methods that allow for efficient detection of both types of nanohertz GW signals, such as in Bécsy & Cornish (2020), as well as extensive simulations that evaluate detection possibilities, as in Pol et al. (2021), that include multiple types of GW signal in the simulated data sets.

The NANOGrav Collaboration

Additionally, significant effort will be needed to improve sampling methods that can efficiently explore the complex CW parameter space (Bécsy et al. 2022a), particularly at high GW frequencies or if full eccentricity modeling is desired (Taylor et al. 2016), complexities which will only be exacerbated as data sets expand. One promising path forward is targeted searches of quasars, which may be much more likely to host SMBHBs than random galaxies (Casey-Clyde et al. in prep). Since multi-messenger analyses can improve upper limits by a factor of 2 (Arzoumanian et al. 2020b), improve detection prospects (Liu & Vigeland 2021; Charisi et al. 2022), and can be made drastically more efficient than traditional all-sky searches (Charisi et al. 2022), further development of these methods is also crucial, as with more data, electromagnetic SMBHB candidates may soon be detectable (Xin et al. 2021), and many more will be identified in upcoming surveys (Charisi et al. 2022; Witt et al. 2022). By balancing these efforts, a CW signal may soon come into reach.

6. CONCLUSIONS

With extensive Bayesian analyses, we have searched the NANOGrav 12.5-year data set for CWs from individual SMBHBs.
In our detection analyses, we showed that no CWs were detected to a high degree of confidence. We then placed all-sky upper limits on the strain amplitude for all CWs emitting between 2.45 × 10^{-9} Hz and 3.19 × 10^{-7} Hz, as well as upper limits as a function of sky location for the 12.5-year data set's most sensitive frequency of 7.65 × 10^{-9} Hz. This analysis also included the development of new methods to accurately reflect the realistic distribution of possible values of pulsar distances from updated measurements. The way we treat these values in search pipelines has a significant impact on our ability to detect the pulsar term of a CW signal, and these methods will be critical as we proceed towards PTA sensitivities that enable a CW detection.

Unlike previous data sets, the 12.5-year data set contains a significant CRN. Therefore, for the first time, we included the CRN in our Bayesian searches by fixing the model parameters to those recovered in Arzoumanian et al. (2020a). This had a significant effect on the results of many of our analyses, and proved critical to avoid a false detection of a CW at 2.45 × 10^{-9} Hz. This process also significantly impeded the reduction of our upper limits between the 11-year and 12.5-year NANOGrav searches at the most sensitive frequency of 7.65 × 10^{-9} Hz in most areas of the sky. Despite these new necessities, we are able to place significant astrophysical constraints on the local SMBHB population.
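A strain upper limit at a given frequency translates into a distance horizon for a given chirp mass through the standard circular-binary strain amplitude, h_0 = 2 (G\mathcal{M})^{5/3} (\pi f_{\rm GW})^{2/3} / (c^4 d_L). The sketch below uses round-number physical constants and illustrative inputs, not the paper's exact pipeline.

```python
import math

G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
C = 2.998e8        # speed of light [m/s]
MSUN = 1.989e30    # solar mass [kg]
MPC = 3.086e22     # megaparsec [m]

def strain_amplitude(mchirp_msun, f_gw_hz, dist_mpc):
    """Standard circular-binary strain amplitude:
    h0 = 2 (G Mc)^(5/3) (pi f_gw)^(2/3) / (c^4 d_L)."""
    mc = mchirp_msun * MSUN
    d_l = dist_mpc * MPC
    return (2.0 * (G * mc) ** (5.0 / 3.0)
            * (math.pi * f_gw_hz) ** (2.0 / 3.0) / (C ** 4 * d_l))

def horizon_distance_mpc(mchirp_msun, f_gw_hz, h0_limit):
    """Distance inside which such a binary would exceed a strain limit
    (h0 falls off as 1/d_L)."""
    return strain_amplitude(mchirp_msun, f_gw_hz, 1.0) / h0_limit

# Illustrative: strain of a 10^9 Msun chirp-mass binary emitting at
# 7.65e-9 Hz as seen from 86.65 Mpc.
h0 = strain_amplitude(1e9, 7.65e-9, 86.65)
```

Because h_0 ∝ \mathcal{M}^{5/3}/d_L, a sky-dependent strain limit maps directly onto a mass-dependent exclusion distance of the kind quoted in the conclusions.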
In our most sensitive sky location, we can rule out the existence of any SMBHB with a mass of at least 10^9 M⊙ emitting at 7.65 × 10^{-9} Hz within 86.65 Mpc. Furthermore, we demonstrate that significant improvements to chirp mass upper limits of SMBHB candidates can be made through multi-messenger analysis techniques, and limit the chirp mass of 3C 66B to (1.34 ± 0.01) × 10^9 M⊙. With the inclusion of more data, we will soon be able to rule out or confirm this source and other binary candidates, as well as those that are yet undiscovered.

7. ACKNOWLEDGEMENTS

Author contributions: An alphabetical-order author list was used for this paper in recognition of the fact that a large, decade-timescale project such as NANOGrav is necessarily the result of the work of many people.
All authors contributed to the activities of the NANOGrav collaboration leading to the work presented here, and reviewed the manuscript, text, and figures prior to the paper's submission. Additional specific contributions to this paper are as follows. ZA, HB, PRB, HTC, MED, PBD, TD, JAE, RDF, ECF, EF, NG-D, PAG, DCG, MLJ, MTL, DRL, RSL, JL, MAM, CN, DJN, TTP, NSP, SMR, KS, IHS, RS, JKS, RS and SJV developed the 12.5-year data set through a combination of observations, arrival time calculations, data checks and refinements, and timing model development and analysis; additional specific contributions to the data set are summarized in Alam et al. (2021a). CAW coordinated the writing of the paper and led the search. BB, ARK, NSP, JSy, GW, and CAW performed analyses for the project, including exploratory runs. JS and CAW developed methods to include the CRN in the search model. AB, NG-D, JG, KG, SRT, SJV, and CAW proposed for the necessary XSEDE resources to complete these analyses. NSP and CAW performed the sky map simulations. AC-C, LZK, CMFM, and CAW developed the astrophysical interpretations. ADJ provided updates to red noise empirical distributions. GEF, XS, and SJV explored the frequentist analyses. SC, DJN, MAM, and CAW updated the pulsar distance priors. SBS, CMFM, and CAW wrote the manuscript and produced the figures. We thank BB, SC, JMC, NJC, WF, KG, JSH, DLK, LZK, MTL, TJWL, MAM, DJN, KDO, JDR, SRT, and SJV for their thoughtful comments on the manuscript.
Acknowledgements. This work has been carried out by the NANOGrav collaboration, which receives support from National Science Foundation (NSF) Physics Frontiers Center award numbers 1430284 and 2020265. The Arecibo Observatory is a facility of the NSF operated under cooperative agreement (No. AST-1744119) by the University of Central Florida (UCF) in alliance with Universidad Ana G. Méndez (UAGM) and Yang Enterprises (YEI), Inc. The Green Bank Observatory is a facility of the NSF operated under cooperative agreement by Associated Universities, Inc. The National Radio Astronomy Observatory is a facility of the NSF operated under cooperative agreement by Associated Universities, Inc. SBS and CAW were supported in this work by NSF award grant Nos. 1458952 and 1815664. CAW acknowledges support from West Virginia University through a STEM Completion Grant, and acknowledges support from CIERA, the Adler Planetarium, and the Brinson Foundation through a CIERA-Adler postdoctoral fellowship. SBS is a CIFAR Azrieli Global Scholar in the Gravity and the Extreme Universe program. MC and SRT acknowledge support from NSF grant No. AST-2007993. SRT also acknowledges support from an NSF CAREER Award PHY-2146016, and a Vanderbilt University College of Arts & Science Dean's Faculty Fellowship. CMFM was supported in part by the National Science Foundation under Grants NSF PHY-2020265, and AST-2106552. The Flatiron Institute is supported by the Simons Foundation. Part of this research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. Portions of this work performed at NRL were supported by Office of Naval Research 6.1 funding. Pulsar research at UBC is supported by an NSERC Discovery Grant and by the Canadian Institute for Advanced Research. JS and MV acknowledge support from the JPL RTD program. KDO was supported in part by the National Science Foundation under grant No. 2207267. ECF is supported by NASA under award number 80GSFC21M0002. TD and MTL are supported by an NSF Astronomy and Astrophysics Grant (AAG) award number 2009468. LZK was supported by a Cottrell Fellowships Award (No. 27985) from the Research Corporation for Science Advancement made possible by the National Science Foundation grant No. CHE2125978. MED acknowledges support from the Naval Research Laboratory by NASA under contract S-15633Y. We acknowledge the use of Thorny Flat at WVU, which is funded in part by the National Science Foundation Major Research Instrumentation Program (MRI) award No. 1726534 and WVU. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1548562.
Specifically, it used the Bridges-2 system, which is supported by NSF award number ACI-1928147, at the Pittsburgh Supercomputing Center (PSC) (Towns et al. 2014).

Facilities: Arecibo, GBT

Software: enterprise (Ellis et al. 2019), enterprise_extensions (Taylor et al. 2021), PTMCMCSampler (Ellis & van Haasteren 2017), hasasia (Hazboun et al. 2019a), libstempo (Vallisneri 2020), tempo (Nice et al. 2015), tempo2 (Hobbs et al. 2006), PINT (Luo et al. 2019), matplotlib (Hunter 2007), astropy (Price-Whelan et al. 2018; Astropy Collaboration et al. 2013), healpy (Zonca et al. 2019), HEALPix (Górski et al. 2005), nanohertz_GWs (Mingarelli 2017)

APPENDIX

A. PULSAR DISTANCE VALUES

REFERENCES

Abdo, A. A., Ajello, M., Allafort, A., et al. 2013, ApJS, 208, 17, doi: 10.1088/0067-0049/208/2/17
Aggarwal, K., Arzoumanian, Z., Baker, P. T., et al. 2019, ApJ, 880, 116, doi: 10.3847/1538-4357/ab2236
—. 2020, ApJ, 889, 38, doi: 10.3847/1538-4357/ab6083
Alam, M. F., Arzoumanian, Z., Baker, P. T., et al. 2021a, ApJS, 252, 4, doi: 10.3847/1538-4365/abc6a0
—. 2021b, ApJS, 252, 5, doi: 10.3847/1538-4365/abc6a1
Antoniadis, J., Arzoumanian, Z., Babak, S., et al. 2022, MNRAS, 510, 4873, doi: 10.1093/mnras/stab3418
Arzoumanian, Z., Brazier, A., Burke-Spolaor, S., et al.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' 2014, ApJ, 794, 141, doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='1088/0004-637X/794/2/141 —.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' 2016, ApJ, 821, 13, doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='3847/0004-637X/821/1/13 Arzoumanian, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', Baker, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', Brazier, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' 2018, ApJ, 859, 47, doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='3847/1538-4357/aabd3b Arzoumanian, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', Baker, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', Blumer, H.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' 2020a, ApJL, 905, L34, doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='3847/2041-8213/abd401 Arzoumanian, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', Baker, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', Brazier, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' 2020b, ApJ, 900, 102, doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='3847/1538-4357/ababa1 18 The NANOGrav Collaboration Table 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' Compiled pulsar distance values and uncertainties for each pulsar used in the 12.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='5-year CW analysis, along with the parallax (PX) or DM prior identifier.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' Values compiled using measurements from Ding et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' (2020);' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' Jennings et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' (2018);' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' Deller et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' (2019);' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' Guillemot et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' (2016);' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' Stovall et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' (2014);' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' Abdo et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' (2013);' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' Freire et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' (2012);' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' Verbiest et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' (2009);' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' Lazaridis et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' (2009);' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' Chatterjee et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' (2009);' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' Hotan et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' (2006);' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' Lommen et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' (2006);' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' Jacoby et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' (2005);' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' Splaver et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' (2005);' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' L¨ohmer et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' (2004);' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' Toscano et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' (1999);' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' Camilo et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' (1994) and Alam et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' (2021a).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' Pulsar Prior Distance (kpc) Error (kpc) Pulsar Prior Distance (kpc) Error (kpc) B1855+09 PX 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='4 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='24 B1937+21 PX 3.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='55 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='64 B1953+29 DM 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='64 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='93 J0023+0923 PX 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='82 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='41 J0030+0451 PX 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='32 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='01 J0340+4130 DM 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='71 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='34 J0613-0200 PX 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='06 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='13 J0636+5128 PX 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='73 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='12 J0645+5158 PX 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='11 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='19 J0740+6620 DM 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='68 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='14 J0931-1902 DM 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='88 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='38 J1012+5307 PX 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='83 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='05 J1024-0719 PX 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='08 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='14 J1125+7819 DM 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='65 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='13 J1453+1902 DM 1.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='15 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='23 J1455-3330 PX 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='01 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='22 J1600-3053 PX 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='96 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='31 J1614-2230 PX 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='69 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='03 J1640+2224 DM 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='14 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='23 J1643-1224 PX 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='45 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='08 J1713+0747 PX 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='11 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='02 J1738+0333 PX 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='47 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='11 J1741+1351 PX 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='36 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='62 J1744-1134 PX 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='42 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='01 J1747-4036 DM 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='5 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='7 J1832-0836 PX 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='1 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='57 J1853+1303 DM 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='08 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='42 J1903+0327 DM 6.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='49 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='3 J1909-3744 PX 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='17 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='02 J1910+1256 DM 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='35 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='47 J1911+1347 DM 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='08 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='42 J1918-0642 PX 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='17 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='15 J1923+2515 DM 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='63 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='33 J1944+0907 DM 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='8 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='36 J2010-1323 PX 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='45 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='71 J2017+0603 DM 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='57 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='31 J2033+1734 DM 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='99 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='4 J2043+1711 PX 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='39 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='12 J2145-0750 PX 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='64 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='02 J2214+3000 DM 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='54 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='31 J2229+2643 DM 1.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='43 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='29 J2234+0611 PX 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='19 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='15 J2234+0944 DM 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='2 J2302+4442 DM 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='18 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='24 J2317+1439 PX 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='62 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='21 – – – – Arzoumanian, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', Baker, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', Blumer, H.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' 2021a, PhRvL, 127, 251302, doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='1103/PhysRevLett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='127.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='251302 Arzoumanian, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', Baker, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', Brazier, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' 2021b, ApJ, 914, 121, doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='3847/1538-4357/abfcd3 Astropy Collaboration, Robitaille, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', Tollerud, E.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', Okuda, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', & Sudou, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' 2010, ApJL, 724, L166, doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='1088/2041-8205/724/2/L166 Islo, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', Simon, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', Burke-Spolaor, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', & Siemens, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' 2019, arXiv e-prints, arXiv:1906.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='11936.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' https://arxiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='org/abs/1906.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='11936 Jacoby, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' A.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', Hotan, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', Bailes, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', Ord, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', & Kulkarni, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' 2005, ApJL, 629, L113, doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='1086/449311 Jenet, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', Lommen, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', Larson, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', & Wen, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' 2004, ApJ, 606, 799, doi: 10.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='1086/383020 Jennings, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', Kaplan, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', Chatterjee, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', Cordes, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', & Deller, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' 2018, ApJ, 864, 26, doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='3847/1538-4357/aad084 Jones, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', McLaughlin, M.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', Lam, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' 2017, ApJ, 841, 125, doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='3847/1538-4357/aa73df Kass, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', & Raftery, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' 1995, Journal of the American Statistical Association, 90, 773, doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='1080/01621459.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='1995.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='10476572 Kelley, L.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', Blecha, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', Hernquist, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', Sesana, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', & Taylor, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' 2018, MNRAS, 477, 964, doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='1093/mnras/sty689 Kerr, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', Reardon, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', Hobbs, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' 2020, PASA, 37, e020, doi: 10.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='1017/pasa.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='11 Lam, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', Cordes, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', Chatterjee, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' 2016, ApJ, 821, 66, doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='3847/0004-637X/821/1/66 —.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' 2017, ApJ, 834, 35, doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='3847/1538-4357/834/1/35 Lazaridis, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', Wex, N.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', Jessner, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' 2009, MNRAS, 400, 805, doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='1111/j.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='1365-2966.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='2009.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='15481.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='x Lehto, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', & Valtonen, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' 1996, ApJ, 460, 207, doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='1086/176962 Lentati, L.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', Taylor, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', Mingarelli, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' 2015, MNRAS, 453, 2576, doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='1093/mnras/stv1538 Liu, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', & Vigeland, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' 2021, ApJ, 921, 178, doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='3847/1538-4357/ac1da9 Liu, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', Gezari, S.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', Ayers, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' 2019, ApJ, 884, 36, doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='3847/1538-4357/ab40cb L¨ohmer, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', Kramer, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', Driebe, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' 2004, A&A, 426, 631, doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='1051/0004-6361:20041031 Lommen, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', & Backer, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' 2001, ApJ, 562, 297, doi: 10.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='1086/323491 Lommen, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', Kipphorn, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', Nice, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' 2006, ApJ, 642, 1012, doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='1086/501067 20 The NANOGrav Collaboration Luo, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', Ransom, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', Demorest, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' 2019, PINT: High-precision pulsar timing analysis package, Astrophysics Source Code Library, record ascl:1902.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='007.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' http://ascl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='net/1902.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='007 Lutz, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', & Kelker, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' 1973, PASP, 85, 573, doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='1086/129506 McLaughlin, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' 2013, Classical and Quantum Gravity, 30, 224008, doi: 10.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='1088/0264-9381/30/22/224008 Middleton, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', Sesana, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', Chen, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' 2021, MNRAS, 502, L99, doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='1093/mnrasl/slab008 Mingarelli, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' 2017, ChiaraMingarelli/nanohertz GWs: First release!' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', v1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='0, Zenodo, doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='5281/zenodo.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='838712 Mingarelli, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' F.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', Lazio, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', Sesana, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' 2017, Nature Astronomy, 1, 886, doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content='1038/s41550-017-0299-6 Mundy, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', Conselice, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', Duncan, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/SNE2T4oBgHgl3EQfCAY8/content/2301.03608v1.pdf'} +page_content=', et al.' 
diff --git a/SdE2T4oBgHgl3EQfswgq/vector_store/index.pkl b/SdE2T4oBgHgl3EQfswgq/vector_store/index.pkl
new file mode 100644
index 0000000000000000000000000000000000000000..fb48e5bd64ed959dcd18922991c1cbf80ee42ca1
--- /dev/null
+++ b/SdE2T4oBgHgl3EQfswgq/vector_store/index.pkl
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2d79dc3aa501825934e3f7e4420f8505e7033d431de8230f171ca97bd9c82a9f
+size 351757
diff --git a/T9E5T4oBgHgl3EQfAg7p/content/2301.05380v1.pdf b/T9E5T4oBgHgl3EQfAg7p/content/2301.05380v1.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..db7165a95d15d9897b3674803d59f9bb0e8f885f
--- /dev/null
+++ b/T9E5T4oBgHgl3EQfAg7p/content/2301.05380v1.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:af1eb07115861e07bab90c0701b830110d36193619e7d3d4f8195bc2a6abc294
+size 269909
diff --git a/T9E5T4oBgHgl3EQfAg7p/vector_store/index.faiss b/T9E5T4oBgHgl3EQfAg7p/vector_store/index.faiss
new file mode 100644
index 0000000000000000000000000000000000000000..abae5034256f91da95d507a19951ffa46a92c1ba
--- /dev/null
+++ b/T9E5T4oBgHgl3EQfAg7p/vector_store/index.faiss
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8e692b6508352c899a992ed80aed1634fe77031eda13b8c650963e9bdb0e0dfe
+size 3932205
diff --git a/T9E5T4oBgHgl3EQfAg7p/vector_store/index.pkl b/T9E5T4oBgHgl3EQfAg7p/vector_store/index.pkl
new file mode 100644
index 0000000000000000000000000000000000000000..85f24468d26d45008f00b9d513f73bc13639bb6e
--- /dev/null
+++ b/T9E5T4oBgHgl3EQfAg7p/vector_store/index.pkl
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9e9f8616e52c630c0cb1fe382ace7a229545f61a329de40dbef7e2fab859db55
+size 135276
diff --git 
a/TtE0T4oBgHgl3EQf2QKO/content/tmp_files/2301.02710v1.pdf.txt b/TtE0T4oBgHgl3EQf2QKO/content/tmp_files/2301.02710v1.pdf.txt new file mode 100644 index 0000000000000000000000000000000000000000..ad3b9c269b31e4250e3cfe02eb113ac771ae936e --- /dev/null +++ b/TtE0T4oBgHgl3EQf2QKO/content/tmp_files/2301.02710v1.pdf.txt @@ -0,0 +1,1108 @@ +Astronomy & Astrophysics manuscript no. main_texlive2020 +©ESO 2023 +January 10, 2023 +Using photometric redshift data to improve the detection of +galactic filaments with the Bisous model +M. M. Muru1 and E. Tempel1, 2 +1 Tartu Observatory, University of Tartu, Observatooriumi 1, 61602 Tõravere, Estonia +e-mail: moorits.mihkel.muru@ut.ee +2 Estonian Academy of Sciences, Kohtu 6, 10130 Tallinn, Estonia +January 10, 2023 +ABSTRACT +Context. Filament finders are limited, among other things, by the abundance of spectroscopic redshift data. This limits the sky areas +and depth where we can detect the filamentary network. +Aims. As there are proportionally more photometric redshift data than spectroscopic, we aim to use data with photometric redshifts +to improve and expand the areas where we can detect the large-scale structure of the Universe. The Bisous model is a filament finder +that uses only the galaxy positions. We present a proof of concept, showing that the Bisous filament finder can improve the detected +filamentary network with photometric redshift data. +Methods. We created mock data from the MultiDark-Galaxies catalogue. Galaxies with spectroscopic redshifts were given exact +positions from the simulation. Galaxies with photometric redshifts were given uncertainties along one coordinate. The errors were +generated with different Gaussian distributions for different samples. We sample the photometric galaxy positions for each Bisous +run based on the uncertainty distribution. 
In some runs, the sampled positions are closer to the true positions and produce persistent +filaments; other runs produce noise, which is suppressed in the post-processing. +Results. There are three different types of samples: spectroscopic only, photometric only, and mixed samples of galaxies with pho- +tometric and spectroscopic redshifts. In photometric-only samples, the larger the uncertainty for photometric redshifts, the fewer +filaments are detected, and the filaments strongly align along the line of sight. Using mixed samples improves the number of filaments +detected and decreases the alignment bias of those filaments. The results are compared against the full spectroscopic sample. The +recall for photometric-only samples depends heavily on the size of uncertainty and dropped close to 20%; for mixed samples, the +recall stayed between 40% and 80%. The false discovery rate stayed below 5% in every sample tested in this work. Mixed samples +showed better results than corresponding photometric-only or spectroscopic-only samples for every uncertainty size and number of +spectroscopic galaxies in mixed samples. +Conclusions. Mixed samples of galaxies with photometric and spectroscopic redshifts help us to improve and extend the large-scale +structure further than possible with only spectroscopic samples. Although the uncertainty sizes tested in this work are smaller than +those for the available photometric data, upcoming surveys, such as J-PAS, will achieve sufficiently small uncertainties to be useful +for large-scale structure detection. +Key words. methods: data analysis – methods: statistical – galaxies: statistics – large-scale structure of the Universe +1. Introduction +The galaxy distribution in the observable Universe is not homo- +geneous but has a structure that is dictated by matter distribution +and gravitational forces. 
The large-scale structure of the Uni- +verse defines the environment galaxies reside in and has a wide +range of effects on the properties of those galaxies; for example, +the orientation of galaxies in relation to the filaments (Lee & +Pen 2000; Aragón-Calvo et al. 2007; Tempel & Libeskind 2013; +Ganeshaiah Veena et al. 2019; Kraljic et al. 2020), the satellite +distribution around larger galaxies (Knebe et al. 2004; Zentner +et al. 2005; Tempel et al. 2015; Wang et al. 2020), the elliptical- +to-spiral ratio, and the star formation rate (Alpaslan et al. 2015; +Kuutma et al. 2017). +Usually, the large-scale structure is divided into four types +of substructures (Libeskind et al. 2018). The densest and most +compact are galaxy clusters that host many gravitationally bound +galaxies and are called knots in the large-scale structure context. +The clusters are connected by chains of galaxies called filaments +that populate the intricate cosmic web. Between clusters and fila- +ments are large under-dense volumes named voids encapsulated +in sheets of filaments called walls or sheets. +There are many different approaches to detecting the differ- +ent large-scale structure elements. Usually, the methods use ei- +ther the relative positions of the galaxies themselves or different +scalar and tensor fields derived from galaxy positions and prop- +erties from observational surveys or simulation data. For exam- +ple, the NEXUS+ model (Cautun et al. 2013) uses the Hessian +of the shear tensor field. Models that use galaxy positions also +employ different approaches. For example, DisPerSE (Sousbie +2011) uses mass estimates and identifies the cosmic web us- +ing topological features of the mass distribution, and the Bisous +model (Tempel et al. 2016) uses the distribution of the galax- +ies and marked point process with interactions. Libeskind et al. +(2018) gives an overview and a brief comparison of 12 different +methods to detect the large-scale structure elements. 
+The accuracy of these finders depends on the completeness and accuracy of the data. The best results are obtained from simulations, where galaxy positions and properties are accurate and complete phase-space information is available. When using data from surveys, where the data are incomplete and have uncertainties, and phase-space information is derived from those observations, the resulting cosmic web maps deteriorate. Some methods are better suited for observational data, but all methods are limited by the completeness and accuracy of spectroscopic redshift data. The current largest spectroscopic survey is the Sloan Digital Sky Survey (SDSS, Eisenstein et al. 2011; Alam et al. 2015), which covers 7221 deg² of the sky. There are upcoming large spectroscopic surveys such as the 4-metre Multi-Object Spectroscopic Telescope (4MOST, de Jong et al. 2019) surveys and the Dark Energy Spectroscopic Instrument (DESI, Dey et al. 2019) Bright Galaxy Survey (BGS, Ruiz-Macias et al. 2021). Future surveys will cover larger areas but will still be limited by depth and completeness.
+Data with photometric redshifts are much more abundant than their spectroscopic counterpart, as photometric redshifts can be measured in bulk. For example, SDSS Data Release 12 has 100 times more photometric redshifts than spectroscopic ones (Beck et al. 2016). The upcoming J-PAS (Benitez et al. 2014) will observe the sky in 54 narrowband and three broadband filters and is designed to measure the redshifts of a large number of galaxies with a precision of σz ≲ 0.003(1 + z). This precision is comparable to low-resolution spectroscopic surveys and enables wider use of photometric redshift data for applications that require positions of galaxies, such as large-scale structure detection.
+Article number, page 1 of 10
+arXiv:2301.02710v1 [astro-ph.CO] 6 Jan 2023
+A&A proofs: manuscript no. main_texlive2020
+In this paper, we use the Bisous filament finder, which was developed to detect filaments from observational data. The Bisous model only needs the galaxy distribution and uses geometric methods and a marked point process with interactions to detect the cosmic web (Tempel et al. 2014, 2016). Bisous has been successfully used in many works, such as Nevalainen et al. (2015), Kuutma et al. (2017), Ganeshaiah Veena et al. (2019), and Tuominen et al. (2021). Kruuse et al. (2019) show a significant positive correlation between the distribution of photometric galaxies and the Bisous filaments, which suggests that the Bisous model could use photometric data to improve the detection of filaments.
+In this study, we present a proof of concept that data with photometric redshifts can be used to improve the detection of the filamentary network. For this, we take a simple approach and use data with significant uncertainties in position along one axis with the Bisous model. We generate mock data with photometric and spectroscopic redshifts from a simulation and use samples with only photometric redshifts, mixed samples of photometric and spectroscopic redshifts, and, for comparison and benchmarking, also samples with only spectroscopic redshifts. Using Bisous results from the full spectroscopic redshift data as a reference, we study the recall and false discovery rate of the Bisous runs on different samples. Further aspects of interest are whether or not using data with photometric redshifts produces biases in the filaments, and the maximum size of uncertainties that Bisous can handle while still improving the filamentary network.
+The structure of the paper is as follows. In Sect. 2, we describe the simulation we used to create the mock data and the samples used in this study. In Sect. 3, we describe the Bisous filament finder and our method to use data with photometric redshifts. In Sect.
4, we present the results from different samples. A discussion of the results, problems, possible improvements, and future applications is presented in Sect. 5, and conclusions are outlined in Sect. 6.
+2. Data
+2.1. Simulation data
+The analysis in this paper is based on simulated mock data. For the mock data set, we used the galaxy catalogue MultiDark-Galaxies, which is based on the MultiDark-Planck 2 (MDPL2, Klypin et al. 2016) simulation with the Sag semi-analytic model for galaxies described in Knebe et al. (2018). The MDPL2 simulation is based on a dark-matter-only flat Λ cold dark matter (ΛCDM) model with Planck cosmological parameters: Ωm = 0.307, ΩB = 0.048, ΩΛ = 0.693, σ8 = 0.823, ns = 0.96, and h = 0.678 (Planck Collaboration et al. 2016). The box size is 1000 h⁻¹ Mpc (1475.6 Mpc) with 3840³ particles and a mass resolution of mp = 1.51 × 10⁹ h⁻¹ M⊙ per dark matter particle.
+This work uses a smaller box of the whole simulation with a side of 250 Mpc, to have a sufficiently large sample size for statistical analysis but a sufficiently small volume to limit the calculation time of the Bisous filament finder (see Section 3) applied to the data. We used a magnitude limit of −20.0 in the SDSS r-band to have a galaxy number density similar to observations (for comparison, see Muru & Tempel 2021). This cut leaves us with 181 411 galaxies in a box with a side of 250 Mpc, and the galaxy number density is 0.0116 Mpc⁻³.
+2.2. Photometric redshift mock data
+As the distance measures from spectroscopic surveys are relatively precise, the spectroscopic redshift mock data are simply data with exact positions from the simulation, but in order to generate photometric redshift mock data we have to introduce photometric redshift uncertainties to them.
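The mock-error scheme used here is simply additive Gaussian noise along one coordinate. A minimal numpy sketch (function and variable names are ours, for illustration only, not from the paper's code):

```python
import numpy as np

rng = np.random.default_rng(42)

def add_photometric_errors(positions, sigma_mpc, los_axis=2):
    """Return a copy of galaxy positions (N x 3, in Mpc) with Gaussian
    errors of standard deviation sigma_mpc added along the line-of-sight
    axis; the two sky-plane axes are left exact."""
    noisy = positions.copy()
    noisy[:, los_axis] += rng.normal(0.0, sigma_mpc, size=len(positions))
    return noisy

# Example: a sigma = 5 Mpc sample in a 250 Mpc box (toy positions).
galaxies = rng.uniform(0.0, 250.0, size=(1000, 3))
sample_sigma5 = add_photometric_errors(galaxies, sigma_mpc=5.0)
```

Within one sample the standard deviation is constant, matching the samples described below; errors do not scale with distance.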
The simulation data positions form a cube, for which we take two axes to represent the sky plane, whose coordinates represent the sky coordinates and therefore have no extra uncertainty, while one axis represents the line of sight. We added a random error to the line-of-sight coordinate of each galaxy. For simplicity, all the coordinates are given in megaparsecs (Mpc), and the errors do not scale with distance.
+The random errors for the line-of-sight axis are generated with a Gaussian distribution (N(x, σ²)) with different standard deviation (σ) values for different samples. Within one sample, the standard deviation value is constant. For this study, we used six different standard deviation values of 1 Mpc, 2 Mpc, 3 Mpc, 5 Mpc, 7 Mpc, and 10 Mpc. We also created mixed samples of galaxies with spectroscopic and photometric distances in different proportions and with different photometric uncertainties. This is to emulate a realistic situation where one would start with an observational catalogue of spectroscopic targets and include photometric targets to improve the detection of the large-scale structure. Different mixed samples have 10 %, 20 %, 30 %, 40 %, and 50 % of the brightest galaxies as spectroscopic galaxies; the rest are photometric galaxies with uncertainties generated with σ = 5 Mpc or 10 Mpc. This means that a chosen percentage of the brightest galaxies have exact positions and the other galaxies have photometric uncertainties along the line-of-sight axis, that is, in distance.
+Figure 1 shows the comparison between a sample of galaxies with no uncertainties (all spectroscopic redshifts) and two samples of galaxies with photometric redshifts with uncertainties drawn from the distributions N(0 Mpc, (5 Mpc)²) and N(0 Mpc, (10 Mpc)²). The leftmost plot shows a visible web-like structure. In the middle plot, the structure is more diffuse
1 https://www.cosmosim.org/cms/simulations/mdpl2/
because of the added randomness along the z-axis, but some of the original structure is still somewhat visible. In the rightmost plot, the original structure is no longer visible; instead, the random errors added to the galaxy positions along the z-axis have produced spurious filamentary structures along that axis.
+2.3. Samples
+We use the following notation to name the samples. The fraction of galaxies in a sample with spectroscopic distance estimates is denoted with sXX, where XX is a number indicating the percentage of the whole sample. The spectroscopic galaxies are always the brightest galaxies in the sample. For example, s40 means the sample contains the 40% brightest galaxies of the whole sample, all of which have exact distances, and is missing the other 60% of the galaxies. The photometric samples are denoted with σYY, where YY is a number indicating the size of the uncertainties of the photometric distance estimates. For example, σ5 means the sample contains galaxies whose distance uncertainties are generated with a Gaussian distribution with a standard deviation of 5 Mpc. For mixed samples, σ10s30, for example, means that 30% of the brightest galaxies have exact distances (i.e. spectroscopic distance estimates), and the rest, that is 70% of the galaxies in the sample, have distances with uncertainties generated with a Gaussian distribution with a standard deviation of 10 Mpc. Table 1 lists the samples used in this work, the distributions used to generate the distance uncertainties, and the percentages of galaxies with spectroscopic distances.
+For brevity, hereafter the term spectroscopic galaxies/data is used as a synonym for galaxies/data with spectroscopic redshifts, and photometric galaxies/data is used as a synonym for galaxies/data with photometric redshifts.
In this work, the former +means data with no uncertainties, and the latter means data with +uncertainties along one axis. +3. The Bisous filament finder +We used the Bisous filament finder to detect the filaments from +the mock data. This finder is a stochastic tool to identify the +spines of the filaments using the spatial distribution of galax- +ies or haloes (Tempel et al. 2014, 2016). The Bisous has already +been applied to a variety of data and has been proven to give +similar results to other filament finders (Libeskind et al. 2018). +We give a short overview of the method below. +First, the Bisous randomly populates the volume with points +with parameters (called marked points), where each point repre- +sents the centre of a cylinder and the parameters give the size and +orientation of the cylinder. The cylinder’s width is about 1 Mpc, +which defines the width of the detected filaments. This width +is derived from the gradient of the galaxy density, where there +is a peak at approximately 0.5 Mpc from the filament’s spine. +Each configuration of cylinders in the volume has a defined en- +ergy, which depends on the position of the cylinders in relation +to the underlying data of haloes and the interconnectedness of +the filamentary network made up of the cylinders. Using the +Metropolis-Hastings algorithm and the simulated annealing pro- +cedure, the Bisous model optimises the energy function of the +system by suggesting random moves to add, remove, or change +the cylinders. +The data of the cylinder configurations are collected over +hundreds of thousands of cycles, each consisting of tens of thou- +sands of moves, which is the basis for visit map calculations. +Table 1. Photometric distance uncertainties and percentage of spec- +troscopic galaxies in each sample. The distance uncertainties column +shows the Gaussian distribution used to generate uncertainties for dis- +tances of photometric galaxies. 
The last column shows the percentage of the brightest galaxies with spectroscopic distances, i.e. exact distances. Samples that do not have galaxies with photometric distances are indicated with an em dash (—) in the second column.
+Name      Distance uncertainties    Spectroscopic distances
+σ0        —                         100%
+σ1        N(0 Mpc, (1 Mpc)²)        0%
+σ2        N(0 Mpc, (2 Mpc)²)        0%
+σ3        N(0 Mpc, (3 Mpc)²)        0%
+σ5        N(0 Mpc, (5 Mpc)²)        0%
+σ7        N(0 Mpc, (7 Mpc)²)        0%
+σ10       N(0 Mpc, (10 Mpc)²)       0%
+σ5s50     N(0 Mpc, (5 Mpc)²)        50%
+σ5s40     N(0 Mpc, (5 Mpc)²)        40%
+σ5s30     N(0 Mpc, (5 Mpc)²)        30%
+σ5s20     N(0 Mpc, (5 Mpc)²)        20%
+σ5s10     N(0 Mpc, (5 Mpc)²)        10%
+σ10s50    N(0 Mpc, (10 Mpc)²)       50%
+σ10s40    N(0 Mpc, (10 Mpc)²)       40%
+σ10s30    N(0 Mpc, (10 Mpc)²)       30%
+σ10s20    N(0 Mpc, (10 Mpc)²)       20%
+σ10s10    N(0 Mpc, (10 Mpc)²)       10%
+s50       —                         50%
+s40       —                         40%
+s30       —                         30%
+In general, one realisation of cylinders in the volume represents the detected filamentary network. As the model is stochastic, the configuration of cylinders changes from realisation to realisation. The combination of many realisations allows us to define the visit map that describes the detected filamentary network. Each coordinate has a defined visit map value, ranging from 0 to 1. The visit map contains information on how often a coordinate in space was 'visited' by a cylinder, which signifies how probable it is that a random realisation has a cylinder at that position.
+To decrease the effects of Poisson noise, the Bisous model is run many times, usually 50-100. This increases the signal-to-noise ratio, as a larger number of independent realisations are combined to obtain the resulting maps.
+Muru & Tempel (2021) show how the galaxy number density affects the detected filamentary network. These authors show that the Bisous method underestimates the extent of the filamentary network rather than giving false-positive results.
This means that the filament finder underestimates the filamentary structures at higher distances, where the galaxy number density drops. To improve the quality of the detected filamentary network, we need to increase the galaxy number density, for example, with photometric data.
+Using photometric data
+Filament finders usually need precise data, either scalar or tensor fields or galaxy positions, and therefore the less accurate photometric data are ignored. Here we present a method that benefits from photometric data by having a higher input data density and is able to mitigate the uncertainty of the distance measures.
+Fig. 1. Projection of galaxy distributions of samples σ0, σ5, σ10 in a slice with a thickness of 10 Mpc. Each dot represents a galaxy. The photometric uncertainties are parallel to the z-axis, which also defines the line of sight in this work. Only an area of 150 Mpc × 150 Mpc is shown for visual clarity. For information about samples, see Sect. 2.3.
+This subsection gives an overview of a simple method of how the Bisous filament finder can use photometric data.
+For each galaxy with a photometric redshift estimate and its probability distribution, we generate N_R new distance estimates drawn from the photometric redshift probability distribution. For the mock data in this paper, we used a Gaussian distribution to generate the uncertainties, and so we also use the same Gaussian distribution to generate different distance estimates for every galaxy. Every Bisous run uses a different distance estimate for a galaxy with a photometric distance measure.
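The resampling scheme just described can be sketched as follows: one input catalogue per Bisous run, where only photometric galaxies get a fresh line-of-sight draw. This is a minimal illustration with names of our own choosing, not the actual Bisous input handling:

```python
import numpy as np

rng = np.random.default_rng(0)
N_RUNS = 80  # N_R in the text

def realisations(positions, sigma_mpc, is_spectroscopic, n_runs=N_RUNS,
                 los_axis=2):
    """Yield one input catalogue per Bisous run: photometric galaxies get
    a fresh line-of-sight draw from their Gaussian error distribution,
    spectroscopic galaxies keep the same position in every run."""
    for _ in range(n_runs):
        cat = positions.copy()
        photo = ~is_spectroscopic
        cat[photo, los_axis] += rng.normal(0.0, sigma_mpc,
                                           size=photo.sum())
        yield cat

# Example: a sigma10s30-like sample (30% "brightest" spectroscopic).
pos = rng.uniform(0.0, 250.0, size=(100, 3))
spec = np.arange(100) < 30  # stand-in for a brightness-ranked mask
runs = list(realisations(pos, sigma_mpc=10.0, is_spectroscopic=spec))
```

In practice each galaxy could carry its own probability distribution, as noted below; the common σ here mirrors the mock samples of this paper.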
The number of Bisous runs should be large in order to minimise the Poisson noise in the results, but small enough to limit the computational resources used for the model. Usually, there are around 50 to 100 Bisous runs; for this work, we used N_R = 80, which has been shown to give good results in previous works using the Bisous model. For mixed data sets of spectroscopic and photometric targets, only the photometric ones have different distance estimates, whereas spectroscopic targets have the same distance value in every run.
+The novelty of this method is that the runs that have more accurate distance estimates for the photometric galaxies produce more persistent filaments. Galaxies with inaccurate distance estimates generate noise. The Bisous model suppresses the noise by combining a large number of realisations. The more inaccurate distance estimates there are, the more noise, which means the Bisous model is able to find fewer filaments. This means that uncertainties still have to be small to produce good results. The generation of new distance estimates is done separately for each galaxy. In practice, we can use a different probability distribution for each galaxy.
+4. Results
+This work uses three types of samples: spectroscopic, photometric, and mixed. The primary purpose of the spectroscopic-only samples is to provide reference values for the other two types of samples. Photometric-only samples show what can be done using only photometric redshift surveys, and mixed samples show what we can do by combining spectroscopic and photometric redshift surveys, for example in the areas of spectroscopic surveys where galaxies are sparse, or at higher distances where the detection is less complete.
+As mentioned in Section 1, filaments affect the evolution of galaxies, and knowing whether a galaxy is in a filament or not is useful when studying the galaxy properties.
Therefore, one of the +simplest metrics with which to compare the resulting filamentary +network is the fraction of galaxies situated inside filaments. Fig- +ure 2 shows the fraction of galaxies inside filaments for all the +samples used in this work. The sample σ0 is the most complete +sample (galaxy positions without uncertainties but with the same +magnitude limit as other samples), and the fraction of galaxies +in filaments for that sample could be considered as a reference +value for an ideal case. Looking at samples with only photomet- +ric redshift galaxies, we see the expected trend that the larger +the uncertainties for the distance, the fewer galaxies are in fila- +ments. This comes from the fact that the larger the uncertainties, +the fewer filaments the Bisous model is able to detect (cf. Fig. +3) as the structure in the galaxy distribution is less obvious, as +seen from Figure 1. Adding spectroscopic redshift galaxies to +create the mixed samples considerably increases the number of +galaxies in filaments. For example, 33% of the galaxies in σ5 +are in filaments, but when 20% of the brightest galaxies have +spectroscopic redshifts (σ5s20) this fraction rises to 45%, and +with 50% galaxies with spectroscopic redshifts (σ5s50) up to +59% of galaxies are in filaments. This shows that using spec- +troscopic galaxies together with photometric galaxies increas- +ingly improves the detected filamentary network as the number +of spectroscopic galaxies in a sample increases. On the other +hand, when comparing spectroscopic-only samples (e.g. s50 or +s40) to mixed samples (e.g. σ5s50 or σ5s40) where the galaxy +number density is increased with added photometric galaxies, we +can see that the mixed samples have more galaxies in filaments +when compared to spectroscopic samples. 
This indicates that adding galaxies with photometric redshifts to increase the number density of galaxies in the sample helps to improve the detected filamentary network, as it increases the fraction of galaxies in filaments and brings it closer to the reference sample (σ0).
+This metric can also be used to compare the results with observational data, but different filament finders and different filament definitions give results that are not directly comparable. For example, Tempel et al. (2014) found that when using the Bisous model on SDSS data, the fraction of galaxies in filaments is about 40%, but they use a stricter definition for whether a galaxy is considered to be in a filament or not. Also, results based on observational data are likely missing fainter galaxies that are present in simulations, which affects the fraction of galaxies in filaments.
+Fig. 2. Fraction of galaxies in filaments for different spectroscopic-only, photometric-only, and mixed samples. The samples are ordered so that the y-axis values of photometric-only and mixed samples are in increasing order. The spectroscopic-only samples are used as reference values to show the increase in the fraction of galaxies in filaments for mixed samples. The sample s30 is the smallest spectroscopic sample in this study because smaller samples had too few galaxies to be able to detect the filamentary network.
+It is a good idea to look at the spatial distribution of filaments produced by different samples to assess them visually. Figure 3 shows visit map slices from 12 different samples.
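The fraction-of-galaxies-in-filaments metric of Fig. 2 amounts to looking up each galaxy in the visit map and thresholding at 0.05. A toy sketch on a regular grid (the grid resolution and nearest-cell lookup are our assumptions, not details from the paper):

```python
import numpy as np

def fraction_in_filaments(visit_map, galaxies, box_size=250.0,
                          threshold=0.05):
    """Fraction of galaxies whose visit-map cell has a value >= threshold.
    visit_map is a 3D array covering a cubic box of side box_size (Mpc);
    galaxies is an (N, 3) array of positions in the same box."""
    n = visit_map.shape[0]
    idx = np.clip((galaxies / box_size * n).astype(int), 0, n - 1)
    values = visit_map[idx[:, 0], idx[:, 1], idx[:, 2]]
    return np.mean(values >= threshold)

# Toy example: "filaments" occupy the x < 125 Mpc half of the box,
# so roughly half of the uniformly placed galaxies land in them.
vmap = np.zeros((50, 50, 50))
vmap[:25] = 0.5
gals = np.random.default_rng(1).uniform(0.0, 250.0, size=(2000, 3))
frac = fraction_in_filaments(vmap, gals)  # close to 0.5
```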
The colour in- +dicates the likelihood of a coordinate being inside a filament. +The plot in the upper left corner is the sample we use as ground +truth, the full spectroscopic sample. The vertical axis is parallel +to the axis of photometric uncertainties and emulates the line of +sight. The photometric-only samples in the left column show that +photometric galaxies make it very difficult to detect filaments +perpendicular to the line of sight. Only stretched-out filaments +parallel to the line of sight remain. In the middle and rightmost +columns, mixed samples are used. Including the spectroscopic +galaxies helps detect filaments perpendicular to the line of sight. +But even in the mixed samples, when photometric galaxies dom- +inate, as in the lower rows, the filaments are preferentially par- +allel to the line of sight. This does not mean that filaments are +parallel to the line of sight, but that these are the filaments the +Bisous model is able to detect with the corresponding data. +Figure 3 shows that photometric galaxies, which have large +uncertainties along the line of sight, suppress the detection of fil- +aments perpendicular to the line of sight. To study this effect, we +describe the distribution of angles between filament spines and +the line of sight. These results are shown in Figure 4. Again, the +σ0 sample is the baseline for this work and shows a uniform dis- +tribution of angles. Using photometric-only samples skews the +distribution closer to 1, meaning the filaments are mostly paral- +lel to the line of sight, as is visible from the visit map projections +in Figure 3. Adding spectroscopic galaxies to the samples signif- +icantly reduces the bias of high cosine values in the distributions. +This is also visible in Figure 3, where more filaments are perpen- +dicular to the z-axis in mixed samples. 
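The alignment statistic behind Fig. 4 is the cosine of the angle between each filament spine and the line of sight. A sketch with isotropically oriented toy spines, which reproduce the flat |cos| distribution of the σ0 baseline (all names and the direction-vector representation are our assumptions):

```python
import numpy as np

def cos_to_los(directions, los=np.array([0.0, 0.0, 1.0])):
    """|cos| of the angle between filament spine direction vectors
    (N x 3, need not be normalised) and the line-of-sight unit vector."""
    d = directions / np.linalg.norm(directions, axis=1, keepdims=True)
    return np.abs(d @ los)

# Isotropically oriented spines give a uniform |cos| distribution on
# [0, 1]; a pile-up near 1 signals filaments parallel to the sight line.
rng = np.random.default_rng(2)
iso = rng.normal(size=(5000, 3))
cosines = cos_to_los(iso)
```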
When using the results obtained with a full spectroscopic sample σ0 as ground truth, we can compare other results to it and construct contingency tables called confusion matrices. We assign a binary label to each coordinate depending on the visit map value. If the visit map value is equal to or greater than 0.05, then that coordinate is classified as inside a filament. With each coordinate labelled, we can assign four kinds of results: true positive, true negative, false positive, and false negative. To describe the goodness of the results for sample s, we use two statistics: the recall

Recall_s = TP_s / P_σ0 , (1)

where TP_s is the number of true-positive values in the sample s, and P_σ0 the number of positive values in the reference sample σ0; and the false discovery rate

False discovery rate_s = FP_s / P_s , (2)

where FP_s is the number of false-positive values in the sample s, and P_s the number of all positive values in the sample s, which includes both the true-positive and false-positive values. Recall shows the fraction of filaments the model is able to find compared to the filaments present in results obtained with the sample σ0, which we want to maximise. The false discovery rate describes the fraction of false filaments in the results, which we want to minimise.

Figure 5 shows the recall and the false discovery rates for different samples. As expected, the recall decreases monotonically when photometric uncertainties increase. Using mixed samples improves the recall even when using small fractions of spectroscopic galaxies. For example, this improvement can be seen when comparing the recalls of σ5 (0.45) and σ5s10 (0.54) or σ10 (0.27) and σ10s10 (0.40); both mixed samples use only 10% of the spectroscopic galaxies. Using 50% of the spectroscopic galaxies boosts the recall above 0.73, which means almost three-quarters of the original filaments are detected.
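Equations (1) and (2) amount to a standard confusion-matrix computation over the binary visit-map labels. A minimal Python sketch (the function name, array names, and toy values are assumptions; the 0.05 threshold is the one used in the text):

```python
import numpy as np

def recall_and_fdr(visit_map_s, visit_map_ref, limit=0.05):
    """Recall and false discovery rate of sample s against the reference sigma_0.

    A coordinate counts as 'inside a filament' when its visit-map value >= limit.
    """
    pred = visit_map_s >= limit        # positives in sample s (P_s)
    truth = visit_map_ref >= limit     # positives in the reference (P_sigma0)
    tp = np.sum(pred & truth)          # true positives
    fp = np.sum(pred & ~truth)         # false positives
    recall = tp / np.sum(truth)        # Eq. (1): TP_s / P_sigma0
    fdr = fp / np.sum(pred)            # Eq. (2): FP_s / P_s
    return recall, fdr

ref = np.array([0.9, 0.2, 0.0, 0.5])   # toy visit-map values, reference run
s   = np.array([0.8, 0.0, 0.1, 0.6])   # toy visit-map values, sample s
print(recall_and_fdr(s, ref))          # -> (0.666..., 0.333...)
```

Note that the denominator of the recall comes from the reference sample, while the denominator of the false discovery rate comes from the sample itself, so the two statistics are not complements of each other.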
As seen in Figure 5, the false discovery rate is below 0.05 for every sample. This shows that the Bisous model produces very little noise and few false-positive values even with photometric redshift data.

In addition, we ran Bisous on mock data without using the method described in Sect. 3, using the samples σ5, σ5s30, σ10, and σ10s30 as they are. This enables us to compare the Bisous model results obtained with the method in Sect. 3 with results obtained with photometric data without doing anything special to the photometric galaxies, ignoring photometric redshift errors. Table 2 lists the different statistics introduced in this section calculated for these Bisous runs. These results are calculated as a reference and motivation for using the method described in Sect. 3. In comparison to the samples introduced in Sect. 2.3, these results show significantly worse recall values and fewer galaxies in filaments. In some cases, the false discovery rate can have better values, but this comes from the fact that when detecting fewer filaments, there are also fewer false-positive results and therefore a lower false discovery rate.

These results qualitatively confirm the results from Kruuse et al. (2019), showing that galaxies with photometric redshifts are clustered around the Bisous filaments. We show that the Bisous model can use photometric redshift data to detect the filamentary network without producing significant amounts of false-positive results. However, when the uncertainties in the distance measure increase, the model is able to recall fewer filaments. For example, with sample σ10, the recall is only 0.27, and mostly filaments parallel to the line of sight are detected. Including spectroscopic galaxies in the samples considerably improves the recall and helps to mitigate the issue with filament alignment in the detected filamentary network. Results in this work also qualitatively follow the results of Muru & Tempel (2021), who show how the Bisous filaments depend on the number density of the galaxies in the input data. In this work, the mixed samples show a similar trend, and the photometric galaxies boost the number density of galaxies, although less than the same number of spectroscopic galaxies would.

A&A proofs: manuscript no. main_texlive2020

Fig. 3. Projections of maximum visit map values in slices obtained from the Bisous model using different samples. Only a smaller 100 Mpc × 100 Mpc area is shown for visual clarity. The thickness of the slice is 10 Mpc. Usually, a visit map limit of 0.05 is used to classify whether or not a coordinate is inside a filament. Therefore, everything besides the blue area is likely part of the filamentary network. The vertical axis (z) is parallel to the axis of the photometric uncertainties, i.e. it emulates the line of sight. The leftmost column shows samples with only photometric galaxies, the middle column shows mixed samples with medium uncertainties (σ = 5 Mpc) for photometric galaxies, and the rightmost column shows mixed samples with the larger uncertainties (σ = 10 Mpc). Different rows in the leftmost column have different photometric uncertainties, and the middle and the rightmost column have different fractions of the brightest galaxies as spectroscopic galaxies. See Table 1 and Sect. 2.3 for the sample naming convention used here.

Fig. 4. Distributions of the cosine of the angle between filament spines (fil) and the line of sight (los). For each sample, there are two plots. The left one is a bar plot of the quartiles of the distribution, where the black crossbar indicates the second quartile (the median). The right plot is a violin plot that shows the density curve of the distribution. The horizontal grey line indicates the median value for a uniform distribution. The closer the distribution gets to value 1, the more filaments are parallel to the line of sight (z-axis in other plots).

Fig. 5. Recall and false discovery rates for photometric and mixed samples. All the samples have the same total number of galaxies. The definitions for recall and false discovery rate are given in Sect. 4. The false discovery rate uses the secondary vertical axis on the right side of the plot. Including spectroscopic galaxies improves recall but also increases false discovery rates. The false discovery rates are below 5% for every sample.

5. Discussion

Previous works applied the Bisous model to SDSS, which is a spectroscopic survey, and compiled a catalogue of filaments (Tempel et al. 2014). This work extends the applicability of the model and demonstrates the effects of using data with photometric redshifts.
The major benefit of photometric redshift data is its comparatively high availability: measurements are made in bulk, not for single galaxies as in spectroscopic measurements. The problem with photometric redshift data is the significantly larger uncertainties when calculating redshifts, which results in larger uncertainties in distance measurements. This is problematic for filament finders.

Table 2. Comparison of Bisous model results with and without using the method described in Sect. 3. PB in front of the sample name indicates that the results are obtained with the plain Bisous model. Recall and false discovery rate are defined by Equations 1 and 2.

Sample      gal in fil (a)   Recall   FDR (b)
PB σ5       0.217            0.296    0.037
σ5          0.329            0.452    0.029
PB σ5s30    0.398            0.541    0.039
σ5s30       0.501            0.679    0.042
PB σ10      0.127            0.172    0.039
σ10         0.200            0.276    0.025
PB σ10s30   0.348            0.473    0.039
σ10s30      0.441            0.598    0.042

Notes. (a) Fraction of galaxies in filaments; (b) false discovery rate.

To simulate the large uncertainties in distance measurements, we used simulation data to create data with added uncertainties. For simplicity, all the uncertainties are generated with the same Gaussian distribution for each galaxy. In reality, the uncertainty depends on many properties, and one of the most relevant is the magnitude of the galaxy. But the dependence of the uncertainty on the magnitude is different for different surveys. Also, using a simulation removes any redshift dependence in the data. In observations, there are two major redshift-dependent effects. Firstly, the number density of galaxies decreases with redshift as we are able to detect fewer galaxies on the fainter end, and this affects the ability of the Bisous model to detect filaments, as shown in Muru & Tempel (2021).
Secondly, the precision of the photometric redshift values for galaxies depends on their actual distance. These dependencies should be studied in greater depth when concentrating on specific surveys and are outside the scope of this study.

The method we use to overcome this problem of large uncertainties is straightforward. Essentially, we are just guessing the true positions. Each galaxy gets 80 different random positions based on the uncertainties of the redshift estimate. The theory behind this approach is that while random inaccurate positions produce noise, the positions close to the true position of the galaxy produce a strong enough signal to be above the noise level. Despite its simplicity, the method shows considerable improvements over results obtained without it (see Table 2).

Although this simple method improves the results, the problems introduced by using the photometric redshift data are still prevalent. Using photometric-only redshift data (σXX samples) results in part of the signal being lost and an incomplete filamentary network. This is visible from the recall values when compared against σ0 (Fig. 5), the fraction of galaxies in filaments (Fig. 2), and the projections of visit map values (Fig. 3). Another problem is that with larger uncertainties for distances, the filaments perpendicular to the line of sight are almost impossible to detect. This creates a strong bias for filaments parallel to the line of sight (cf. Fig. 4). It is important to note that the false discovery rate (cf. Fig. 5) decreases when data with larger uncertainties are used. This is because galaxies with larger uncertainties produce less meaningful signals, and therefore there will be fewer filaments in the results, which also means fewer false-positive filaments. Low false discovery rate values are good because they demonstrate the robustness of the results.
The model would rather output fewer filaments than produce false-positive ones.

All of the aforementioned problems are reduced by using mixed samples of spectroscopic and photometric redshift data instead of only photometric data, as shown in Section 4. Figure 2 also shows that using mixed samples to boost the galaxy number density is better than only using the spectroscopic redshift galaxies. This could be useful, for example, in the more distant areas of spectroscopic surveys, where galaxies with spectroscopic redshifts are too sparse to use for the detection of the large-scale structure. Using mixed data could help us extend the area where we can reliably detect the filaments.

Still, this method requires photometric redshift data with relatively small uncertainties, which are not usually achieved by photometric surveys. Unfortunately, all current photometric surveys have unusably large uncertainties for the redshifts, but there will be some new surveys with suitable accuracy in the near future. One prominent candidate for photometric redshift data is the upcoming Javalambre Physics of the Accelerating Universe Astrophysical Survey (J-PAS; Benitez et al. 2014; Bonoli et al. 2020; Laur et al. 2022). J-PAS is designed to measure the positions and redshifts of 14 million galaxies. The estimated precision of the photometric redshifts for galaxies in the redshift range 0.1 < z < 1.2 is σz ≲ 0.003(1 + z). For example, when using SDSS, the spectroscopic redshift galaxy number density is high enough to detect some filaments up to a distance of 400 Mpc, which is approximately z = 0.1 (Tempel et al. 2014; Muru & Tempel 2021). For this distance, the precision of the redshifts is σz ≲ 0.003 × 1.1 ≈ 0.0033, which corresponds to a distance uncertainty of ≈ 14 Mpc. This is the same order of magnitude as the σ10 samples used in this work. We expect the uncertainties to be smaller for brighter galaxies.
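The quoted ≈ 14 Mpc follows from the low-redshift conversion Δd ≈ c·σz/H0. A quick arithmetic check; the H0 value is an assumption, as no value is specified at this point in the text:

```python
c = 299792.458        # speed of light, km/s
H0 = 70.0             # assumed Hubble constant, km/s/Mpc
z = 0.1               # roughly a 400 Mpc distance in SDSS
sigma_z = 0.003 * (1 + z)      # J-PAS photometric redshift precision estimate
sigma_d = c * sigma_z / H0     # corresponding distance uncertainty, Mpc
print(round(sigma_d, 1))       # -> 14.1
```

which is indeed the same order of magnitude as the σ10 samples.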
We aim to apply the Bisous model to J-PAS data when they are released and compile a catalogue of filaments. To obtain the mixed data of photometric and spectroscopic redshift galaxies, we plan to use the Sloan Digital Sky Survey (SDSS; Alam et al. 2015) and the Dark Energy Spectroscopic Instrument (DESI) Bright Galaxy Survey (BGS; Dey et al. 2019; Ruiz-Macias et al. 2021).

Although this study is based on the Bisous filament finder, it is likely that the general tendencies when using data with photometric redshifts are similar for other filament finders. Using photometric data will decrease the effectiveness of the filament finder, and filaments parallel to the line of sight are more likely to be detected. It is uncertain whether using mixed data of photometric and spectroscopic redshifts improves the results compared to using only spectroscopic data when using other filament finders. Also, the false discovery rates might have different values for other filament finders. The advantage of the Bisous model is that it models the underlying filamentary network, and galaxies are only used to constrain the model properties. Hence, in the Bisous filament finder, it is straightforward to combine spectroscopic and photometric samples. While fixing the scale of the filaments in the Bisous model, we are free from smoothing the galaxy distribution, and the Bisous model is able to detect filaments with a specified scale regardless of the galaxy density.

As mentioned in Sect. 1, one common application for filaments is to study the alignment of galaxies and their host filaments. This means that obtaining the accurate filament orientation from the data is instrumental. In future studies, we aim to improve the Bisous model to reduce the alignment bias of filaments when using data with photometric redshifts.

6. Conclusions

Filament finders are limited, among other things, by the abundance of spectroscopic redshift data. This limits the sky areas and depth where we can detect the filamentary network. As photometric redshift data can be obtained on shorter timescales, because many objects can be observed simultaneously, there are many more photometric redshift data available. We present a method that enables the Bisous filament finder to use data with considerable uncertainties in one coordinate, for example photometric redshift data. We use MultiDark-Galaxies, a dark matter-only simulation with semi-analytical galaxies, to generate the data for analysis. Spectroscopic redshift data are simply the exact positions of galaxies from the simulation, and photometric redshift galaxies have an added random error drawn from a Gaussian distribution along one axis, which represents the line of sight. This work uses three types of samples. Firstly, spectroscopic samples with different magnitude cuts provide reference values for the other samples. Secondly, photometric samples use different standard deviations from σ = 1 Mpc to 10 Mpc to generate distance errors of different sizes. Thirdly, mixed samples, where in different samples 10% to 50% of the brightest galaxies have spectroscopic redshifts, that is, exact distance measurements, and the other galaxies have distances with uncertainties. An overview of the samples used in this work is given in Sect. 2.3.

The Bisous model uses a marked point process to fit cylinder-like objects to the underlying galaxy distribution and optimises the distribution of objects based on the galaxy distribution and the interconnectedness of the cylinder network. To use the photometric redshift data with uncertainties along one axis, we modified the coordinates along that axis.
Knowing the distribution of the uncertainties for the distance of photometric redshift galaxies, we use the same distribution to add a random value to the distance of a galaxy. For each Bisous run, we generated a new galaxy distribution, where each photometric redshift galaxy has a different random value added to its distance based on the uncertainty distribution. Each Bisous model uses 80 runs. The theory underpinning this approach is that those runs where some galaxies have random distance values closer to their true distances produce strong signals, while others with scrambled galaxy distributions just produce noise, which is removed in the post-processing.

Using photometric-only samples shows that when uncertainties are very small, a Gaussian distribution with σ = 1 Mpc or 2 Mpc, the Bisous model can find most of the same filaments as in the full spectroscopic sample σ0. Unfortunately, these uncertainties are unachievable for modern or even future planned photometric surveys. With larger uncertainties in the photometric-only samples, the ability to recall the filaments in the reference sample drops below 50%, and the filaments align with the line of sight. Using mixed samples of photometric and spectroscopic data helps to reduce these problems. For example, consider a comparison between three samples: a spectroscopic-only sample s30, which uses only 30% of the brightest galaxies; a photometric-only sample σ10, which uses data with errors generated with σ = 10 Mpc; and a mixed sample σ10s30, which uses the same standard deviation (σ = 10 Mpc) for errors and the same amount of spectroscopic galaxies (30% of the whole sample). Using the spectroscopic data, which contain only 30% of the brightest galaxies, results in 36% of galaxies being inside filaments.
Using only photometric data, which contain all the galaxies but have uncertainties in one coordinate, we find that 20% of galaxies are inside filaments. Finally, using the mixed data, which contain more data than the spectroscopic sample and, in contrast to the photometric sample, also incorporate 30% of the spectroscopic data, the Bisous model finds that 40% of the galaxies are inside filaments. The reference value for these galaxies and the volume comes from the full spectroscopic sample, which gives a value of 71% of galaxies in filaments. Adding the spectroscopic galaxies from the sample s30 to the photometric sample σ10 increases the recall of filaments from 27% to 60%. This shows that using mixed data is beneficial when spectroscopic data are too sparse and photometric data have excessively large uncertainties to be used without spectroscopic data.

J-PAS is an upcoming photometric survey that is designed to produce data with sufficiently small uncertainties to be applicable to a method such as the one in this article. The expected precision of the redshifts is σz ≲ 0.003(1 + z) (Benitez et al. 2014). For a distance of about z = 0.1, this is σz ≲ 0.003 × 1.1 ≈ 0.0033, which corresponds to ≈ 14 Mpc, close to the values used in this work. The next step is to apply the Bisous model to J-PAS data once available.

Acknowledgements. We thank the referee for their comments and suggested improvements. Part of this work was supported by institutional research funding PRG1006 of the Estonian Ministry of Education and Research. We acknowledge the support by the Centre of Excellence "Dark Side of the Universe" (TK133). Part of this work was carried out in the High-Performance Computing Center of the University of Tartu (University of Tartu 2018). The CosmoSim database used in this paper is a service by the Leibniz-Institute for Astrophysics Potsdam (AIP).
The MultiDark database was developed in cooperation with the Spanish MultiDark Consolider Project CSD2009-00064. The authors gratefully acknowledge the Gauss Centre for Supercomputing e.V. (www.gauss-centre.eu) and the Partnership for Advanced Supercomputing in Europe (PRACE, www.prace-ri.eu) for funding the MultiDark simulation project by providing computing time on the GCS Supercomputer SuperMUC at Leibniz Supercomputing Centre (LRZ, www.lrz.de). The data exploration was done using TOPCAT (Taylor 2005), and analysis and plotting were done using Julia Language (Bezanson et al. 2017) and the following packages: DrWatson.jl (Datseris et al. 2020), Pluto.jl (van der Plas et al. 2022), Makie.jl (Danisch & Krumbiegel 2021), DataFrames.jl (White et al. 2020), Distributions.jl (Besançon et al. 2021), and ColorSchemes.jl, which uses Scientific colour maps (Crameri 2021).

References
Alam, S., Albareti, F. D., Allende Prieto, C., et al. 2015, ApJS, 219, 12
Alpaslan, M., Driver, S., Robotham, A. S. G., et al. 2015, MNRAS, 451, 3249
Aragón-Calvo, M. A., van de Weygaert, R., Jones, B. J. T., & van der Hulst, J. M. 2007, ApJ, 655, L5
Beck, R., Dobos, L., Budavári, T., Szalay, A. S., & Csabai, I. 2016, MNRAS, 460, 1371
Benitez, N., Dupke, R., Moles, M., et al. 2014, arXiv e-prints, arXiv:1403.5237
Besançon, M., Papamarkou, T., Anthoff, D., et al. 2021, Journal of Statistical Software, 98, 1
Bezanson, J., Edelman, A., Karpinski, S., & Shah, V. B. 2017, SIAM Review, 59, 65
Bonoli, S., Marín-Franch, A., Varela, J., et al. 2020, arXiv e-prints, arXiv:2007.01910
Cautun, M., van de Weygaert, R., & Jones, B. J. T. 2013, MNRAS, 429, 1286
Crameri, F. 2021, Scientific colour maps
Danisch, S. & Krumbiegel, J. 2021, Journal of Open Source Software, 6, 3349
Datseris, G., Isensee, J., Pech, S., & Gál, T.
2020, Journal of Open Source Software, 5, 2673
de Jong, R. S., Agertz, O., Berbel, A. A., et al. 2019, The Messenger, 175, 3
Dey, A., Schlegel, D. J., Lang, D., et al. 2019, AJ, 157, 168
Eisenstein, D. J., Weinberg, D. H., Agol, E., et al. 2011, AJ, 142, 72
Ganeshaiah Veena, P., Cautun, M., Tempel, E., van de Weygaert, R., & Frenk, C. S. 2019, MNRAS, 487, 1607
Klypin, A., Yepes, G., Gottlöber, S., Prada, F., & Heß, S. 2016, MNRAS, 457, 4340
Knebe, A., Gill, S. P. D., Gibson, B. K., et al. 2004, ApJ, 603, 7
Knebe, A., Stoppacher, D., Prada, F., et al. 2018, MNRAS, 474, 5206
Kraljic, K., Davé, R., & Pichon, C. 2020, MNRAS, 493, 362
Kruuse, M., Tempel, E., Kipper, R., & Stoica, R. S. 2019, A&A, 625, A130
Kuutma, T., Tamm, A., & Tempel, E. 2017, A&A, 600, L6
Laur, J., Tempel, E., Tamm, A., et al. 2022, A&A, 668, A8
Lee, J. & Pen, U.-L. 2000, ApJ, 532, L5
Libeskind, N. I., van de Weygaert, R., Cautun, M., et al. 2018, MNRAS, 473, 1195
Muru, M. M. & Tempel, E. 2021, A&A, 649, A108
Nevalainen, J., Tempel, E., Liivamägi, L. J., et al. 2015, A&A, 583, A142
Planck Collaboration, Adam, R., Ade, P. A. R., et al. 2016, A&A, 594, A1
Ruiz-Macias, O., Zarrouk, P., Cole, S., et al. 2021, MNRAS, 502, 4328
Sousbie, T. 2011, MNRAS, 414, 350
Taylor, M. B. 2005, in Astronomical Society of the Pacific Conference Series, Vol. 347, Astronomical Data Analysis Software and Systems XIV, ed. P. Shopbell, M. Britton, & R. Ebert, 29
Tempel, E., Guo, Q., Kipper, R., & Libeskind, N. I. 2015, MNRAS, 450, 2727
Tempel, E. & Libeskind, N. I. 2013, ApJ, 775, L42
Tempel, E., Stoica, R. S., Kipper, R., & Saar, E. 2016, Astronomy and Computing, 16, 17
Tempel, E., Stoica, R. S., Martínez, V. J., et al. 2014, MNRAS, 438, 3465
Tuominen, T., Nevalainen, J., Tempel, E., et al. 2021, A&A, 646, A156
University of Tartu. 2018, UT Rocket
van der Plas, F., Dral, M., Berg, P., et al.
2022, fonsp/Pluto.jl: v0.19.11
Wang, P., Libeskind, N. I., Tempel, E., et al. 2020, ApJ, 900, 129
White, J. M., Kamiński, B., powerdistribution, et al. 2020, JuliaData/DataFrames.jl: v0.22.1
Zentner, A. R., Kravtsov, A. V., Gnedin, O. Y., & Klypin, A. A. 2005, ApJ, 629, 219
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content='ee 2 Estonian Academy of Sciences, Kohtu 6, 10130 Tallinn, Estonia January 10, 2023 ABSTRACT Context.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' Filament finders are limited, among other things, by the abundance of spectroscopic redshift data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' This limits the sky areas and depth where we can detect the filamentary network.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' Aims.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' As there are proportionally more photometric redshift data than spectroscopic, we aim to use data with photometric redshifts to improve and expand the areas where we can detect the large-scale structure of the Universe.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' The Bisous model is a filament finder that uses only the galaxy positions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' We present a proof of concept, showing that the Bisous filament finder can improve the detected filamentary network with photometric redshift data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' Methods.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' We created mock data from the MultiDark-Galaxies catalogue.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' Galaxies with spectroscopic redshifts were given exact positions from the simulation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' Galaxies with photometric redshifts were given uncertainties along one coordinate.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' The errors were generated with different Gaussian distributions for different samples.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' We sample the photometric galaxy positions for each Bisous run based on the uncertainty distribution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' In some runs, the sampled positions are closer to the true positions and produce persistent filaments;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' other runs produce noise, which is suppressed in the post-processing.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' Results.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' There are three different types of samples: spectroscopic only, photometric only, and mixed samples of galaxies with pho- tometric and spectroscopic redshifts.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' In photometric-only samples, the larger the uncertainty for photometric redshifts, the fewer filaments are detected, and the filaments strongly align along the line of sight.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' Using mixed samples improves the number of filaments detected and decreases the alignment bias of those filaments.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' The results are compared against the full spectroscopic sample.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' The recall for photometric-only samples depends heavily on the size of uncertainty and dropped close to 20%;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' for mixed samples, the recall stayed between 40% and 80%.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' The false discovery rate stayed below 5% in every sample tested in this work.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' Mixed samples showed better results than corresponding photometric-only or spectroscopic-only samples for every uncertainty size and number of spectroscopic galaxies in mixed samples.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' Conclusions.' 
Mixed samples of galaxies with photometric and spectroscopic redshifts help us to improve and extend the large-scale structure further than possible with only spectroscopic samples. Although the uncertainty sizes tested in this work are smaller than those for the available photometric data, upcoming surveys, such as J-PAS, will achieve sufficiently small uncertainties to be useful for large-scale structure detection.

Key words. methods: data analysis – methods: statistical – galaxies: statistics – large-scale structure of the Universe

1. Introduction

The galaxy distribution in the observable Universe is not homogeneous but has a structure that is dictated by matter distribution and gravitational forces.
The large-scale structure of the Universe defines the environment galaxies reside in and has a wide range of effects on the properties of those galaxies; for example, the orientation of galaxies in relation to the filaments (Lee & Pen 2000; Aragón-Calvo et al. 2007; Tempel & Libeskind 2013; Ganeshaiah Veena et al. 2019; Kraljic et al. 2020), the satellite distribution around larger galaxies (Knebe et al. 2004; Zentner et al.
2005; Tempel et al. 2015; Wang et al. 2020), the elliptical-to-spiral ratio, and the star formation rate (Alpaslan et al. 2015; Kuutma et al. 2017). Usually, the large-scale structure is divided into four types of substructures (Libeskind et al. 2018). The densest and most compact are galaxy clusters, which host many gravitationally bound galaxies and are called knots in the large-scale structure context.
The clusters are connected by chains of galaxies called filaments that populate the intricate cosmic web. Between clusters and filaments are large under-dense volumes named voids, encapsulated in sheets of filaments called walls or sheets. There are many different approaches to detecting the different large-scale structure elements. Usually, the methods use either the relative positions of the galaxies themselves or different scalar and tensor fields derived from galaxy positions and properties from observational surveys or simulation data. For example, the NEXUS+ model (Cautun et al. 2013) uses the Hessian of the shear tensor field. Models that use galaxy positions also employ different approaches. For example, DisPerSE (Sousbie 2011) uses mass estimates and identifies the cosmic web using topological features of the mass distribution, and the Bisous model (Tempel et al.
2016) uses the distribution of the galaxies and a marked point process with interactions. Libeskind et al. (2018) give an overview and a brief comparison of 12 different methods to detect the large-scale structure elements. The accuracy of these finders depends on the completeness and accuracy of the data. The best results are obtained from simulations, where galaxy positions and properties are accurate and complete phase-space information is available. When using data from surveys, where the data are incomplete and have uncertainties and phase-space information is derived from those observations, the resulting cosmic web maps deteriorate.

Article number, page 1 of 10
arXiv:2301.02710v1 [astro-ph.CO] 6 Jan 2023
A&A proofs: manuscript no. main_texlive2020
Some methods are better suited for observational data, but all methods are limited by the completeness and accuracy of spectroscopic redshift data. The current largest spectroscopic survey is the Sloan Digital Sky Survey (SDSS, Eisenstein et al. 2011; Alam et al. 2015), which covers 7221 deg² of the sky. There are upcoming large spectroscopic surveys such as the 4-metre Multi-Object Spectroscopic Telescope (4MOST, de Jong et al. 2019) surveys and the Dark Energy Spectroscopic Instrument (DESI, Dey et al. 2019) Bright Galaxy Survey (BGS, Ruiz-Macias et al. 2021).
Future surveys will cover larger areas but will still be limited by depth and completeness. Data with photometric redshifts are much more abundant than the spectroscopic counterpart, as redshifts can be measured in bulk. For example, SDSS Data Release 12 has 100 times more photometric redshifts than spectroscopic ones (Beck et al. 2016). The upcoming J-PAS (Benitez et al. 2014) will observe the sky in 54 narrowband and three broadband filters and is designed to measure the redshifts for a large number of galaxies with a precision of σz ≲ 0.003(1 + z). This precision is comparable to low-resolution spectroscopic surveys and enables wider use of photometric redshift data for applications that require positions of galaxies, such as large-scale structure detection.
In this paper, we use the Bisous filament finder, which was developed to detect filaments from observational data. The Bisous model only needs the galaxy distribution and uses geometric methods and a marked point process with interactions to detect the cosmic web (Tempel et al. 2014, 2016). Bisous has been successfully used in many works, such as Nevalainen et al. (2015), Kuutma et al. (2017), Ganeshaiah Veena et al. (2019), and Tuominen et al. (2021). Kruuse et al.
(2019) show a significant positive correlation between the distribution of photometric galaxies and the Bisous filaments, which suggests that the Bisous model could use photometric data to improve the detection of filaments. In this study, we present a proof of concept that data with photometric redshifts can be used to improve the detection of the filamentary network. For this, we take a simple approach and use data with significant uncertainties in position along one axis with the Bisous model. We generate mock data with photometric and spectroscopic redshifts from a simulation and use samples with only photometric redshifts, mixed samples of photometric and spectroscopic redshifts, and, for comparison and benchmarking, also samples with only spectroscopic redshifts. Using the Bisous results from the full spectroscopic redshift data as a reference, we study the recall and false discovery rate of the Bisous runs on the different samples. Further aspects of interest are whether or not using data with photometric redshifts produces biases in the filaments, and the maximum size of uncertainties that Bisous can handle while still improving the filamentary network.
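Recall and false discovery rate, the two quantities studied in this work, are the standard confusion-matrix ratios. A minimal sketch of how they are computed; the counts are purely illustrative, and how a detected filament is matched to a reference filament is specified by the paper's own procedure, not by this snippet:

```python
def recall(true_positives, false_negatives):
    """Fraction of reference filaments (full spectroscopic run) recovered."""
    return true_positives / (true_positives + false_negatives)

def false_discovery_rate(true_positives, false_positives):
    """Fraction of detected filaments absent from the reference."""
    return false_positives / (true_positives + false_positives)

# Hypothetical counts for one sample.
r = recall(80, 20)                  # 0.8, i.e. 80% of reference filaments found
fdr = false_discovery_rate(80, 4)   # about 0.048, i.e. below the 5% reported
```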
The structure of the paper is as follows. In Sect. 2, we describe the simulation we used to create the mock data and samples in this study. In Sect. 3, we describe the Bisous filament finder and our method to use data with photometric redshifts. In Sect. 4, we present the results from the different samples. A discussion of the results, problems, possible improvements, and future applications is presented in Sect. 5, and conclusions are outlined in Sect. 6.

2. Data

2.1. Simulation data

The analysis in this paper is based on simulated mock data. For the mock data set, we used the galaxy catalogue MultiDark-Galaxies, which is based on the MultiDark-Planck 2 (MDPL2¹, Klypin et al. 2016) simulation with the Sag semi-analytic model for galaxies described in Knebe et al. (2018). The MDPL2 simulation is based on a dark-matter-only flat Λ cold dark matter (ΛCDM) model with Planck cosmological parameters: Ωm = 0.307, ΩB = 0.048, ΩΛ = 0.693, σ8 = 0.823, ns = 0.96, and h = 0.678 (Planck Collaboration et al. 2016). The box size is 1000 h⁻¹ Mpc (1475.6 Mpc) with 3840³ particles and a mass resolution of mp = 1.51 × 10⁹ h⁻¹ M⊙ per dark matter particle. This work uses a smaller box of the whole simulation, with a side of 250 Mpc, to have a sufficiently large sample size for statistical analysis but a sufficiently small volume to limit the calculation time of the Bisous filament finder (see Section 3) applied to the data. We used a magnitude limit of −20.0 in the SDSS r-band to have a galaxy number density similar to observations (for comparison, see Muru & Tempel 2021).
This cut leaves us with 181 411 galaxies in a box with a side of 250 Mpc, and the galaxy number density is 0.0116 Mpc⁻³.

2.2. Photometric redshift mock data

As the distance measures from spectroscopic surveys are relatively precise, the spectroscopic redshift mock data are simply data with exact positions from the simulation, but in order to generate photometric redshift mock data we have to introduce photometric redshift uncertainties to them. The simulation data positions form a cube, for which we take two axes to represent the sky plane; these coordinates represent the sky coordinates and therefore have no extra uncertainty. The remaining axis represents the line of sight. We added a random error to the line-of-sight coordinate of each galaxy. For simplicity, all the coordinates are given in megaparsecs (Mpc), and the errors do not scale with distance.
The random errors for the line-of-sight axis are generated with a Gaussian distribution, N(0, σ²), with different standard deviation (σ) values for different samples. Within one sample, the standard deviation value is constant. For this study, we used six different standard deviation values: 1 Mpc, 2 Mpc, 3 Mpc, 5 Mpc, 7 Mpc, and 10 Mpc. We also created mixed samples of galaxies with spectroscopic and photometric distances in different proportions and with different photometric uncertainties. This is to emulate a realistic situation where one would start with an observational catalogue of spectroscopic targets and include photometric targets to improve the detection of the large-scale structure. Different mixed samples have 10%, 20%, 30%, 40%, and 50% of the brightest galaxies as spectroscopic galaxies; the rest are photometric galaxies with uncertainties generated with σ = 5 Mpc or 10 Mpc.
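The sample construction described above can be sketched in a few lines of NumPy. This is an illustrative reimplementation, not the authors' code: the array names and mock catalogue are assumptions, and it models the mixed samples (all galaxies kept; the brightest fraction exact, the rest perturbed); `spec_frac=0.0` reduces to a purely photometric sample:

```python
import numpy as np

def make_sample(positions, magnitudes, sigma_mpc, spec_frac, rng):
    """Add zero-mean Gaussian errors of std sigma_mpc (in Mpc, not scaling
    with distance) to the line-of-sight (z) coordinate of photometric
    galaxies; the brightest spec_frac of galaxies keep exact positions."""
    out = positions.copy()
    n_spec = int(round(spec_frac * len(magnitudes)))
    order = np.argsort(magnitudes)        # brightest (most negative mag) first
    phot = np.ones(len(magnitudes), dtype=bool)
    phot[order[:n_spec]] = False          # spectroscopic galaxies stay exact
    out[phot, 2] += rng.normal(0.0, sigma_mpc, size=phot.sum())
    return out

rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 250.0, size=(1000, 3))   # mock 250 Mpc box, columns x, y, z
mag = rng.uniform(-23.0, -20.0, size=1000)      # mock r-band magnitudes
sigma10s30 = make_sample(pos, mag, sigma_mpc=10.0, spec_frac=0.30, rng=rng)
```

Only the third column is perturbed, so the sky-plane coordinates of every galaxy, and all three coordinates of the brightest 30%, remain exact.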
This means that the chosen percentage of the brightest galaxies have exact positions, and the other galaxies have photometric uncertainties along the line-of-sight axis, that is, in distance. Figure 1 shows the comparison between a sample of galaxies with no uncertainties (all spectroscopic redshifts) and two samples of galaxies with photometric redshifts whose uncertainties follow the distributions N(0 Mpc, (5 Mpc)²) and N(0 Mpc, (10 Mpc)²). The leftmost plot shows a visible web-like structure. In the middle plot, the structure is more diffuse because of the added randomness along the z-axis, but some of the original structure is still somewhat visible. In the rightmost plot, the original structure is no longer visible; rather, there seem to be filamentary structures along the z-axis that have been produced by the random errors added to the galaxy positions along the z-axis.

¹ https://www.cosmosim.org/cms/simulations/mdpl2/
M. M. Muru and E. Tempel: Using photometric redshift data with the Bisous model

2.3. Samples

We use the following notation to name the samples. The fraction of galaxies in the samples with spectroscopic distance estimates is denoted with sXX, where XX is a number indicating the percentage of the whole sample. The spectroscopic galaxies are always the brightest galaxies in the sample. For example, s40 means the sample contains the 40% brightest galaxies from the whole sample, all of which have exact distances, and is missing the other 60% of the galaxies.
The photometric samples are denoted with σYY, where YY is a number indicating the size of the uncertainties for the photometric distance estimates. For example, σ5 means the sample contains galaxies that have uncertainties in the distance measures generated with a Gaussian distribution with a standard deviation of 5 Mpc. For mixed samples, σ10s30, for example, means that 30% of the brightest galaxies have exact distances (i.e. spectroscopic distance estimates), and the rest, that is, 70% of the galaxies in the sample, have distances with uncertainties generated with a Gaussian distribution with a standard deviation of 10 Mpc. Table 1 lists the samples used in this work, the distributions used to generate the uncertainties for distances, and the percentages of galaxies with spectroscopic distances. For brevity, hereafter the term spectroscopic galaxies/data is used as a synonym for galaxies/data with spectroscopic redshifts, and photometric galaxies/data is used as a synonym for galaxies/data with photometric redshifts.
In this work, the former means data with no uncertainties, and the latter means data with uncertainties along one axis.

3. The Bisous filament finder

We used the Bisous filament finder to detect the filaments from the mock data. This finder is a stochastic tool that identifies the spines of the filaments using the spatial distribution of galaxies or haloes (Tempel et al. 2014, 2016). Bisous has already been applied to a variety of data and has been proven to give similar results to other filament finders (Libeskind et al. 2018). We give a short overview of the method below.
First, the Bisous model randomly populates the volume with points with parameters (called marked points), where each point represents the centre of a cylinder and the parameters give the size and orientation of the cylinder. The cylinder's width is about 1 Mpc, which defines the width of the detected filaments. This width is derived from the gradient of the galaxy density, where there is a peak at approximately 0.5 Mpc from the filament's spine. Each configuration of cylinders in the volume has a defined energy, which depends on the position of the cylinders in relation to the underlying data of haloes and the interconnectedness of the filamentary network made up of the cylinders. Using the Metropolis-Hastings algorithm and the simulated annealing procedure, the Bisous model optimises the energy function of the system by suggesting random moves to add, remove, or change the cylinders.
The data of the cylinder configurations are collected over hundreds of thousands of cycles, each consisting of tens of thousands of moves, which is the basis for visit map calculations.

Table 1. Photometric distance uncertainties and percentage of spectroscopic galaxies in each sample. The distance uncertainties column shows the Gaussian distribution used to generate uncertainties for distances of photometric galaxies. The last column shows the percentage of the brightest galaxies with spectroscopic distances, i.e. exact distances. Samples that do not have galaxies with photometric distances are indicated with an em dash (—) in the second column.
Name     Distance uncertainties   Spectroscopic distances
σ0       —                        100%
σ1       N(0 Mpc, (1 Mpc)²)       0%
σ2       N(0 Mpc, (2 Mpc)²)       0%
σ3       N(0 Mpc, (3 Mpc)²)       0%
σ5       N(0 Mpc, (5 Mpc)²)       0%
σ7       N(0 Mpc, (7 Mpc)²)       0%
σ10      N(0 Mpc, (10 Mpc)²)      0%
σ5s50    N(0 Mpc, (5 Mpc)²)       50%
σ5s40    N(0 Mpc, (5 Mpc)²)       40%
σ5s30    N(0 Mpc, (5 Mpc)²)       30%
σ5s20    N(0 Mpc, (5 Mpc)²)       20%
σ5s10    N(0 Mpc, (5 Mpc)²)       10%
σ10s50   N(0 Mpc, (10 Mpc)²)      50%
σ10s40   N(0 Mpc, (10 Mpc)²)      40%
σ10s30   N(0 Mpc, (10 Mpc)²)      30%
σ10s20   N(0 Mpc, (10 Mpc)²)      20%
σ10s10   N(0 Mpc, (10 Mpc)²)      10%
s50      —                        50%
s40      —                        40%
s30      —                        30%

In general, one realisation of cylinders in the volume represents the detected filamentary network. As the model is stochastic, the configuration of cylinders changes from realisation to realisation. The combination of many realisations allows us to define the visit map that describes the detected filamentary network. Each coordinate has a defined visit map value, ranging from 0 to 1.
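As a minimal illustration of how many realisations combine into a visit map (a sketch only, not the Bisous implementation), each realisation can be represented as a binary occupancy grid, and the visit map is then the fraction of realisations in which a cylinder covers each grid cell:

```python
import numpy as np

def visit_map(realisations):
    """Combine independent realisations into a visit map.

    `realisations` has shape (n_runs, *grid): each entry is 1 where a
    cylinder covers the grid cell in that realisation, else 0. The visit
    map is the fraction of realisations covering each cell, so values
    range from 0 (never visited) to 1 (always visited)."""
    r = np.asarray(realisations, dtype=float)
    return r.mean(axis=0)
```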
The visit map contains information on how often a coordinate in space was 'visited' by a cylinder, which signifies how probable it is that a random realisation has a cylinder at that position. To decrease the effects of Poisson noise, the Bisous model is run many times, usually 50-100. This increases the signal-to-noise ratio, as a larger number of independent realisations are combined to obtain the resulting maps. Muru & Tempel (2021) show how the galaxy number density affects the detected filamentary network. These authors show that the Bisous method underestimates the extent of the filamentary network rather than giving false-positive results. This means that the filament finder underestimates the filamentary structures at higher distances where the galaxy number density drops. To improve the quality of the detected filamentary network, we need to increase the galaxy number density, for example, with photometric data.

Using photometric data

Filament finders usually need precise data, either scalar or tensor fields or galaxy positions, and therefore the less accurate photometric data are ignored. Here we present a method that benefits from photometric data by having higher input data density and is able to mitigate the uncertainty from distance measures.

Article number, page 3 of 10
A&A proofs: manuscript no. main_texlive2020

Fig. 1. Projection of galaxy distributions of samples σ0, σ5, σ10 in a slice with a thickness of 10 Mpc. Each dot represents a galaxy. The photometric uncertainties are parallel to the z-axis, which also defines the line of sight in this work.
Only an area of 150 Mpc × 150 Mpc is shown for visual clarity. For information about the samples, see Sect. 2.3.

This subsection gives an overview of a simple method of how the Bisous filament finder can use photometric data. For each galaxy with a photometric redshift estimate and its probability distribution, we generate NR new distance estimates drawn from the photometric redshift probability distribution. For the mock data in this paper, we used a Gaussian distribution to generate the uncertainties, and so we also use the same Gaussian distribution to generate different distance estimates for every galaxy. Every Bisous run uses a different distance estimate for a galaxy with a photometric distance measure.
The number of Bisous runs should be large in order to minimise the Poisson noise in the results, but also small enough to minimise the computational resources used for the model. Usually, there are around 50 to 100 Bisous runs; for this work, we used NR = 80, which has been shown to give good results in previous works using the Bisous model. For mixed data sets of spectroscopic and photometric targets, only the photometric ones have different distance estimates, whereas spectroscopic targets have the same distance value in every run. The novelty of this method is that the runs that have more accurate distance estimates for the photometric galaxies produce more persistent filaments. Galaxies with inaccurate distance estimates generate noise. The Bisous model suppresses the noise by combining a large number of realisations. The more inaccurate distance estimates there are, the more noise there is, which means the Bisous model is able to find fewer filaments.
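A sketch of this resampling step (function and array names are ours, not from the Bisous code): for each of the NR runs, photometric galaxies get a fresh distance drawn from their Gaussian error distribution, while spectroscopic galaxies keep the same distance in every run.

```python
import numpy as np

rng = np.random.default_rng(0)

def resample_distances(dist, sigma, is_spec, n_runs=80):
    """Draw one distance realisation per Bisous run (NR = n_runs).

    Photometric galaxies get a fresh draw from N(dist, sigma^2) in every
    run; spectroscopic galaxies keep the same distance in all runs.
    Returns an array of shape (n_runs, n_galaxies)."""
    dist = np.asarray(dist, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    draws = rng.normal(dist, sigma, size=(n_runs, len(dist)))
    return np.where(is_spec, dist, draws)
```

In practice each galaxy could use its own probability distribution; the Gaussian here simply matches the mock data in this paper.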
This means that uncertainties still have to be small to produce good results. The generation of new distance estimates is done separately for each galaxy. In practice, we can use a different probability distribution for each galaxy.

4. Results

This work uses three types of samples: spectroscopic, photometric, and mixed. The primary purpose of the spectroscopic-only samples is to serve as a reference value for the other two types of samples. Photometric-only samples show what can be done using only photometric redshift surveys, and mixed samples show what we can do by combining the spectroscopic and photometric redshift surveys, for example in the areas of spectroscopic surveys where galaxies are sparse or at higher distances where the detection is less complete.
As mentioned in Section 1, filaments affect the evolution of galaxies, and knowing whether a galaxy is in a filament or not is useful when studying galaxy properties. Therefore, one of the simplest metrics with which to compare the resulting filamentary networks is the fraction of galaxies situated inside filaments. Figure 2 shows the fraction of galaxies inside filaments for all the samples used in this work. The sample σ0 is the most complete sample (galaxy positions without uncertainties but with the same magnitude limit as the other samples), and the fraction of galaxies in filaments for that sample can be considered a reference value for an ideal case. Looking at samples with only photometric-redshift galaxies, we see the expected trend that the larger the uncertainties in the distance, the fewer galaxies are in filaments. This comes from the fact that the larger the uncertainties, the fewer filaments the Bisous model is able to detect (cf. Fig. 3), as the structure in the galaxy distribution is less obvious, as seen in Figure 1. Adding spectroscopic-redshift galaxies to create the mixed samples considerably increases the number of galaxies in filaments. For example, 33% of the galaxies in σ5 are in filaments, but when 20% of the brightest galaxies have spectroscopic redshifts (σ5s20) this fraction rises to 45%, and with 50% of galaxies with spectroscopic redshifts (σ5s50) up to 59% of galaxies are in filaments. This shows that using spectroscopic galaxies together with photometric galaxies increasingly improves the detected filamentary network as the number of spectroscopic galaxies in a sample increases. On the other hand, when comparing spectroscopic-only samples (e.g. s50 or s40) to mixed samples (e.g. σ5s50 or σ5s40), where the galaxy number density is increased with added photometric galaxies, we can see that the mixed samples have more galaxies in filaments than the spectroscopic samples. This indicates that adding galaxies with photometric redshifts to increase the number density of galaxies in the sample helps to improve the detected filamentary network, as it increases the fraction of galaxies in filaments and is closer to the reference sample (σ0). This metric can also be used to compare the results with observational data, but different filament finders and different filament definitions give results that are not directly comparable. For example, Tempel et al. (2014) found that when using the Bisous model on SDSS data, the fraction of galaxies in filaments is about 40%, but they use a stricter definition of whether a galaxy is considered to be in a filament or not. Also, results based on observational data are likely missing fainter galaxies that are present in simulations, which affects the fraction of galaxies in filaments.

M. M. Muru and E. Tempel: Using photometric redshift data with the Bisous model

Fig. 2. Fraction of galaxies in filaments for the different spectroscopic-only, photometric-only, and mixed samples. The samples are ordered so that the y-axis values of the photometric-only and mixed samples are in increasing order. The spectroscopic-only samples are used as reference values to show the increase in the fraction of galaxies in filaments for the mixed samples. The sample s30 is the smallest spectroscopic sample in this study because smaller samples had too few galaxies to detect the filamentary network.

It is a good idea to look at the spatial distribution of filaments produced by different samples to assess them visually. Figure 3 shows visit map slices from 12 different samples. The colour indicates the likelihood of a coordinate being inside a filament. The plot in the upper left corner is the sample we use as ground truth, the full spectroscopic sample.
The vertical axis is parallel to the axis of photometric uncertainties and emulates the line of sight. The photometric-only samples in the left column show that photometric galaxies make it very difficult to detect filaments perpendicular to the line of sight. Only stretched-out filaments parallel to the line of sight remain. In the middle and rightmost columns, mixed samples are used. Including the spectroscopic galaxies helps detect filaments perpendicular to the line of sight. But even in the mixed samples, when photometric galaxies dominate, as in the lower rows, the detected filaments are preferentially parallel to the line of sight. This does not mean that the filaments themselves are parallel to the line of sight, but that these are the filaments the Bisous model is able to detect with the corresponding data.
Figure 3 shows that photometric galaxies, which have large uncertainties along the line of sight, suppress the detection of filaments perpendicular to the line of sight. To study this effect, we describe the distribution of angles between filament spines and the line of sight. These results are shown in Figure 4. Again, the σ0 sample is the baseline for this work and shows a uniform distribution of angles. Using photometric-only samples skews the distribution closer to 1, meaning the filaments are mostly parallel to the line of sight, as is visible from the visit map projections in Figure 3. Adding spectroscopic galaxies to the samples significantly reduces the bias towards high cosine values in the distributions. This is also visible in Figure 3, where more filaments are perpendicular to the z-axis in the mixed samples.
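The orientation statistic used here, the absolute cosine of the angle between a filament spine and the line of sight, can be computed as in this illustrative sketch (our own helper, assuming spine directions are given as 3D vectors and the line of sight is the z-axis):

```python
import numpy as np

def cos_with_los(spines, los=(0.0, 0.0, 1.0)):
    """|cos| of the angle between filament spine segments and the line
    of sight. `spines` holds direction vectors of shape (n, 3); values
    near 1 mean the segment is parallel to the line of sight, values
    near 0 mean perpendicular."""
    spines = np.asarray(spines, dtype=float)
    los = np.asarray(los, dtype=float)
    num = np.abs(spines @ los)
    return num / (np.linalg.norm(spines, axis=1) * np.linalg.norm(los))
```

A uniform distribution of these cosines (as for σ0) indicates no preferred orientation, while a pile-up near 1 reflects the line-of-sight bias of the photometric-only samples.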
When using the results obtained with the full spectroscopic sample σ0 as ground truth, we can compare the other results to it and construct contingency tables called confusion matrices. We assign a binary label to each coordinate depending on the visit map value. If the visit map value is equal to or greater than 0.05, then that coordinate is classified as inside a filament. With each coordinate labelled, we can assign four kinds of results: true positive, true negative, false positive, and false negative. To describe the goodness of the results for a sample s, we use two statistics: the recall,

Recall_s = TP_s / P_σ0 ,    (1)

where TP_s is the number of true-positive values in the sample s, and P_σ0 is the number of positive values in the reference sample σ0; and the false discovery rate,

False discovery rate_s = FP_s / P_s ,    (2)

where FP_s is the number of false-positive values in the sample s, and P_s is the number of all positive values in the sample s, which includes both the true-positive and false-positive values.
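Equations (1) and (2) can be evaluated directly from two binarised visit maps. The snippet below is an illustrative sketch (names are ours) using the 0.05 threshold from the text:

```python
import numpy as np

def recall_and_fdr(visit_map_s, visit_map_ref, threshold=0.05):
    """Binarise two visit maps at `threshold` and compare sample s
    against the reference (sigma0):
    recall = TP_s / P_ref, false discovery rate = FP_s / P_s."""
    pos_s = np.asarray(visit_map_s) >= threshold
    pos_ref = np.asarray(visit_map_ref) >= threshold
    tp = np.sum(pos_s & pos_ref)   # inside a filament in both
    fp = np.sum(pos_s & ~pos_ref)  # inside in s only
    recall = tp / pos_ref.sum()
    fdr = fp / pos_s.sum()
    return recall, fdr
```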
Recall shows the fraction of filaments the model is able to find compared to the filaments present in the results obtained with the sample σ0, which we want to maximise. The false discovery rate describes the fraction of false filaments in the results, which we want to minimise.

Figure 5 shows the recall and the false discovery rates for different samples. As expected, the recall decreases monotonically when the photometric uncertainties increase. Using mixed samples improves the recall even when only small fractions of spectroscopic galaxies are used. For example, this improvement can be seen when comparing the recalls of σ5 (0.45) and σ5s10 (0.54), or σ10 (0.27) and σ10s10 (0.40); both mixed samples use only 10% of the spectroscopic galaxies. Using 50% of the spectroscopic galaxies boosts the recall above 0.73, which means almost three-quarters of the original filaments are detected. As seen in Figure 5, the false discovery rate is below 0.05 for every sample. This shows that the Bisous model produces only little noise and few false-positive values even with photometric redshift data.

In addition, we ran Bisous on mock data without using the method described in Sect. 3, using the samples σ5, σ5s30, σ10, and σ10s30 as they are. This enables us to compare the Bisous model results obtained with the method in Sect. 3 with results obtained with photometric data without doing anything special to the photometric galaxies and ignoring the photometric redshift errors. Table 2 lists the different statistics introduced in this section calculated for these Bisous runs. These results are calculated as a reference and motivation for using the method described in Sect. 3.

A&A proofs: manuscript no. main_texlive2020

Fig. 3. Projections of maximum visit map values in slices obtained from the Bisous model using different samples. Only a smaller 100 Mpc × 100 Mpc area is shown for visual clarity. The thickness of the slice is 10 Mpc. Usually, a visit map limit of 0.05 is used to classify whether or not a coordinate is inside a filament; therefore, everything besides the blue area is likely part of the filamentary network. The vertical axis (z) is parallel to the axis of the photometric uncertainties, i.e. it emulates the line of sight. The leftmost column shows samples with only photometric galaxies, the middle column shows mixed samples with medium uncertainties (σ = 5 Mpc) for the photometric galaxies, and the rightmost column shows mixed samples with the larger uncertainties (σ = 10 Mpc). Different rows in the leftmost column have different photometric uncertainties, and the middle and rightmost columns have different fractions of the brightest galaxies as spectroscopic galaxies. See Table 1 and Sect. 2.3 for the sample naming convention used here.

M. M. Muru and E. Tempel: Using photometric redshift data with the Bisous model

Fig. 4. Distributions of the cosine of the angle between filament spines (fil) and the line of sight (los). For each sample, there are two plots. The left one is a bar plot of the quartiles of the distribution, where the black crossbar indicates the second quartile (the median). The right plot is a violin plot that shows the density curve of the distribution. The horizontal grey line indicates the median value for a uniform distribution. The closer the distribution gets to the value 1, the more filaments are parallel to the line of sight (the z-axis in other plots).

Fig. 5. Recall and false discovery rates for photometric and mixed samples. All the samples have the same total number of galaxies. The definitions of recall and false discovery rate are given in Sect. 4. The false discovery rate uses the secondary vertical axis on the right side of the plot. Including spectroscopic galaxies improves recall but also increases false discovery rates. The false discovery rates are below 5% for every sample.

In comparison to the samples introduced in Sect. 2.3, these results show significantly worse recall values and fewer galaxies in filaments. In some cases, the false discovery rate can be better, but this comes from the fact that when fewer filaments are detected, there are also fewer false-positive results and therefore a lower false discovery rate.

These results qualitatively confirm the results of Kruuse et al. (2019), showing that galaxies with photometric redshifts are clustered around the Bisous filaments. We show that the Bisous model can use photometric redshift data to detect the filamentary network without producing significant amounts of false-positive results.
However, when the uncertainties in the distance measure increase, the model is able to recall fewer filaments. For example, with sample σ10, the recall is only 0.27, and mostly filaments parallel to the line of sight are detected. Including spectroscopic galaxies in the samples considerably improves the recall and helps to mitigate the issue with filament alignment in the detected filamentary network. The results in this work also qualitatively follow the results of Muru & Tempel (2021), who show how the Bisous filaments depend on the number density of the galaxies in the input data. In this work, the mixed samples show a similar trend, and the photometric galaxies boost the number density of galaxies, although less than the same number of spectroscopic galaxies would.

5. Discussion

Previous works applied the Bisous model to SDSS, which is a spectroscopic survey, and compiled a catalogue of filaments (Tempel et al. 2014). This work extends the applicability of the model and demonstrates the effects of using data with photometric redshifts. The major benefit of using photometric redshift data comes from its comparatively high availability: measurements are made in bulk, and not for single galaxies as in spectroscopic measurements. The problem with photometric redshift data is the significantly larger uncertainties when calculating redshifts, which result in larger uncertainties in distance measurements. This is problematic for filament finders.

Table 2. Comparison of Bisous model results with and without using the method described in Sect. 3. PB in front of the sample name indicates that the results are obtained with the plain Bisous model. Recall and false discovery rate are defined by Equations 1 and 2.

Sample      gal in fil (a)   Recall   FDR (b)
PB σ5       0.217            0.296    0.037
σ5          0.329            0.452    0.029
PB σ5s30    0.398            0.541    0.039
σ5s30       0.501            0.679    0.042
PB σ10      0.127            0.172    0.039
σ10         0.200            0.276    0.025
PB σ10s30   0.348            0.473    0.039
σ10s30      0.441            0.598    0.042

Notes. (a) Fraction of galaxies in filaments; (b) false discovery rate.

To simulate the large uncertainties in distance measurements, we used simulation data to create data with added uncertainties. For simplicity, all the uncertainties are generated with the same Gaussian distribution for each galaxy. In reality, the uncertainty depends on many properties, one of the most relevant being the magnitude of the galaxy, but the dependence of the uncertainty on the magnitude differs between surveys. Also, using a simulation removes any redshift dependence from the data.
In observations, there are two major redshift-dependent effects. Firstly, the number density of galaxies decreases with redshift, as we are able to detect fewer galaxies on the fainter end, and this affects the ability of the Bisous model to detect filaments, as shown in Muru & Tempel (2021). Secondly, the precision of the photometric redshift values for galaxies depends on their actual distance. These dependencies should be studied in greater depth when concentrating on specific surveys and are outside the scope of this study.

The method we use to overcome this problem of large uncertainties is straightforward. Essentially, we are just guessing the true positions. Each galaxy gets 80 different random positions based on the uncertainties of the redshift estimate. The theory behind this approach is that while random inaccurate positions produce noise, the positions close to the true position of the galaxy produce a strong enough signal to be above the noise level. Regardless of its simplicity, the method shows considerable improvements over results obtained without it (see Table 2).

Although this simple method improves the results, the problems introduced by using the photometric redshift data are still prevalent. Using photometric-only redshift data (σXX samples) results in part of the signal being lost and an incomplete filamentary network. This is visible from the recall values when compared against σ0 (Fig. 5), the fraction of galaxies in filaments (Fig. 2), and the projections of visit map values (Fig. 3).
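The resampling step can be sketched as follows. This is our own minimal illustration, not the paper's pipeline (which feeds the resampled positions into the Bisous model); the function name is an assumption, and we follow the paper's convention that the z-axis emulates the line of sight and that all galaxies share the same Gaussian distance uncertainty.

```python
import numpy as np

rng = np.random.default_rng(42)

def resample_positions(positions, sigma_los, n_draws=80):
    """Draw n_draws random positions per galaxy along the line of sight.

    positions : (N, 3) array of Cartesian coordinates in Mpc; the third
        column (z) is treated as the line-of-sight direction.
    sigma_los : Gaussian distance uncertainty in Mpc, identical for
        all galaxies, matching the simplification in the text.
    """
    n = len(positions)
    # Replicate each galaxy n_draws times (80 in the paper).
    draws = np.repeat(positions, n_draws, axis=0)
    # Perturb only the line-of-sight coordinate; sky positions are kept.
    draws[:, 2] += rng.normal(0.0, sigma_los, size=n * n_draws)
    return draws
```

The draws close to a galaxy's true position reinforce each other, while the scattered ones contribute only diffuse noise, which is the signal-versus-noise argument made above.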
Another problem is that with larger uncertainties for distances, the filaments perpendicular to the line of sight are almost impossible to detect. This creates a strong bias towards filaments parallel to the line of sight (cf. Fig. 4). It is important to note that the false discovery rate (cf. Fig. 5) decreases when data with larger uncertainties are used. This is because galaxies with larger uncertainties produce less meaningful signals, and therefore there will be fewer filaments in the results, which also means fewer false-positive filaments. Low false discovery rate values are good because they demonstrate the robustness of the results: the model rather outputs fewer filaments than false-positive filaments.
All of the aforementioned problems are reduced by using mixed samples of spectroscopic and photometric redshift data instead of only photometric data, as shown in Section 4. Figure 2 also shows that using mixed samples to boost the galaxy number density is better than using only the spectroscopic redshift galaxies. This could be useful, for example, in the more distant areas of spectroscopic surveys, where galaxies with spectroscopic redshifts are too sparse to use for the detection of the large-scale structure. Using mixed data could help us extend the area where we can reliably detect the filaments. Still, this method requires photometric redshift data with relatively small uncertainties, which are not usually achieved by photometric surveys. Unfortunately, all current photometric surveys have unusably large uncertainties for the redshifts, but some new surveys with suitable accuracy will arrive in the near future.
One prominent candidate for photometric redshift data is the upcoming Javalambre Physics of the Accelerating Universe Astrophysical Survey (J-PAS; Benitez et al. 2014; Bonoli et al. 2020; Laur et al. 2022). J-PAS is designed to measure the positions and redshifts of 14 million galaxies, and the estimated precision of the photometric redshifts for galaxies in the redshift range 0.1 < z < 1.2 is σz ≲ 0.003(1 + z).
For example, when using SDSS, the spectroscopic redshift galaxy number density is high enough to detect some filaments up to a distance of 400 Mpc, which is approximately z = 0.1 (Tempel et al. 2014; Muru & Tempel 2021). At this distance, the redshift precision σz ≲ 0.003 × 1.1 ≈ 0.0033 corresponds to a distance uncertainty of roughly 14 Mpc. This is the same order of magnitude as the σ10 samples used in this work. We expect the uncertainties to be smaller for brighter galaxies. We aim to apply the Bisous model to J-PAS data when they are released and compile a catalogue of filaments.
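The quoted ≈ 14 Mpc follows from the low-redshift Hubble law, d ≈ cz/H0, under which a redshift error maps to a distance error σ_d ≈ c σz / H0. A minimal sketch of this conversion, assuming H0 ≈ 70 km/s/Mpc (an illustrative value, not taken from the paper):

```python
# Convert a photometric-redshift uncertainty into a comoving-distance
# uncertainty via the low-redshift Hubble law d ~ c*z / H0.
C_KM_S = 299792.458   # speed of light, km/s
H0 = 70.0             # assumed Hubble constant, km/s/Mpc (illustrative)

def sigma_distance_mpc(sigma_z0: float, z: float) -> float:
    """Distance error for a redshift error sigma_z = sigma_z0 * (1 + z)."""
    return C_KM_S / H0 * sigma_z0 * (1.0 + z)

# J-PAS-like precision at z = 0.1: 0.003 * 1.1 * c/H0, roughly 14 Mpc
print(round(sigma_distance_mpc(0.003, 0.1), 1))  # -> 14.1
```

The exact number shifts slightly with the adopted H0, but stays at the ~14 Mpc scale quoted above.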
To obtain the mixed data of photometric and spectroscopic redshift galaxies, we plan to use the Sloan Digital Sky Survey (SDSS; Alam et al. 2015) and the Dark Energy Spectroscopic Instrument (DESI) Bright Galaxy Survey (BGS; Dey et al. 2019; Ruiz-Macias et al. 2021). Although this study is based on the Bisous filament finder, the general tendencies when using data with photometric redshifts are likely similar for other filament finders. Using photometric data will decrease the effectiveness of the filament finder, and filaments parallel to the line of sight are more likely to be detected. It is uncertain whether using mixed data of photometric and spectroscopic redshifts improves the results compared to using only spectroscopic data with other filament finders.
The false discovery rates might also differ for other filament finders. The advantage of the Bisous model is that it models the underlying filamentary network, and galaxies are only used to constrain the model properties. Hence, in the Bisous filament finder, it is straightforward to combine spectroscopic and photometric samples. By fixing the scale of the filaments in the Bisous model, we avoid smoothing the galaxy distribution, and the Bisous model is able to detect filaments of a specified scale regardless of the galaxy density. As mentioned in Sect. 1, one common application of filaments is to study the alignment of galaxies with their host filaments. This means that obtaining an accurate filament orientation from the data is instrumental.
In future studies, we aim to improve the Bisous model to reduce the alignment bias of filaments when using data with photometric redshifts.

M. M. Muru and E. Tempel: Using photometric redshift data with the Bisous model

6. Conclusions

Filament finders are limited, among other things, by the abundance of spectroscopic redshift data. This limits the sky areas and depth where we can detect the filamentary network. As photometric redshifts can be obtained on shorter timescales, because many objects can be observed simultaneously, far more photometric redshift data are available.
We present a method that enables the Bisous filament finder to use data with considerable uncertainties in one coordinate, for example photometric redshift data. We use MultiDark-Galaxies, a dark-matter-only simulation with semi-analytical galaxies, to generate the data for the analysis. Spectroscopic redshift data are simply the exact positions of galaxies from the simulation, while photometric redshift galaxies have a random Gaussian error added along one axis, which represents the line of sight. This work uses three types of samples. Firstly, spectroscopic samples with different magnitude cuts provide reference values for the other samples. Secondly, photometric samples use different standard deviations, from σ = 1 Mpc to 10 Mpc, to generate distance errors of different sizes.
Thirdly, mixed samples, in which 10% to 50% of the brightest galaxies have spectroscopic redshifts, that is, exact distance measurements, while the other galaxies have distances with uncertainties. An overview of the samples used in this work is given in Sect. 2.3. The Bisous model uses a marked point process to fit cylinder-like objects to the underlying galaxy distribution and optimises the distribution of objects based on the galaxy distribution and the interconnectedness of the cylinder network. To use the photometric redshift data with uncertainties along one axis, we modified the coordinates along that axis. Knowing the distribution of the distance uncertainties of the photometric redshift galaxies, we use the same distribution to add a random value to the distance of each galaxy.
For each Bisous run, we generated a new galaxy distribution, in which each photometric redshift galaxy has a different random value added to its distance based on the uncertainty distribution. Each Bisous model uses 80 runs. The idea underpinning this approach is that the runs in which some galaxies receive random distance values close to their true distances produce strong signals, while the others, with scrambled galaxy distributions, produce only noise, which is removed in the post-processing. The photometric-only samples show that when the uncertainties are very small (a Gaussian distribution with σ = 1 Mpc or 2 Mpc), the Bisous model can find most of the same filaments as in the full spectroscopic sample σ0. Unfortunately, such small uncertainties are unachievable for modern or even future planned photometric surveys. With larger uncertainties in the photometric-only samples, the recall of the filaments in the reference sample drops below 50%, and the filaments align with the line of sight.
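The per-run resampling scheme described above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation; the function name `perturbed_distances` and the use of NumPy are assumptions made here:

```python
import numpy as np

def perturbed_distances(d_photo, sigma, n_runs=80, seed=None):
    """One resampled galaxy-distance realisation per Bisous run.

    d_photo : measured distances of photometric-redshift galaxies (Mpc)
    sigma   : std of the Gaussian distance uncertainty (Mpc)
    n_runs  : number of Bisous runs (the paper uses 80 per model)

    Returns an (n_runs, n_galaxies) array; row i is the perturbed
    galaxy distribution fed to run i.
    """
    rng = np.random.default_rng(seed)
    d_photo = np.asarray(d_photo, dtype=float)
    # Draw a fresh Gaussian offset for every galaxy in every run,
    # mirroring "a different random value added to its distance".
    return d_photo + rng.normal(0.0, sigma, size=(n_runs, d_photo.size))
```

Averaging detections over the runs then keeps the coherent filament signal, while realisations that scramble the true distances contribute only noise that post-processing removes.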
Using mixed samples of photometric and spectroscopic data helps to reduce these problems. Consider, for example, a comparison of three samples: a spectroscopic-only sample s30, which uses only the 30% brightest galaxies; a photometric-only sample σ10, which uses data with errors generated with σ = 10 Mpc; and a mixed sample σ10s30, which uses the same standard deviation (σ = 10 Mpc) for the errors and the same amount of spectroscopic galaxies (30% of the whole sample). Using the spectroscopic data, which contain only the 30% brightest galaxies, results in 36% of galaxies being inside filaments. Using only photometric data, which contain all the galaxies but have uncertainties in one coordinate, we find that 20% of galaxies are inside filaments. Finally, using the mixed data, which contain more data than the spectroscopic sample and, in contrast to the photometric sample, also incorporate 30% of the spectroscopic data, the Bisous model finds that 40% of the galaxies are inside filaments. The reference value for these galaxies and this volume comes from the full spectroscopic sample, which gives 71% of galaxies in filaments.
Adding the spectroscopic galaxies from the sample s30 to the photometric sample σ10 increases the recall of filaments from 27% to 60%. This shows that using mixed data is beneficial when spectroscopic data are too sparse and photometric data have too large uncertainties to be used without spectroscopic data. J-PAS is an upcoming photometric survey designed to produce data with sufficiently small uncertainties to be applicable to a method such as the one presented in this article. The expected precision of the redshifts is σz ≲ 0.003(1 + z) (Benitez et al. 2014). At a distance of about z = 0.1, this is σz ≲ 0.003 × 1.1 ≈ 0.0033, corresponding to a distance uncertainty of about 14 Mpc, which is close to the values used in this work. The next step is to apply the Bisous model to J-PAS data once they are available.

Acknowledgements. We thank the referee for their comments and suggested improvements. Part of this work was supported by institutional research funding PRG1006 of the Estonian Ministry of Education and Research. We acknowledge the support by the Centre of Excellence "Dark Side of the Universe" (TK133). Part of this work was carried out in the High-Performance Computing Center of the University of Tartu (University of Tartu 2018). The CosmoSim database used in this paper is a service by the Leibniz-Institute for Astrophysics Potsdam (AIP). The MultiDark database was developed in cooperation with the Spanish MultiDark Consolider Project CSD2009-00064.
The authors gratefully acknowledge the Gauss Centre for Supercomputing e.V. (www.gauss-centre.eu) and the Partnership for Advanced Supercomputing in Europe (PRACE, www.prace-ri.eu) for funding the MultiDark simulation project by providing computing time on the GCS Supercomputer SuperMUC at Leibniz Supercomputing Centre (LRZ, www.lrz.de). The data exploration was done using TOPCAT (Taylor 2005), and the analysis and plotting were done using the Julia language (Bezanson et al. 2017) and the following packages: DrWatson.jl (Datseris et al. 2020), Pluto.jl (van der Plas et al. 2022), Makie.jl (Danisch & Krumbiegel 2021), DataFrames.jl (White et al. 2020), Distributions.jl (Besançon et al. 2021), and ColorSchemes.jl, which uses the Scientific colour maps (Crameri 2021).

References
Alam, S., Albareti, F. D., Allende Prieto, C., et al. 2015, ApJS, 219, 12
Alpaslan, M., Driver, S., Robotham, A. S. G., et al. 2015, MNRAS, 451, 3249
Aragón-Calvo, M. A., van de Weygaert, R., Jones, B. J. T., & van der Hulst, J. M. 2007, ApJ, 655, L5
Beck, R., Dobos, L., Budavári, T., Szalay, A. S., & Csabai, I. 2016, MNRAS, 460, 1371
Benitez, N., Dupke, R., Moles, M., et al. 2014, arXiv e-prints, arXiv:1403.5237
Besançon, M., Papamarkou, T., Anthoff, D., et al. 2021, Journal of Statistical Software, 98, 1
Bezanson, J., Edelman, A., Karpinski, S., & Shah, V. B. 2017, SIAM Review, 59, 65
Bonoli, S., Marín-Franch, A., Varela, J., et al. 2020, arXiv e-prints, arXiv:2007.01910
Cautun, M., van de Weygaert, R., & Jones, B. J. T. 2013, MNRAS, 429, 1286
Crameri, F. 2021, Scientific colour maps
Danisch, S. & Krumbiegel, J. 2021, Journal of Open Source Software, 6, 3349
Datseris, G., Isensee, J., Pech, S., & Gál, T. 2020, Journal of Open Source Software, 5, 2673
de Jong, R. S., Agertz, O., Berbel, A. A., et al. 2019, The Messenger, 175, 3
Dey, A., Schlegel, D. J., Lang, D., et al. 2019, AJ, 157, 168
Eisenstein, D. J., Weinberg, D. H., Agol, E., et al. 2011, AJ, 142, 72
Ganeshaiah Veena, P., Cautun, M., Tempel, E., van de Weygaert, R., & Frenk, C. S. 2019, MNRAS, 487, 1607
Klypin, A., Yepes, G., Gottlöber, S., Prada, F., & Heß, S. 2016, MNRAS, 457, 4340
Knebe, A., Gill, S. P. D., Gibson, B. K., et al.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' 2004, ApJ, 603, 7 Knebe, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=', Stoppacher, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=', Prada, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' 2018, MNRAS, 474, 5206 Kraljic, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=', Davé, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=', & Pichon, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' 2020, MNRAS, 493, 362 Kruuse, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=', Tempel, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=', Kipper, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=', & Stoica, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' 2019, A&A, 625, A130 Kuutma, T.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=', Tamm, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=', & Tempel, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' 2017, A&A, 600, L6 Article number, page 9 of 10 A&A proofs: manuscript no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' main_texlive2020 Laur, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=', Tempel, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=', Tamm, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' 2022, A&A, 668, A8 Lee, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' & Pen, U.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content='-L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' 2000, ApJ, 532, L5 Libeskind, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=', van de Weygaert, R.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=', Cautun, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' 2018, MNRAS, 473, 1195 Muru, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' & Tempel, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' 2021, A&A, 649, A108 Nevalainen, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=', Tempel, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=', Liivamägi, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' 2015, A&A, 583, A142 Planck Collaboration, Adam, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=', Ade, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' A.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' 2016, A&A, 594, A1 Ruiz-Macias, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=', Zarrouk, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=', Cole, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' 2021, MNRAS, 502, 4328 Sousbie, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' 2011, MNRAS, 414, 350 Taylor, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' 2005, in Astronomical Society of the Pacific Conference Se- ries, Vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' 347, Astronomical Data Analysis Software and Systems XIV, ed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' P.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' Shopbell, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' Britton, & R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' Ebert, 29 Tempel, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=', Guo, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=', Kipper, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=', & Libeskind, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' 2015, MNRAS, 450, 2727 Tempel, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' & Libeskind, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' 2013, ApJ, 775, L42 Tempel, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=', Stoica, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' S.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=', Kipper, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=', & Saar, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' 2016, Astronomy and Comput- ing, 16, 17 Tempel, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=', Stoica, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=', Martínez, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' 2014, MNRAS, 438, 3465 Tuominen, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=', Nevalainen, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=', Tempel, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' 2021, A&A, 646, A156 University of Tartu.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' 2018, UT Rocket van der Plas, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=', Dral, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=', Berg, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' 2022, fonsp/Pluto.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content='jl: v0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content='19.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content='11 Wang, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=', Libeskind, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=', Tempel, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' 2020, ApJ, 900, 129 White, J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=', Kami´nski, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=', powerdistribution, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' 2020, Julia- Data/DataFrames.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content='jl: v0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content='22.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content='1 Zentner, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=', Kravtsov, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=', Gnedin, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=' Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf'} +page_content=', & Klypin, A.' 
diff --git a/UNE3T4oBgHgl3EQfagoR/content/tmp_files/2301.04506v1.pdf.txt b/UNE3T4oBgHgl3EQfagoR/content/tmp_files/2301.04506v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..ea5f10e3cfb3ef740145da4dc61ae8c0e6aaa01a
--- /dev/null
+++ b/UNE3T4oBgHgl3EQfagoR/content/tmp_files/2301.04506v1.pdf.txt

Under Review

A DISTINCT UNSUPERVISED REFERENCE MODEL FROM THE ENVIRONMENT HELPS CONTINUAL LEARNING

Seyyed AmirHossein Ameli Kalkhoran1, Mohammadamin Banayeeanzade2, Mahdi Samiei1, Mahdieh Soleymani Baghshah1
1Department of Computer Science, Sharif University of Technology
2Department of Electrical and Computer Engineering, USC Viterbi School of Engineering

ABSTRACT

Existing continual learning methods focus mainly on fully-supervised scenarios and are still unable to take advantage of the unlabeled data available in the environment. Some recent works investigate semi-supervised continual learning (SSCL) settings in which unlabeled data are available, but only from the same distribution as the labeled data. This assumption is still not general enough for real-world applications and restricts the utilization of unsupervised data. In this work, we introduce Open-Set Semi-Supervised Continual Learning (OSSCL), a more realistic semi-supervised continual learning setting in which out-of-distribution (OoD) unlabeled samples in the environment are assumed to coexist with the in-distribution ones.
Under this configuration, we present a model with two distinct parts: (i) the reference network captures general-purpose and task-agnostic knowledge in the environment by using a broad spectrum of unlabeled samples, and (ii) the learner network is designed to learn task-specific representations by exploiting supervised samples. The reference model both provides a pivotal representation space and segregates unlabeled data so that they can be exploited more efficiently. Through a diverse range of experiments, we show the superior performance of our model compared with other competitors and demonstrate the effectiveness of each component of the proposed model.

1 INTRODUCTION

In a real-world continual learning (CL) problem, the agent has to learn from a non-i.i.d. stream of samples under serious restrictions on storing data. In this case, the agent is prone to catastrophic forgetting during training (French, 1999). Existing CL methods focus mainly on supervised scenarios and can be categorized into three main approaches (Parisi et al., 2019): (i) Replay-based methods reuse samples from previous tasks, either by keeping raw samples in a limited memory buffer (Rebuffi et al., 2017; Lopez-Paz & Ranzato, 2017; Aljundi et al., 2019) or by generating pseudo-samples from previous classes (Shin et al., 2017; Wu et al., 2018; van de Ven et al., 2020). (ii) Regularization-based methods aim to maintain the stability of the network across tasks by penalizing deviation from previously learned representations or parameters (Nguyen et al., 2018; Cha et al., 2021; Rebuffi et al., 2017; Li & Hoiem, 2016). (iii) Parameter-isolation methods dedicate distinct parameters to each task by introducing new task-specific weights or masks (Rusu et al., 2016; Yoon et al., 2018; Wortsman et al., 2020).
Humans, as intelligent agents, are constantly exposed to vast amounts of unsupervised data endlessly streamed from the environment, which can be used to facilitate concept learning in the brain (Zhuang et al., 2021; Bi & Poo, 1998; Hinton & Sejnowski, 1999). With this in mind, an important but less explored issue in many practical CL applications is how to effectively utilize a vast stream of unlabeled data along with limited labeled samples.

arXiv:2301.04506v1 [cs.LG] 11 Jan 2023

Recently, efforts in this direction have led to the investigation of three different configurations. Wang et al. (2021) introduced a very restricted scenario for semi-supervised continual learning in which the unsupervised data come only from the classes being learned at the current time step. On the other hand, Lee et al. (2019) introduced a configuration that is "more similar to self-taught learning rather than semi-supervised learning". In fact, they introduced a setting in which the model is exposed to plenty of labeled samples, a necessary assumption for their model to achieve good performance; in addition, their model has access to a large corpus of unsupervised data in an environment that typically does not include samples related to the current CL problem. Adopting this idea, Smith et al. (2021) proposed a more realistic setting by assuming a limit on the number of supervised samples available for training. In addition, they assumed the existence of a shared hidden hierarchy between the supervised and unsupervised samples, which does not necessarily hold in practical applications.

In this work, we first propose a general scenario that unifies the mentioned configurations into a more realistic setting called Open-Set Semi-Supervised Continual Learning (OSSCL).
In this scenario, the agent can observe unsupervised data from two sources: (i) related unsupervised data, which are sampled from the same distribution as the supervised dataset, and (ii) unrelated unsupervised data, whose distribution differs from that of the classes of the current CL problem. The in-distribution unsupervised samples can come from classes that are currently being solved, were solved at previous time steps, or will be solved in the future.

Previous CL works in which unlabeled data were available alongside labeled data mainly utilized the unlabeled data by creating pseudo-labels for them using a model trained on the labeled samples (Lee et al., 2019; Smith et al., 2021; Wang et al., 2021). The unlabeled data with their pseudo-labels were then used directly in the training procedure. However, because labeled data are scarce in realistic scenarios, the pseudo-labeling process is inaccurate and creates highly noisy labels. Therefore, we present a novel method for learning in the OSSCL setting that alleviates this problem and utilizes unlabeled data effectively.

Our proposed model, which consists of an Unsupervised Reference network and a Supervised Learner network (URSL), can effectively absorb information by combining contrastive learning techniques with knowledge distillation methods in the representation space. While the reference network is mainly responsible for learning general knowledge from unlabeled data, the learner network is expected to capture task-specific information from a few supervised samples using a contrastive loss function. In addition, the learner retains a close connection to the reference network to utilize the essential related information provided by unsupervised samples.
At the same time, the representation space learned by the reference network is used to build an out-of-distribution detector that segregates unlabeled data so that the filtered samples can be employed more properly in the training procedure of the learner model. In short, our main contributions are as follows:

• We propose OSSCL as a realistic semi-supervised continual learning scenario that an intelligent agent encounters in practical applications (Section 2).
• We propose a novel dual-structured model that is suitable for learning in this scenario and can effectively exploit unlabeled samples (Section 3).
• We show the superiority of our method on several benchmarks and under different combinations of unlabeled samples. Our model achieves state-of-the-art accuracy with a notable gap over the baselines and previous methods (Section 4).

2 PRELIMINARIES

In this work, we consider the training dataset to consist of two parts. The supervised dataset $D_{sup}$ is a sequence of $T$ tasks $\{\mathcal{T}_1, \mathcal{T}_2, \ldots, \mathcal{T}_T\}$. At time step $t$, the model only has access to $\mathcal{T}_t = \{(x_i, y_i)\}_{i=1}^{N_t}$, where $x_i \stackrel{\text{i.i.d.}}{\sim} P(X \mid y_i)$ denotes a training sample and $y_i$ its corresponding label. We consider $K$ separate classes per task and follow the common class-incremental setting, as it is known to be the most challenging scenario for evaluation. Given a training loss $\ell$ and network parameters $\theta$, the training objective at time step $t$ is defined as $\theta^* = \arg\min_{\theta} \frac{1}{N_t} \sum_{i=1}^{N_t} \ell(x_i, y_i, \theta)$.

On the other hand, the unsupervised dataset $D_{unsup}$ is a sequence of $T$ sets $\{U_1, U_2, \ldots, U_T\}$ containing only unlabeled data points. We assume that $U_t$ represents the unsupervised data available in the environment at time step $t$, which is accessible to the model along with $\mathcal{T}_t$.
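As a toy illustration of these two streams, the sketch below builds a class-incremental supervised task sequence and an accompanying unlabeled pool that mixes in-distribution and OoD samples. All class counts, sample counts, and helper names here are hypothetical choices for illustration, not values from the paper.

```python
import random

def make_osscl_streams(n_classes=10, k_per_task=2, n_related=100, n_ood=50, seed=0):
    """Toy OSSCL data layout (illustrative only): T class-incremental tasks of
    K classes each, plus an unlabeled pool mixing in-distribution samples
    (drawn from any task's classes, past or future) with OoD samples."""
    rng = random.Random(seed)
    classes = list(range(n_classes))
    # Supervised stream D_sup: T = n_classes / k_per_task sequential tasks.
    tasks = [classes[i:i + k_per_task] for i in range(0, n_classes, k_per_task)]
    # Related unlabeled data: same class distribution as D_sup, label discarded.
    related = [("img", rng.choice(classes)) for _ in range(n_related)]
    # Unrelated unlabeled data: drawn from some other distribution Q (label None).
    ood = [("img", None) for _ in range(n_ood)]
    unlabeled = related + ood
    rng.shuffle(unlabeled)
    return tasks, unlabeled

tasks, unlabeled = make_osscl_streams()
```

The true labels of the "related" samples are kept here only to simulate the environment; the agent itself never sees them.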
Based on the OSSCL setting, which is a general framework, we assume that the unsupervised dataset is composed of two parts: (i) the related part, also called the in-distribution set, consists of unsupervised samples generated from the same distribution as $D_{sup}$. To maintain generality, we assume that this set contains not only unsupervised samples related to the current supervised task but also samples from the other tasks of the CL problem, which have either been observed at previous time steps or will be observed in the future. (ii) The unrelated data points, also called the out-of-distribution samples, are a set of unsupervised data sampled from a distribution $Q$, which is not necessarily the distribution from which the supervised samples were generated. In the next section, we propose a novel method for this configuration, and in Section 4, a variety of experiments demonstrate the effectiveness of our model.

Figure 1: A schematic of the method and configuration. The unsupervised reference ($UR_t$), supervised learner ($SL_t$), labeled data ($\mathcal{T}_t$), and related and unrelated unlabeled data ($U_t$) at time step $t$ are shown on the left, while the OoD segregation module is shown on the right of the figure.

3 METHOD

Learning continually from $D_{sup}$ has been widely explored by the community. Meanwhile, unlike deep models, humans are far less hungry for supervised data. Although they observe a large volume of data during their lifetime, only a small and insignificant portion of it is labeled. It is believed that the considerable human ability to learn from a few instances is due to the rich representations learned from large volumes of unsupervised observations (Zhuang et al., 2021; Bi & Poo, 1998; Hinton & Sejnowski, 1999). Here, we aim to explore the benefits of using $D_{unsup}$ and its impact on empowering the continual learner.
Specifically, we will show how $D_{unsup}$ promotes representation learning in addition to providing positive forward/backward transfer in the continual learning process. We propose our URSL model, which consists of two parts: 1) the general task-agnostic reference network, responsible for absorbing information from unsupervised data in the environment, and 2) the learner network, designed to capture knowledge from a few supervised samples while also being guided by the reference network. The notations $UR_t$ and $SL_t$ denote the reference and learner network instances at time step $t$, respectively (refer to Figure 1 and Algorithm 1 for an overview).

We employ a contrastive representation learning approach for training both the reference and the learner networks. This approach has proven to be a proper solution for supervised CL problems. Indeed, previous works in CL argued that classifier heads placed on top of the representation network are a serious source of catastrophic forgetting (Ramasesh et al., 2021; Banayeeanzade et al., 2021; Cha et al., 2021); therefore, Co2L (Cha et al., 2021) presented a supervised contrastive loss to avoid this problem. We utilize contrastive representation learning as a unified approach for training both the reference and the learner networks, which allows information to flow between them easily. Combined with knowledge distillation techniques applied in the representation space, this approach provides a convenient tool to get the most out of unsupervised samples.
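The exact form of URSL's representation-space distillation term is not given in this excerpt. Purely as an illustration of the general idea, keeping the learner's embeddings close to the frozen reference's embeddings of the same inputs, here is a sketch of one common cosine-based choice; the function name and loss form are assumptions, not the authors' method.

```python
import numpy as np

def representation_distillation(z_learner, z_reference):
    """Illustrative representation-space distillation: penalize the learner's
    embeddings for drifting away from the (frozen) reference network's
    embeddings of the same batch. This cosine form is an assumed example;
    the excerpt does not specify URSL's exact distillation loss."""
    zl = z_learner / np.linalg.norm(z_learner, axis=1, keepdims=True)
    zr = z_reference / np.linalg.norm(z_reference, axis=1, keepdims=True)
    # Mean (1 - cosine similarity) over the batch; zero when directions match.
    return float(np.mean(1.0 - np.sum(zl * zr, axis=1)))
```

A term of this kind would be added to the learner's contrastive loss with a trade-off weight, letting unsupervised knowledge flow from the reference to the learner.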
Our model is also equipped with an exemplar memory M to randomly store a portion of supervised samples from previous tasks (Lopez-Paz & Ranzato, 2017; Rebuffi et al., 2017). The stored samples contribute to the training of the learner network. After the final time step, these samples are also used to train a classifier head on top of the representation space of the learner network. It is noteworthy that our model does not store unlabeled data in its memory, since such data is always found in abundance in the environment; this makes our model needless of a large memory.

3.1 REFERENCE NETWORK

The unsupervised reference network URt : X → R^d is a general-purpose feature extractor responsible for encoding all kinds of unsupervised information available in the environment. The network is composed of an encoder f and a projector g, responsible for embedding an input x in the representation space by z = (f ◦ g)_{θt}(x), where z lies on the unit d-dimensional Euclidean sphere and θt represents the model parameters at time step t. Considering a batch B ⊆ Ut of size N, the SimCLR (Chen et al., 2020a) loss function used for training the network can be written as:

$$h_{i,j} = -\log \frac{\exp(\tilde{z}_i \cdot \tilde{z}_j / \tau)}{\sum_{k=1}^{2N} \mathbf{1}_{[k \neq i]} \exp(\tilde{z}_i \cdot \tilde{z}_k / \tau)}, \qquad \mathcal{L}_{unsup}(\theta_t; \tau) = \frac{1}{2N} \sum_{k=1}^{N} \left( h_{2k-1,2k} + h_{2k,2k-1} \right), \tag{1}$$

where z̃_{2i} and z̃_{2i−1} are the representations of two different augmentations of the same image x_i ∈ B, and τ is the temperature hyperparameter.

3.2 SEGREGATING UNSUPERVISED SAMPLES

In this section, we show how to segregate unlabeled samples by employing the reference network and the supervised samples.
Although unsupervised samples can play an important role both in learning the representation space and in controlling changes in this space through time, naive approaches to incorporating these samples into the training of the learner network can lead to inferior performance due to the existence of unrelated samples among the unlabeled ones. Therefore, we first explain the OoD detection method, which is designed to segregate unlabeled data and incorporate them more properly into the continual learning process of the learner network. To efficiently segregate unsupervised data, we employ a prototype-based OoD detection method (Park et al., 2021) in the representation space of the reference network, using the samples in Tt ∪ M. It is noteworthy that the representation space of the reference network is chosen for OoD detection since it provides better sample discrimination than any representation space obtained by training over a small number of labeled samples. Additionally, this approach eliminates the need to train another network specialized in OoD detection, in contrast to previous works (Chen et al., 2020b; Huang et al., 2021; Saito et al., 2021).

At time step t, our OoD method creates Pt = {P^t_1, P^t_2, ..., P^t_{K×t}}, a set of K × t prototypes representing the centroids of the classes observed so far, which is extracted using the labeled data available in Tt ∪ M:

$$P^t_i = \psi\!\left( \frac{1}{|A| \sum_{(x_j, y_j) \in T_t \cup M} \mathbf{1}_{[y_j = i]}} \sum_{(x_j, y_j) \in T_t \cup M} \mathbf{1}_{[y_j = i]} \sum_{a \in A} (f \circ g)_{\theta_t}(a(x_j)) \right), \tag{2}$$

where A is a set of augmentations meant to form different views of a real image, and ψ is the operator that projects vectors onto the unit d-dimensional sphere. We also define the score operator S(Pt, z) = max_i c(P^t_i, z), where c denotes the cosine similarity measure. This operator takes the prototypes and a sample in the representation space, and calculates the score of its most probable assignment.
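As an illustration, the prototype extraction of Eq. 2 and the score operator S can be sketched as follows. This sketch assumes the augmented embeddings have already been produced by the reference network; all names and shapes are illustrative:

```python
import numpy as np

def class_prototypes(emb, labels, num_classes):
    """Class centroids of Eq. 2: average the (augmented) embeddings of each
    class, then project back onto the unit sphere (the psi operator).
    emb: (n, d) array of unit-norm embeddings, labels: (n,) integer class ids."""
    protos = np.stack([emb[labels == c].mean(axis=0) for c in range(num_classes)])
    return protos / np.linalg.norm(protos, axis=1, keepdims=True)

def score(protos, z):
    """The score operator S(P, z) = max_i cos(P_i, z), for unit-norm inputs."""
    return (protos @ z).max()
```

A sample drawn near a class cluster scores close to 1 against that class's prototype, which is what the thresholding described next exploits.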
With this in mind, we consider S^t_l as the scores of the labeled data obtained by passing Tt ∪ M through the S(Pt, ·) operator, i.e., S^t_l = {S(Pt, (f ◦ g)_{θt}(x)) | x ∈ Tt ∪ M}. With ηid as a hyperparameter, we define a threshold τid = mean(S^t_l) + ηid var(S^t_l) on the scores of unlabeled data to specify the in-distribution samples as Ût = {x | x ∈ Ut, S(Pt, x) > τid}. Furthermore, we assign pseudo-labels to the unsupervised samples for which we have high confidence by defining a higher threshold τpl = mean(S^t_l) + ηpl var(S^t_l), with the hyperparameter ηpl, and prepare the pseudo-labeled samples as T̂t = {(x, ŷ) | x ∈ Ut, S(Pt, x) > τpl, ŷ = arg max_i c(P^t_i, x)}. In other words, an unsupervised sample whose similarity to a class prototype is higher than τpl is pseudo-labeled with that class. However, to reduce pseudo-labeling noise, we do not use the pseudo-labels directly during the training procedure. Instead, they are used to identify whether an unlabeled sample comes from past classes or not. Samples of T̂t are mainly used to compensate for the small number of supervised samples in the memory, as further explained in the next section. We provide a detailed investigation of the performance of the OoD module in Appendix C.

3.3 LEARNER NETWORK

Similar to the reference network, the learner network SLt : X → R^d is a feature extractor of the form z = (f ◦ g)_{φt}(x), where φt denotes the model parameters at time step t. The training of the learner network relies on three mechanisms:

Supervised Training: Following Co2L (Cha et al., 2021), we use an asymmetric supervised version of the contrastive loss function to train the learner network.
By considering a supervised batch B = {(x_i, y_i)}_{i=1}^N sampled from Tt ∪ M ∪ T̂t, and applying an augmentation policy to form two different views of the real samples, we can write the supervised contrastive loss as follows:

$$\mathcal{L}_{sup}(\phi_t; \tau) = \frac{1}{N} \sum_{i=1}^{N} \frac{-\mathbf{1}_{[y_i \in O_t]}}{|\zeta_i|} \sum_{j \in \zeta_i} \log \frac{\exp(\tilde{z}_i \cdot \tilde{z}_j / \tau)}{\sum_{k=1}^{N} \mathbf{1}_{[k \neq i]} \exp(\tilde{z}_i \cdot \tilde{z}_k / \tau)}, \tag{3}$$

where Ot is the set of new classes of the current time step t, and ζi is the set of other samples in the current batch with the same label yi.

The existence of T̂t is crucial for learning a proper representation, since only a small amount of labeled data is available during continual learning. In fact, Co2L intends to prevent overfitting to the small number of past-task samples stored in the memory by proposing an asymmetric supervised contrastive loss that uses samples from the memory only as negative samples (Cha et al., 2021). However, when the labeled data are limited, even employing the past samples in M only as negatives may still cause overfitting. Therefore, we enrich M with T̂t to diversify the samples from previous classes.

Knowledge Transfer Through Time: The loss function in Eq. 3 allows the model to discriminate between new and previous classes. However, it is not sufficient to maintain the discrimination power of the learner network among previous tasks. Therefore, to avoid catastrophic forgetting, at each time step t we use an instance-wise relation distillation (IRD) loss to transfer knowledge from the previous time step to the current model (Cha et al., 2021). This self-distillation technique, which is also compatible with the contrastive representation learning approach, retains old knowledge by maintaining the samples' similarities in the representation space of the learner network. To this end, we first sample a batch B from Tt ∪ M ∪ T̂t, augment each sample x_i twice to create x̃_{2i−1} and x̃_{2i}, and then calculate the instance-wise similarity vector as:

$$p(\tilde{x}_i; \phi, \tau) = [p_{i,0}, \ldots, p_{i,i-1}, p_{i,i+1}, \ldots, p_{i,2N}] \quad \text{where} \quad p_{i,j} = \frac{\exp(\tilde{z}_i \cdot \tilde{z}_j / \tau)}{\sum_{k=1}^{2N} \mathbf{1}_{[k \neq i]} \exp(\tilde{z}_i \cdot \tilde{z}_k / \tau)}. \tag{4}$$

By computing these probabilities for both SLt and SLt−1, we can write the time distillation loss as:

$$\mathcal{L}_{TD}(\phi_t; \phi_{t-1}, \tau', \tau'') = \sum_{i=1}^{2N} -p(\tilde{x}_i; \phi_{t-1}, \tau') \cdot \log p(\tilde{x}_i; \phi_t, \tau''), \tag{5}$$

where τ′ and τ′′ represent the distillation-specific temperatures for the previous model and the current model, respectively.

Knowledge Transfer from Reference: The reference network encounters numerous unsupervised samples throughout its training and is expected to learn a rich representation space using the objective introduced in Section 3.1. This representation is used as guidance for the learner network, and the knowledge can be transferred to the learner network using an IRD loss similar to Eq. 5:

$$\mathcal{L}_{KD}(\phi_t; \theta_t, \tau', \tau'') = \sum_{i=1}^{2N} -p(\tilde{x}_i; \theta_t, \tau') \cdot \log p(\tilde{x}_i; \phi_t, \tau''). \tag{6}$$

This distillation is applied to the learner network based on the samples in Tt ∪ M ∪ Ût. It is noteworthy that, rather than using all of the unsupervised samples in Ut, this distillation uses only the unsupervised samples in Ût, which are likely to be related to the training of the learner network.

3.4 THE URSL ALGORITHM

In summary, the model receives two sets of samples at each time step: Tt and Ut. The reference network is trained on Ut using the self-supervised loss function introduced in Eq. 1. Then, the OoD detection and pseudo-labeling techniques introduced in Section 3.2 are used to segregate the unsupervised samples in Ut. Finally, the learner network is trained based on the weighted aggregation of the three loss functions introduced in Section 3.3, with γ and λ as hyperparameters:

$$\mathcal{L}_s(\phi_t) = \mathcal{L}_{sup}(\phi_t; \tau) + \gamma \mathcal{L}_{TD}(\phi_t; \phi_{t-1}, \tau', \tau'') + \lambda \mathcal{L}_{KD}(\phi_t; \theta_t, \tau', \tau'').$$
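For concreteness, the instance-wise similarity of Eq. 4, the IRD cross-entropy used in Eqs. 5 and 6, and the aggregation of Eq. 7 can be sketched in NumPy. This is an illustrative sketch under the assumption of precomputed L2-normalized embeddings, not the authors' implementation:

```python
import numpy as np

def similarity_probs(z, tau):
    """Instance-wise similarity vector of Eq. 4 for every row of z ((2N, d))."""
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)           # exclude k == i
    p = np.exp(sim)
    return p / p.sum(axis=1, keepdims=True)

def ird_loss(z_teacher, z_student, tau_teacher, tau_student):
    """Eqs. 5/6: cross-entropy between teacher and student similarity vectors,
    summed over all 2N samples of the batch."""
    p_t = similarity_probs(z_teacher, tau_teacher)
    p_s = similarity_probs(z_student, tau_student)
    mask = p_t > 0                            # skip the zeroed diagonal entries
    return -(p_t[mask] * np.log(p_s[mask])).sum()

def total_loss(l_sup, l_td, l_kd, gamma, lam):
    """Eq. 7: weighted aggregation of the three learner objectives."""
    return l_sup + gamma * l_td + lam * l_kd
```

By Gibbs' inequality, the IRD loss is minimized when the student's similarity vectors match the teacher's, which is exactly the sense in which Eqs. 5 and 6 preserve pairwise relations in the representation space.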
(7)

Algorithm 1 URSL: Unsupervised Reference and Supervised Learner
Require: A supervised dataset Dsup = {Tt}_{t=1}^T and an unsupervised dataset Dunsup = {Ut}_{t=1}^T
1: Initialize UR0 and SL0 with random parameters θ0 and φ0, respectively
2: for t = 1, ..., T do
3:   Initialize θt ← θt−1 and φt ← φt−1
4:   Update θt based on Ut to minimize Lunsup(θt; τ) (Eq. 1)
5:   Extract Pt using Tt ∪ M (Eq. 2)
6:   Compute S^t_l from Tt ∪ M (Section 3.2)
7:   Compute τid ← mean(S^t_l) + ηid var(S^t_l) and τpl ← mean(S^t_l) + ηpl var(S^t_l)
8:   Prepare T̂t and Ût based on τid, τpl, and the scores of Ut (Section 3.2)
9:   while not done do
10:    Sample a batch B from Tt ∪ M ∪ T̂t
11:    Compute Ls ← Lsup(φt; τ) based on B (Eq. 3)
12:    if t > 1 then
13:      Update Ls ← Ls + γ LTD(φt; φt−1, τ′, τ′′) based on B (Eq. 5)
14:    Update Ls ← Ls + λ LKD(φt; θt, τ′, τ′′) based on a batch from Tt ∪ M ∪ Ût (Eq. 6)
15:    Update φt ← φt − α∇φ Ls
16:  Update M such that the number of stored samples is the same for each class
17: Train the classifier head using T_T ∪ M

4 EXPERIMENTS

Benchmark Scenario: To demonstrate the effectiveness of our method, we performed several experiments. We use two datasets for each experiment: the main and the peripheral. A small portion of the main dataset, determined by P, is selected as supervised data, and the rest is considered related unsupervised data; all samples of the peripheral dataset are considered (probably) unrelated unlabeled data. At each time step, 9000 examples from each unsupervised dataset are randomly sampled, shuffled together, and fed into the model as unsupervised data. In Appendix F, we provide the results of experiments in which the number of datasets inside Ut is greater than two and the environment is even more realistic.

The hyperparameters of our model do not depend on the experiment configuration, and a general and consistent solution for all conditions is provided.
We conducted a wide range of experiments to demonstrate the model's robustness in various scenarios. In our experiments, we used the CIFAR10, CIFAR100 (Krizhevsky et al., 2009), and Tiny-ImageNet (Le & Yang, 2015) datasets as the main or peripheral datasets, which are commonly used in the open-set semi-supervised learning literature (Chen et al., 2020b; Huang et al., 2021; Yu et al., 2020); moreover, the settings of our experiments are known as the "cross dataset" setting in the open-set semi-supervised literature (Chen et al., 2020b). We utilize a ResNet-18 architecture as the backbone of both networks, with a two-layer MLP on its head as the projector.

Table 1: Accuracy of different models on the CIFAR10 dataset.

Setting              Unsup. Dataset   Co2L       Co2L-j     Co2L-p     GD         DM         URSL
P = 0.01, |M| = 50   CIFAR100         26.5±1.6   36.0±3.2   46.6±0.3   24.0±1.3   30.1±0.7   58.2±0.8
                     Tiny-ImageNet    26.5±1.6   33.0±1.5   42.7±0.2   24.3±1.4   29.7±6.1   51.0±11.9
P = 0.1, |M| = 200   CIFAR100         58.3±1.0   52.7±1.6   62.4±0.1   48.1±1.3   59.6±4.2   72.8±0.9
                     Tiny-ImageNet    58.3±1.0   42.2±2.1   61.3±0.3   47.3±1.4   42.1±9.4   72.8±0.6

Fully supervised (P = 1, |M| = 200, no unsupervised data): Co2L 69.5±0.6, GEM 29.2±0.5, iCaRL 49.9±1.7.

Table 2: Accuracy of different models on the CIFAR100 dataset.

Setting               Unsup. Dataset   Co2L       Co2L-j     Co2L-p     GD         DM         URSL
P = 0.05, |M| = 500   CIFAR10          15.9±0.2   20.4±0.4   21.0±0.2   11.8±0.9   24.2±0.9   30.4±0.2
                      Tiny-ImageNet    15.9±0.2   16.9±0.3   21.5±0.2   11.5±0.9   28.1±1.2   30.5±0.5
P = 0.1, |M| = 1000   CIFAR10          25.1±0.1   26.9±0.4   28.3±0.4   16.7±1.1   33.6±1.0   37.5±0.4
                      Tiny-ImageNet    25.1±0.1   28.4±1.4   28.9±0.3   15.9±0.2   38.5±0.7   37.2±0.3

Fully supervised (P = 1, |M| = 1000, no unsupervised data): Co2L 35.1±0.3, GEM 22.4±4.5, iCaRL 34.4±0.8.
The input images for the model are 32×32 pixels. Additionally, we use the notation |M| to denote the size of the supervised memory introduced in Section 3. Further experimental setups and details are provided in Appendix B.

Baselines: Co2L (Cha et al., 2021) can be seen as a simplified version of URSL with no reference network and no means of using unsupervised samples. Therefore, we propose a modified version of Co2L, Co2L-j, in which the model is trained jointly with a supervised and an unsupervised contrastive loss on the supervised and unsupervised data, respectively. In another baseline, Co2L-p, we only pre-train the model with the unsupervised data available at the first time step and ignore the unsupervised data in subsequent steps, to avoid possible conflict with the supervised loss during continual learning. There are also two baselines from prior work that are consistent with the OSSCL setting due to the presence of an OoD detection module. GD (Lee et al., 2019) trains an OoD module to recognize unlabeled data from previous classes within the entire unlabeled dataset; this in-distribution data is only used to combat catastrophic forgetting. DM (Smith et al., 2021) mainly modifies the GD setting by defining policies over the unlabeled data using the superclasses of CIFAR100 and by using the FixMatch method (Sohn et al., 2020). In addition, we report fully supervised continual learning results for two popular continual learning models, GEM and iCaRL, as well as the state-of-the-art Co2L. These methods have access to all samples of the related dataset as labeled data during continual learning but cannot use unlabeled samples from any source.

Table 3: Accuracy of different models on the Tiny-ImageNet dataset.
Setting                Unsup. Dataset   Co2L       Co2L-j     Co2L-p     GD          DM        URSL
P = 0.05, |M| = 1000   CIFAR10          8.4±0.1    11.0±0.1   12.8±0.2   4.54±0.02   4.8±0.1   17.2±0.1
                       CIFAR100         8.4±0.1    10.8±0.8   12.9±0.1   5.7±0.4     4.4±0.5   17.5±0.2
P = 0.1, |M| = 2000    CIFAR10          15.1±0.7   17.6±0.6   18.4±0.2   7.5±0.2     5.6±0.2   21.9±0.2
                       CIFAR100         15.0±0.7   18.4±0.7   18.6±0.1   7.9±0.1     5.4±0.4   20.8±1.0

Fully supervised (P = 1, |M| = 2000, no unsupervised data): Co2L 22.5±0.5, GEM 17.4±0.3, iCaRL 18.2±0.2.

Table 4: Ablation of Eq. 7 on CIFAR100 classification with the CIFAR10 dataset as peripheral.

Version    URSL w/o Lsup   URSL w/o LTD   URSL w/o LKD   Only Lsup   Only LKD   URSL
Acc. (%)   25.7±0.2        28.4±0.7       28.9±0.4       19.1±0.9    28.2±0.9   30.4±0.2

4.1 RESULTS

Tables 1, 2, and 3 show the classification accuracy at the final time step when the main dataset is CIFAR10, CIFAR100, and Tiny-ImageNet, respectively. In almost all experiments, URSL outperforms all other baselines. There are two reasons for the superiority of URSL over GD and DM: (i) unlike GD and DM, which train OoD detection with a small number of labeled samples, the OoD detection of URSL is based on the representation of the reference network, which is trained with a large amount of unlabeled data and has high discrimination power; (ii) GD only uses the unlabeled data to combat forgetting, while URSL also uses it to transfer a rich representation from the reference network to the learner network. Although Co2L-p and Co2L-j improve over Co2L, URSL outperforms them in all scenarios, showing the effectiveness of the proposed ideas compared with naive approaches for incorporating unlabeled data. Furthermore, URSL achieves comparable or even better results than state-of-the-art fully supervised CL methods.
This phenomenon suggests that URSL can benefit from unsupervised samples to mitigate the forgetting of previous classes or to learn a general representation that is suitable for the classes that will be observed in the future.

We provide multiple benchmarks in Appendix D to show the robustness and power of our method. For instance, the After and Before scenarios, in which the related unlabeled samples are restricted to the future and past classes of the main dataset, respectively, prove that our method achieves positive forward and backward transfer. In a Non-I.I.D. scenario, we examine our method in an environment in which only a fraction of the classes of the main dataset are present in Ut at each time step. Additionally, Appendix E indicates that our method can achieve remarkable performance even when the ratio of the number of related unsupervised samples to the number of unrelated unsupervised samples is very low.

4.2 ABLATION STUDIES

In this section, we conduct experiments to demonstrate the contribution of the model's different components to the final performance. To that end, we select CIFAR100 as the main dataset, CIFAR10 as the peripheral dataset, P = 0.05, and |M| = 500. Table 4 reports the model's performance in the experiments created by ablations over the losses of the model presented in Eq. 7:

Effect of Lsup: Lsup directly induces the representation of the learner network to discriminate between classes by using the supervised contrastive loss and labels; as the results suggest, this loss is important and contributes to the performance of the model. Adding Lsup to the URSL w/o Lsup version increases the performance by 4.7%. Moreover, although LKD alone provides great discrimination for the learner network and achieves 28.2% accuracy, adding Lsup to this version still enhances the performance.

Effect of LTD: The role of LTD is to transfer previously learned knowledge and reduce forgetting. The performance of the model increases from 19.1% to 28.9% simply by adding LTD to the Only Lsup version. Furthermore, although LKD reduces forgetting in another way, adding LTD to URSL w/o LTD increases performance from 28.4% to 30.4%. All of these comparisons indicate that this loss effectively helps the model avoid forgetting.

Effect of LKD: LKD is a new way to utilize the unlabeled samples of the environment. This loss transfers the reference network's rich knowledge about the environment to the learner network. As shown, using this loss alone to train the learner network achieves strong performance. In addition, adding LKD to Only Lsup increases performance from 19.1% to 28.4%. Although LKD and LTD both reduce forgetting in different ways and overlap in their function, adding LKD to URSL w/o LKD still boosts the performance.

It is worth mentioning that the performance drop from the Only LKD version to the URSL w/o Lsup version is caused by setting the coefficients of LKD and LTD, λ and γ, equal to each other: the resulting high ratio of γ to λ prevents the model from learning new tasks and reduces its plasticity.

Table 5: Ablation of OoD segregation on CIFAR100 classification with the CIFAR10 dataset as peripheral.

Experiment              Variant 1     Variant 2     Variant 3      Variant 4
Data for Eqs. 3 and 5   Tt ∪ M        Tt ∪ M        Tt ∪ M ∪ T̂t    Tt ∪ M ∪ T̂t
Data for Eq. 6          Tt ∪ M ∪ Ut   Tt ∪ M ∪ Ût   Tt ∪ M ∪ Ut    Tt ∪ M ∪ Ût
Acc. (%)                20.7±1.2      24.5±0.4      29.2±0.5       30.4±0.2

The next study, provided in Table 5, indicates the importance of the segregation module in providing T̂t and Ût to the model. The results are discussed in the paragraphs below:

Effect of T̂t: Because the unlabeled data contain a variety of images that differ from the current classes, we are not able to use all unlabeled data naively in Eqs.
3 and 5. Therefore, in our experiments we investigate the effect of adding T̂t to the data used in Eqs. 3 and 5. The results indicate the importance of adding T̂t in preventing the overfitting caused by the limited number of samples from past classes. A remarkable boost in performance can be seen by comparing Variant 3 with Variant 1 and Variant 4 with Variant 2, respectively.

Effect of Ût: Eq. 6 is designed to transfer the rich representation of the reference network to the learner network. The results show that naively adding all of Ut to this loss, without segregation, leads to the transfer of irrelevant knowledge to the learner network; this task-unrelated knowledge prevents the model from learning a discriminative representation for the target tasks. Segregating Ut boosts the performance of Variants 1 and 3 by 3.8% and 1.2%, respectively (i.e., Variant 2 over Variant 1 and Variant 4 over Variant 3).

5 CONCLUSION

In this paper, we presented OSSCL, a novel setting for continual learning that is more realistic than previously studied settings. The setting assumes that the agent has access to a large amount of unsupervised data in the environment, some of which is relevant to the tasks due to the similarity between the surroundings and the tasks. As a possible solution for this setting, we presented a novel model, consisting of a supervised learner and an unsupervised reference network, to effectively utilize both supervised and unsupervised samples. The learner network benefits from three loss functions: the supervised loss, which is formed from the limited supervised samples and the segregated unsupervised samples; knowledge distillation through time; and representational guidance from the reference network. URSL outperforms other state-of-the-art continual learning models by a considerable margin. The experiments and ablation studies demonstrate the superiority of the model and the effectiveness of each of its components.
ACKNOWLEDGMENTS

We thank Fahimeh Hosseini (Sharif University of Technology) for her helpful comments and for designing parts of the model's figure.

REFERENCES

Alexey Dosovitskiy, Philipp Fischer, Jost Tobias Springenberg, Martin Riedmiller, and Thomas Brox. Discriminative unsupervised feature learning with exemplar convolutional neural networks. IEEE Trans. Pattern Analysis and Machine Intelligence, 99, 2015.

Rahaf Aljundi, Min Lin, Baptiste Goujaud, and Yoshua Bengio. Gradient based sample selection for online continual learning. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.

Yogesh Balaji, Mehrdad Farajtabar, Dong Yin, Alex Mott, and Ang Li. The effectiveness of memory replay in large scale continual learning. arXiv preprint arXiv:2010.02418, 2020.

Mohammadamin Banayeeanzade, Rasoul Mirzaiezadeh, Hosein Hasani, and Mahdieh Soleymani. Generative vs. discriminative: Rethinking the meta-continual learning. In Advances in Neural Information Processing Systems, volume 34, pp. 21592–21604. Curran Associates, Inc., 2021.

Jihwan Bang, Heesu Kim, YoungJoon Yoo, Jung-Woo Ha, and Jonghyun Choi. Rainbow memory: Continual learning with a memory of diverse samples. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8218–8227, 2021.

Guo-qiang Bi and Mu-ming Poo. Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type. Journal of Neuroscience, 18(24):10464–10472, 1998.

Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, and Armand Joulin. Unsupervised learning of visual features by contrasting cluster assignments. Advances in Neural Information Processing Systems, 33:9912–9924, 2020.

Hyuntak Cha, Jaeho Lee, and Jinwoo Shin. Co2l: Contrastive continual learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pp.
9516–9525, October 2021.

Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. arXiv preprint arXiv:2002.05709, 2020a.

Yanbei Chen, Xiatian Zhu, Wei Li, and Shaogang Gong. Semi-supervised learning under class distribution mismatch. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pp. 3569–3576, 2020b.

Francesco De Comité, François Denis, Rémi Gilleron, and Fabien Letouzey. Positive and unlabeled examples help learning. In International Conference on Algorithmic Learning Theory, pp. 219–230. Springer, 1999.

Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Olivier Mastropietro, Alex Lamb, Martin Arjovsky, and Aaron Courville. Adversarially learned inference. arXiv preprint arXiv:1606.00704, 2016.

Charles Elkan and Keith Noto. Learning classifiers from only positive and unlabeled data. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 213–220, 2008.

Robert M French. Catastrophic forgetting in connectionist networks. Trends in Cognitive Sciences, 3(4):128–135, 1999.

Saurabh Garg, Yifan Wu, Alexander J Smola, Sivaraman Balakrishnan, and Zachary Lipton. Mixture proportion estimation and PU learning: A modern approach. Advances in Neural Information Processing Systems, 34:8532–8544, 2021.

Spyros Gidaris, Praveer Singh, and Nikos Komodakis. Unsupervised representation learning by predicting image rotations. arXiv preprint arXiv:1803.07728, 2018.

Gregory Griffin, Alex Holub, and Pietro Perona. Caltech-256 object category dataset. 2007.

Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Guo, Mohammad Gheshlaghi Azar, et al. Bootstrap your own latent: A new approach to self-supervised learning. Advances in Neural Information Processing Systems, 33:21271–21284, 2020.
Yunhui Guo, Mingrui Liu, Tianbao Yang, and Tajana Rosing. Improved schemes for episodic memory-based lifelong learning. Advances in Neural Information Processing Systems, 33:1023–1035, 2020.

Geoffrey Hinton and Terrence J Sejnowski. Unsupervised learning: Foundations of neural computation. MIT Press, 1999.

Geoffrey Hinton, Oriol Vinyals, Jeff Dean, et al. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2(7), 2015.

Saihui Hou, Xinyu Pan, Chen Change Loy, Zilei Wang, and Dahua Lin. Learning a unified classifier incrementally via rebalancing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 831–839, 2019.

Yen-Chang Hsu, Yilin Shen, Hongxia Jin, and Zsolt Kira. Generalized ODIN: Detecting out-of-distribution image without learning from out-of-distribution data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10951–10960, 2020.

Junkai Huang, Chaowei Fang, Weikai Chen, Zhenhua Chai, Xiaolin Wei, Pengxu Wei, Liang Lin, and Guanbin Li. Trash to treasure: Harvesting OOD data with cross-modal matching for open-set semi-supervised learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 8310–8319, 2021.

David Isele and Akansel Cosgun. Selective experience replay for lifelong learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018.

Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.

Samuli Laine and Timo Aila. Temporal ensembling for semi-supervised learning. arXiv preprint arXiv:1610.02242, 2016.

Ya Le and Xuan Yang. Tiny ImageNet visual recognition challenge. CS 231N, 7(7):3, 2015.

Dong-Hyun Lee et al. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. In Workshop on Challenges in Representation Learning, ICML, volume 3, pp. 896, 2013.
Kibok Lee, Kimin Lee, Jinwoo Shin, and Honglak Lee. Overcoming catastrophic forgetting with unlabeled data in the wild. In ICCV, 2019.

Sang-Woo Lee, Jin-Hwa Kim, Jaehyun Jun, Jung-Woo Ha, and Byoung-Tak Zhang. Overcoming catastrophic forgetting by incremental moment matching. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017.

Chongxuan Li, Taufik Xu, Jun Zhu, and Bo Zhang. Triple generative adversarial nets. Advances in Neural Information Processing Systems, 30, 2017.

Zhizhong Li and Derek Hoiem. Learning without forgetting. In European Conference on Computer Vision, pp. 614–629. Springer, 2016.

David Lopez-Paz and Marc'Aurelio Ranzato. Gradient episodic memory for continual learning. In Advances in Neural Information Processing Systems, volume 30, pp. 6467–6476. Curran Associates, Inc., 2017.

Cuong V. Nguyen, Yingzhen Li, Thang D. Bui, and Richard E. Turner. Variational continual learning. In International Conference on Learning Representations, 2018.

Mehdi Noroozi and Paolo Favaro. Unsupervised learning of visual representations by solving jigsaw puzzles. In European Conference on Computer Vision, pp. 69–84. Springer, 2016.

German I. Parisi, Ronald Kemker, Jose L. Part, Christopher Kanan, and Stefan Wermter. Continual lifelong learning with neural networks: A review. Neural Networks, 113:54–71, 2019. ISSN 0893-6080.

Jongjin Park, Sukmin Yun, Jongheon Jeong, and Jinwoo Shin. OpenCoS: Contrastive semi-supervised learning for handling open-set unlabeled data. CoRR, abs/2107.08943, 2021.

Hieu Pham, Zihang Dai, Qizhe Xie, and Quoc V Le. Meta pseudo labels. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11557–11568, 2021.

Ameya Prabhu, Philip HS Torr, and Puneet K Dokania. GDumb: A simple approach that questions our progress in continual learning. In European Conference on Computer Vision, pp. 524–540. Springer, 2020.
Vinay Venkatesh Ramasesh, Ethan Dyer, and Maithra Raghu. Anatomy of catastrophic forgetting: Hidden representations and task semantics. In International Conference on Learning Representations, 2021.

Antti Rasmus, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko. Semi-supervised learning with ladder networks. Advances in Neural Information Processing Systems, 28, 2015.

S. Rebuffi, A. Kolesnikov, G. Sperl, and C. H. Lampert. iCaRL: Incremental classifier and representation learning. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5533–5542, 2017.

Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. Progressive neural networks, 2016.

Kuniaki Saito, Donghyun Kim, and Kate Saenko. OpenMatch: Open-set semi-supervised learning with open-set consistency regularization. Advances in Neural Information Processing Systems, 34:25956–25967, 2021.

Hanul Shin, Jung Kwon Lee, Jaehong Kim, and Jiwon Kim. Continual learning with deep generative replay. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017.

Konstantin Shmelkov, Cordelia Schmid, and Karteek Alahari. Incremental learning of object detectors without catastrophic forgetting. In Proceedings of the IEEE International Conference on Computer Vision, pp. 3400–3409, 2017.

James Smith, Jonathan Balloch, Yen-Chang Hsu, and Zsolt Kira. Memory-efficient semi-supervised continual learning: The world is its own replay buffer. arXiv preprint arXiv:2101.09536, 2021. Accepted for publication at IJCNN 2021.

Kihyuk Sohn. Improved deep metric learning with multi-class n-pair loss objective. Advances in Neural Information Processing Systems, 29, 2016.

Kihyuk Sohn, David Berthelot, Nicholas Carlini, Zizhao Zhang, Han Zhang, Colin A Raffel, Ekin Dogus Cubuk, Alexey Kurakin, and Chun-Liang Li.
FixMatch: Simplifying semi-supervised learning with consistency and confidence. Advances in Neural Information Processing Systems, 33:596–608, 2020.
Jost Tobias Springenberg. Unsupervised and semi-supervised learning with categorical generative adversarial networks. arXiv preprint arXiv:1511.06390, 2015.
Antti Tarvainen and Harri Valpola. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. Advances in Neural Information Processing Systems, 30, 2017.
Rishabh Tiwari, Krishnateja Killamsetty, Rishabh Iyer, and Pradeep Shenoy. GCR: Gradient coreset based replay buffer selection for continual learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 99–108, 2022.
Gido M. van de Ven, Hava T. Siegelmann, and Andreas S. Tolias. Brain-inspired replay for continual learning with artificial neural networks. Nature Communications, 11(1), August 2020.
Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th International Conference on Machine Learning, pp. 1096–1103, 2008.
Liyuan Wang, Kuo Yang, Chongxuan Li, Lanqing Hong, Zhenguo Li, and Jun Zhu. ORDisCo: Effective and efficient usage of incremental unlabeled data for semi-supervised continual learning. 2021.
Mitchell Wortsman, Vivek Ramanujan, Rosanne Liu, Aniruddha Kembhavi, Mohammad Rastegari, Jason Yosinski, and Ali Farhadi. Supermasks in superposition. In Advances in Neural Information Processing Systems, volume 33, pp. 15173–15184. Curran Associates, Inc., 2020.
Chenshen Wu, Luis Herranz, Xialei Liu, Yaxing Wang, Joost van de Weijer, and Bogdan Raducanu. Memory replay GANs: Learning to generate new categories without forgetting. In Advances in Neural Information Processing Systems, volume 31, pp. 5962–5972. Curran Associates, Inc., 2018.
Qizhe Xie, Zihang Dai, Eduard Hovy, Thang Luong, and Quoc Le. Unsupervised data augmentation for consistency training. Advances in Neural Information Processing Systems, 33:6256–6268, 2020a.
Qizhe Xie, Minh-Thang Luong, Eduard Hovy, and Quoc V. Le. Self-training with noisy student improves ImageNet classification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10687–10698, 2020b.
Jaehong Yoon, Eunho Yang, Jeongtae Lee, and Sung Ju Hwang. Lifelong learning with dynamically expandable networks. In International Conference on Learning Representations, 2018.
Qing Yu, Daiki Ikami, Go Irie, and Kiyoharu Aizawa. Multi-task curriculum framework for open-set semi-supervised learning. In European Conference on Computer Vision, pp. 438–454. Springer, 2020.
Friedemann Zenke, Ben Poole, and Surya Ganguli. Continual learning through synaptic intelligence. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pp. 3987–3995. PMLR, 2017.
Richard Zhang, Phillip Isola, and Alexei A. Efros. Colorful image colorization. In European Conference on Computer Vision, pp. 649–666. Springer, 2016.
Chengxu Zhuang, Siming Yan, Aran Nayebi, Martin Schrimpf, Michael C. Frank, James J. DiCarlo, and Daniel L. K. Yamins. Unsupervised neural network models of the ventral visual stream. Proceedings of the National Academy of Sciences, 118(3), 2021.

Table 6: The running time of a single task for URSL and the baselines for CIFAR100 classification with CIFAR10 as the peripheral dataset.
Method | Co2L-j | Co2L-p | GD | DM | URSL Pretrain (only once) | URSL Reference (learning task) | URSL Learner
Training time (minutes) | 12 | 63 | 6.5 | 8.5 | 14.5 | 15.75 | 12

A RELATED WORKS
Continual Learning
Three families of approaches have been proposed to address the issue of forgetting in continual learning.
Replay-based methods reuse samples from previous tasks, either by keeping raw samples in a limited memory buffer (Rebuffi et al., 2017; Lopez-Paz & Ranzato, 2017; Aljundi et al., 2019) or by synthesizing pseudo-samples from past classes (Shin et al., 2017; Wu et al., 2018; van de Ven et al., 2020). Regularization-based methods aim to keep the network's parameters stable across tasks by penalizing deviations from the parameters that were important for previously learned tasks (Nguyen et al., 2018; Lee et al., 2017; Zenke et al., 2017; Cha et al., 2021; Rebuffi et al., 2017). Methods based on parameter isolation dedicate different parameters to each task by introducing new task-specific weights or masks (Rusu et al., 2016; Yoon et al., 2018; Wortsman et al., 2020).
Parameter-isolation methods suffer either from extensive resource usage or from capacity shortage when the number of tasks is large. Regularization-based methods are promising when the number of tasks is small; as the number of tasks increases, however, they become more prone to catastrophic forgetting and failure. In contrast, replay-based methods have shown promising results in general continual learning settings. This work can be categorized as a replay-based method.
Self-supervised Learning
Self-supervised learning methods are being explored to learn, from unlabeled data, a representation that conveys meaningful semantic or structural information. To this end, various ideas, such as distortion (Alexey et al., 2015; Gidaris et al., 2018), jigsaw puzzles (Noroozi & Favaro, 2016), colorization (Zhang et al., 2016), and generative modeling (Vincent et al., 2008), have been investigated. Meanwhile, contrastive learning has played a significant role in recent developments of self-supervised representation learning.
Contrastive learning involves learning an embedding space in which samples (e.g., crops) from the same instance (e.g., an image) are pulled together, while samples from different instances are pushed apart. Early work in this field incorporated some form of instance-level classification with contrastive learning and was successful in some cases. The results of recent methods such as SimCLR (Chen et al., 2020a), SwAV (Caron et al., 2020), and BYOL (Grill et al., 2020) are comparable to those produced by the state-of-the-art supervised methods.
Knowledge Distillation
Knowledge distillation aims to transfer knowledge from a teacher model to a student model without losing too much generalization power (Hinton et al., 2015). The idea was adapted to continual learning to alleviate catastrophic forgetting by keeping the network's responses to samples from old tasks unchanged while updating it with new training samples (Shmelkov et al., 2017; Rebuffi et al., 2017; Li & Hoiem, 2016). iCaRL (Rebuffi et al., 2017) applies a distillation loss to maintain the probability vector of the last model's outputs when learning new tasks, while UCIR (Hou et al., 2019) maximizes the cosine similarity between the embedded features of the last model and the current model. Co2L (Cha et al., 2021) proposed a novel instance-wise relation distillation loss for continual learning that maintains the relations between batch samples' features in the representation space.
Semi-supervised Learning
In practical scenarios, the amount of labeled data is limited, and training models on such limited labeled data leads to low performance. Semi-supervised learning methods therefore try to exploit the unlabeled data available alongside the labeled data to achieve better performance. There are three main categories of semi-supervised training methods: generative, consistency-regularization, and pseudo-labeling methods.
Table 7: Hyperparameter search space
Parameter | Values
ES | {100, 200}
ET1 | {200, 300, 400}
ET>1 | {100, 200}
τ | {0.1, 0.5}
λ | {0.05, 0.2, 0.4}
(ηid, ηpl) | {(-4, -2), (-2, 0), (-2, 2)}
Optimizer | {SGD + momentum, Adam}
Initial learning rate | {0.1, 0.01}

A generative method can learn implicit and transferable features of the data in order to model data distributions more accurately in supervised tasks (Springenberg, 2015; Dumoulin et al., 2016; Li et al., 2017). Consistency regularization describes a category of methods in which the model's prediction should not change significantly when a realistic perturbation is applied to the unlabeled data samples (Rasmus et al., 2015; Laine & Aila, 2016; Tarvainen & Valpola, 2017). In pseudo-labeling, a model trained on the labeled set provides pseudo-labels for a portion of the unlabeled data, producing additional training examples that can be used as labeled samples in the training set (Lee et al., 2013; Xie et al., 2020b; Pham et al., 2021). UDA (Xie et al., 2020a) and FixMatch (Sohn et al., 2020) are two examples of recent brilliant works in semi-supervised learning. UDA (Xie et al., 2020a) employs data augmentation as the perturbation for consistency training and encourages consistency between the predictions on the original and augmented unsupervised samples. FixMatch (Sohn et al., 2020) combines consistency regularization and pseudo-labeling, using the cross-entropy loss for both the supervised and unsupervised losses.
Open-set Semi-supervised Learning
Most semi-supervised learning methods assume that the labeled and unlabeled data share the same label space. Nevertheless, in the open-set semi-supervised learning setting, the unlabeled data can contain categories that are not present in the labeled data, i.e., outliers, which can adversely affect the performance of SSL algorithms.
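The consistency-plus-pseudo-labeling recipe of FixMatch described above can be sketched in a few lines. The function name `fixmatch_unsup_loss` is illustrative, and the 0.95 confidence threshold is the value used in the FixMatch paper rather than one stated in this appendix:

```python
import torch
import torch.nn.functional as F

def fixmatch_unsup_loss(logits_weak, logits_strong, threshold=0.95):
    """FixMatch-style unsupervised loss (Sohn et al., 2020), sketched.

    Pseudo-labels come from predictions on weakly augmented views; the
    cross-entropy on the strongly augmented views is kept only where
    pseudo-label confidence exceeds `threshold`.
    """
    with torch.no_grad():
        probs = F.softmax(logits_weak, dim=-1)
        conf, pseudo = probs.max(dim=-1)
        mask = (conf >= threshold).float()  # keep confident samples only
    loss = F.cross_entropy(logits_strong, pseudo, reduction="none")
    return (mask * loss).mean()
```

The confidence mask is what makes the method robust: unconfident pseudo-labels contribute nothing to the gradient.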
In UASD (Chen et al., 2020b), soft targets are produced by averaging the predictions of temporally ensembled networks, and out-of-distribution samples are detected with a simple threshold on the largest prediction score. Using a cross-modal matching strategy, Huang et al. (2021) trained a network to predict whether a data sample matches a one-hot class label; with this module, they filter out samples that have low matching scores with all possible class labels. In Saito et al. (2021), inlier confidence scores are calculated using one-vs-all (OVA) classifiers, and a soft consistency regularization loss is applied to enhance the OVA classifier's smoothness, thereby improving outlier detection.
Out-of-Distribution Detection
Previous works in semi-supervised settings utilized an out-of-distribution detector to filter relevant unlabeled data. Some methods train a K-way classifier, assign pseudo-labels to unlabeled data, and incorporate them into the training procedure. Because neural networks are overconfident even on noisy data (Hsu et al., 2020), these methods use specific techniques to alleviate this phenomenon. Lee et al. (2019) trained a classifier with a confidence calibration technique to lower its confidence on unseen data: they sampled data at random from a massive dataset such as ImageNet and applied a loss that reduces the model's confidence on those samples. Smith et al. (2021) used another technique, called DeConf, by which they calibrated probabilities using only in-distribution data, without needing out-of-distribution data. In another family of methods, called "learning from positive and unlabeled data" (Comité et al., 1999; Elkan & Noto, 2008; Garg et al., 2021), a binary classifier is trained to indicate whether each input is in-distribution or not. Garg et al.
(2021) proposed an iterative two-stage method in which they first estimate α, the mixture proportion of positive data among the unlabeled data, and then train a classifier using the estimated α. These two stages are iterated until a convergence criterion is satisfied.

Table 8: Chosen hyperparameters for URSL
Parameter | Value
ES | 200
ET1 | 400
ET>1 | 100
Batch size | 512
τ | 0.1
τ′ | 0.01
τ′′ | 0.2
λ | 0.2
γ | 0.2
(ηid, ηpl) | (-4, -2)
Optimizer | Adam
Initial learning rate | 0.01
Minimum learning rate | 1e-4

B DETAILS OF EXPERIMENTAL SETUPS
B.1 DATASET DETAILS
The CIFAR10 dataset consists of 60000 32x32 color images in 10 classes, with 6000 images per class: 50000 training images and 10000 test images. When this dataset is used as the main continual dataset, we randomly split it into 5 tasks with 2 classes per task. For this dataset, we used supervised-sample ratios of P = 0.01 and P = 0.1, equal to 50 and 500 samples per class, respectively.
The CIFAR100 dataset contains 100 classes with 500 training and 100 test samples per class. When this dataset is used as the main continual dataset, each supervised task includes the training samples of 10 classes. In Table 2 of the main paper, we used the P = 0.05 and P = 0.1 configurations for this dataset, corresponding to 25 and 50 training samples per class.
Tiny-Imagenet is a subset of the Imagenet dataset that contains 200 classes, 100000 training samples, and 10000 test samples. Before using the dataset, we downsize the input images from 64x64 to 32x32 so that all image sizes are equal. We split the dataset into 10 equally sized supervised tasks. As with CIFAR100, this dataset is used with P = 0.05 and P = 0.1 ratios, which are equivalent to 25 and 50 training samples per class.
Caltech256 is an object recognition dataset that contains 30607 real-world images from 257 categories. Image sizes differ from each other, and the minimum number of images per category is 80. We only use this dataset in Appendix F, where it increases the number of datasets in Ut in order to diversify the objects in the unlabeled samples and provide a more realistic environment.
B.2 TRAINING DETAILS
As explained in the main paper, we used two datasets for each experiment: (i) the main dataset, to construct the supervised and related unsupervised samples, and (ii) the peripheral dataset, to provide the unrelated unsupervised samples. At each time step, 9000 unlabeled samples are provided from the main dataset and 9000 from the peripheral dataset.
In our experiments, the ResNet-18 architecture is used as the encoder for our method as well as for all the baselines. In our method, starting from random initialization, the reference network is trained for ET1 = 400 epochs at time step t = 1 to converge to a good representation; for subsequent time steps, it is trained for only ET>1 = 100 epochs. The learner network, on the other hand, is trained for ES = 200 epochs at all time steps, like all the baseline methods, whose main number of epochs is 200. The mean and standard deviation of the results are obtained over 3 runs.
In Table 6, the running times required to train a single epoch of the different models are reported. These results are recorded on a GeForce RTX 3080 Ti GPU.
Table 9: After and Before benchmarks of CIFAR100 classification with the CIFAR10 dataset as peripheral
Setting | Only Supervised | After | Before
Acc.(%) | 15.9±0.2 | 19.1±0.8 | 20.0±0.1

Table 10: Only Related and Only Unrelated benchmarks of CIFAR100 classification with the CIFAR10 dataset as peripheral
Setting | Only Supervised | Only Unrelated | Only Related | OSSCL
Acc.(%) | 15.9±0.2 | 19.6±0.5 | 29.4±0.5 | 30.4±0.2

B.3 TUNING THE HYPERPARAMETERS
We created a validation set for each of the three main datasets by selecting 10% of the training samples at random, and we then performed a hyperparameter search over the space in Table 7. Table 8 shows the chosen hyperparameters, obtained either from the validation results or by adapting those of the Co2L paper. A strength of our proposed method is that the selected hyperparameters are invariant across different scenarios, and we used a single configuration for all experiments. For Co2L, DM, and GD, we used the optimal hyperparameters where the authors reported them in the original papers. In addition, Co2L-j and Co2L-p used the same set of hyperparameters as URSL, except for the new hyperparameter introduced in Co2L-j, the unsupervised loss coefficient, which was set to 1.
B.4 AUGMENTATIONS
To increase the diversity of training samples, following previous works (Cha et al., 2021; Chen et al., 2020a), we used the following augmentation techniques for all data:
1. RandomResizedCrop: the image is randomly cropped with a scale in [0.2, 1], and the cropped image is then resized to 32 × 32.
2. RandomHorizontalFlip: each image is flipped horizontally with probability p = 0.5, independently of the other samples.
3. ColorJitter: the brightness, contrast, saturation, and hue of each image are changed with probability p = 0.8, with maximum strengths [0.4, 0.4, 0.4, 0.1], respectively.
4. RandomGrayscale: images are converted to grayscale with probability p = 0.2.
B.5 TRAINING CLASSIFIER
At the end of training, we trained a linear classifier on top of the learner network's encoder for 100 epochs, using all memory data together with the labeled data of the last time step, TT ∪ M. We used WeightedRandomSampler to draw mini-batches because of the class imbalance in the labeled data.
C THE PERFORMANCE OF THE OOD DETECTION
In this section, we evaluate the performance of the OoD detection module in two scenarios: in the first, the numbers of related and unrelated data are 9,000 each; in the second, they are 4,500 each. Precision and AUROC curves for the OoD detection module in these two settings (over the time steps) are shown in Figures 2 and 3. The performance of the OoD detection module improves over time, since it sees more classes and can detect class boundaries more precisely.
Figure 2: (left) AUROC of OoD detection versus the number of seen classes of the main dataset for CIFAR100 classification with the CIFAR10 dataset as peripheral, when the numbers of related and unrelated data are 9000. (right) The precision of OoD detection at each task of CIFAR100 classification with the CIFAR10 dataset as peripheral, when the numbers of related and unrelated data are 9000.
Figure 3: (left) AUROC of OoD detection versus the number of seen classes of the main dataset for CIFAR100 classification with the CIFAR10 dataset as peripheral, when the numbers of related and unrelated data are 4500. (right) The precision of OoD detection at each task of CIFAR100 classification with the CIFAR10 dataset as peripheral, when the numbers of related and unrelated data are 4500.
D OTHER BENCHMARKS
In the "Other Benchmarks" section of the paper, we investigated different configurations to demonstrate the effectiveness and robustness of the URSL model under various conditions. All benchmarks are clarified, and their results analyzed, below.
After and Before
In these two benchmarks, we assume that there are no unrelated samples among the unlabeled data. The difference between the two settings is which classes of the main dataset appear in the unlabeled data at each time step. More specifically, in the After scenario, the unlabeled samples come only from the classes of the current time step and of subsequent time steps, whereas in the Before scenario, only unlabeled data from previous classes are presented. This experiment was designed to show that the URSL model can benefit from positive forward/backward knowledge transfer from the unsupervised samples to the supervised tasks. As shown in Table 9, in the Before scenario, visiting unlabeled data of previous classes helps to mitigate catastrophic forgetting (positive backward transfer). In contrast, in the After scenario, the model learns a decent representation space that is beneficial for learning the newly arriving classes (positive forward transfer).
Only Related and Only Unrelated
To investigate the effect of each type of unlabeled data on the model's functionality, we defined the Only Related and Only Unrelated settings. As their names suggest, in the former, the unlabeled data at each task contain only main-dataset samples, whereas in the latter, all unlabeled data come from the peripheral dataset.
In Table 10, comparing Only Unrelated with Only Supervised shows that even unrelated samples improve performance, by enriching the representation of the reference network and providing a pivot model that prevents the learner network from accumulating large changes during the continual learning process. The Only Related scenario demonstrates the effect on performance of having related samples among the unlabeled data, and comparing it with the OSSCL setting reveals the contribution of the unrelated samples. It is worth mentioning that although most of the improvement is due to incorporating related samples, access to purely related unlabeled data is not usually a realistic assumption; instead, related samples arrive within a huge stream of unlabeled data that also contains unrelated samples.

Table 11: Non-I.I.D. benchmarks of CIFAR100 classification with the CIFAR10 dataset as peripheral
Setting | Only Supervised | Non-I.I.D. (25%) | Non-I.I.D. (50%) | OSSCL
Acc.(%) | 15.9±0.2 | 24.9±0.6 | 27.4±0.7 | 30.4±0.2

Table 12: The effect of the number of related and unrelated samples on CIFAR100 classification with the CIFAR10 dataset as the peripheral dataset.
Related-Unrelated (samples) | 1000-9000 | 4500-4500 | 4500-9000 | 9000-4500 | 9000-9000
Accuracy(%) | 21.0±1.1 | 26.8±0.5 | 27.0±0.6 | 30.0±0.7 | 30.4±0.2
Therefore, we considered both the related and unrelated datasets as unlabeled samples (in the OSSCL setting) and showed that properly employing these datasets (in URSL) further improves the results compared with the Only Related case.
Non-I.I.D.
The OSSCL scenario makes an I.I.D. assumption on the related unsupervised samples available in the environment. To challenge this assumption, we introduced a new benchmark in which the related data is generated from only a portion of the supervised classes at each time step. For example, in the Non-I.I.D. (50%) experiment, the related unsupervised dataset includes only the samples from half of the supervised classes, randomly selected at each time step. As shown in Table 11, the URSL model still demonstrates good performance even with this limited access to the related unlabeled samples.
E NUMBER OF RELATED AND UNRELATED SAMPLES
In this section, we investigated the effect of the number and ratio of related and unrelated samples among the unlabeled data. In contrast to the other baselines, URSL is able to utilize unrelated unlabeled samples to boost final performance, even with an imbalanced number of related and unrelated unlabeled samples. Table 12 shows that increasing the number of unrelated samples improves the results slightly, while increasing the number of related samples provides the model with more in-distribution samples to improve its performance and combat catastrophic forgetting.
F MORE COMPLICATED ENVIRONMENTS
In this section, we examine the performance of our model in even more realistic environments by conducting experiments in scenarios where the unlabeled data comprise multiple datasets. Table 13 shows the results. Besides the datasets used in the main experiments, we also used Caltech256 (Griffin et al., 2007). In each experiment, we add 9000 randomly sampled images from each dataset to Ut.
The results suggest that our model is robust to a variety of unlabeled data and performs well in more realistic scenarios in which the model is exposed to plenty of unlabeled samples, most of which are unrelated to its target tasks.

Table 13: The results of using multiple datasets in Ut to simulate a more realistic environment.
Dataset1 | Dataset2 | Dataset3 | Dataset4 | Acc.(%)
CIFAR10 | CIFAR100 | —— | —— | 30.4±0.2
CIFAR10 | CIFAR100 | Tiny-Imagenet | —— | 30.4±0.6
CIFAR10 | CIFAR100 | Caltech256 | —— | 31.4±0.3
CIFAR10 | CIFAR100 | Tiny-Imagenet | Caltech256 | 31.6±0.7

Table 14: The effect of different architectures for the reference and learner networks on CIFAR100 classification with the CIFAR10 dataset as the peripheral dataset.
Reference (#parameters) | Learner (#parameters) | Accuracy(%)
ResNet-18 (11.1M) | ResNet-18 (11.1M) | 31.2±1.0
ResNet-34 (21.2M) | ResNet-18 (11.1M) | 31.4±0.6
ResNet-18 (11.1M) | WideResNet-40-2 (2.24M) | 30.7±0.5
ResNet-50 (23.5M) | WideResNet-28-10 (36.48M) | 31.1±0.2

G THE REFERENCE AND THE LEARNER ARCHITECTURES
The authors of Co2L (Cha et al., 2021) used ResNet-18 as their model's feature extractor. Following this design choice, we used the same architecture for both the learner and reference networks, as well as for all other models and baselines in all experiments, to ensure a fair comparison. In this section, we investigate the effect of changing the architectures of the learner and reference networks, as reported in Table 14. As expected, the model's performance increases slightly as the number of model parameters grows. Moreover, deep ResNet architectures achieved better performance than wide ResNet architectures.
It is noteworthy that although we used a batch size of 512 in all of our experiments in the other sections, the experiments in this section were performed with a batch size of 128 to meet the memory limit, in addition to providing a fair comparison.
H MEMORY BUFFER SELECTION ALGORITHM
Selecting suitable samples to store in the memory is an active area of research in continual learning (Bang et al., 2021; Tiwari et al., 2022; Isele & Cosgun, 2018). However, memory selection policies were not the focus of our research, so we used a random policy, as widely adopted in many CL works (Prabhu et al., 2020; Guo et al., 2020; Balaji et al., 2020).
It is noteworthy that the segregation of unlabeled data provides more diverse data from past classes than what exists in the memory buffer. Nevertheless, because the stored samples in the memory buffer play an important role in segregating the unsupervised samples, we conducted experiments with different selection algorithms for the memory buffer. In addition to the "Random" selection method, we defined three other selection strategies:
• Low-confidence: select the data on which the model has low confidence.
• High-confidence: select the data on which the model has high confidence.
• Rainbow (Bang et al., 2021): select from all ranges of confidence. This algorithm calculates a confidence score for each sample, sorts all scores, and then selects data so that samples from all ranges of model confidence are present.
Table 15 shows the performance of all algorithms. As can be seen, the "Random" selection algorithm outperforms both the "High-confidence" and "Low-confidence" strategies by a good margin, while "Rainbow" achieves results similar to "Random".
Table 15: The effect of different data-selection algorithms for the memory buffer on CIFAR100 classification with the CIFAR10 dataset as the peripheral dataset.
Algorithm | Low-confidence | High-confidence | Rainbow | Random
Accuracy(%) | 67.9±0.9% | 69.7±1.0% | 72.5±0.6% | 72.8±0.9%

Table 16: Comparison of URSL with URSL with full pretraining
Method | URSL | URSL with Pretrain
Acc.(%) | 30.4±0.2 | 31.3±0.7

I FURTHER EXPERIMENTS
I.1 PRETRAINING THE REFERENCE NETWORK
The reference network is expected to gradually absorb unsupervised knowledge from the environment. In this section, we designed an experiment to show that the reference network succeeds in continually learning from the unsupervised samples. In this experiment, the reference network is first pretrained on all unsupervised samples before learning of the first supervised task begins. The reference network is then frozen, and its parameters are kept fixed throughout the entire learning procedure. The other training details and the learning mechanism of the learner network are the same as in the original URSL model.
Table 16 shows that pretraining only slightly improves the URSL results. This suggests that the unsupervised samples available in the environment are sufficient for the reference network to learn a proper representation, even when the data is observed continually.
I.2 SELF-SUPERVISED METHOD
In all experiments, we trained the reference network with the NT-Xent loss (Sohn, 2016), a popular and straightforward contrastive loss that is widely used in the self-supervised learning literature (Chen et al., 2020a) and has achieved remarkable performance. However, our model and algorithm perform well regardless of the self-supervised loss used to train the reference network. To demonstrate this, we compared the results of our model with experiments in which the reference network is trained with BYOL (Grill et al., 2020), a self-supervised algorithm different from NT-Xent. Tables 17 and 18 report the results for two different scenarios.
The advantage of BYOL over SimCLR is that it does not need negative data during training.
Indeed, in our experiments, the datasets have less diverse samples than huge datasets such as ImageNet; we therefore expect BYOL to perform better than SimCLR, and the empirical results confirm this point. However, BYOL needs more time to obtain comparable results.

Table 17: Comparison between NT-Xent and BYOL performance on CIFAR10 classification with the Tiny-Imagenet dataset as peripheral
Main dataset | Peripheral dataset | SSL method | Accuracy (%) | Time cost (mins)
CIFAR10 | Tiny-Imagenet | SimCLR | 72.8±0.6% | 136
CIFAR10 | Tiny-Imagenet | BYOL | 73.1±1.0% | 258

Table 18: Comparison between NT-Xent and BYOL performance on CIFAR100 classification with the CIFAR10 dataset as peripheral
Main dataset | Peripheral dataset | SSL method | Accuracy (%) | Time cost (mins)
CIFAR100 | CIFAR10 | SimCLR | 30.4±0.2% | 219
CIFAR100 | CIFAR10 | BYOL | 32.3±0.8% | 445

J LIMITATIONS
Our method has several limitations. First, we have to keep a minimum number of main-dataset samples from each class in the memory buffer in order to create more precise prototypes. Second, because the training phases of the teacher and student networks cannot be parallelized, the time complexity of our method is higher than that of the other methods and baselines. Furthermore, our model performs worse if the number of samples per class becomes severely imbalanced.
There are other limitations related to the open-set semi-supervised continual learning scenario itself. Although this configuration is more realistic than those of previous works, there may still be situations in which the assumptions of OSSCL do not hold. For example, an agent may have limited access to both related and unrelated unlabeled samples in the environment. This will lead to poor performance of our model, since it is designed to perform in a situation where plenty of
+K +CODE AND DATA AVAILABILITY +The source code to reproduce the results of this paper is attached to this document. In this repository, +there exists a README file containing instructions and configuration details. Moreover, the licenses +of the freely available datasets and used source codes are also available in the README file. +22 + diff --git a/UNE3T4oBgHgl3EQfagoR/content/tmp_files/load_file.txt b/UNE3T4oBgHgl3EQfagoR/content/tmp_files/load_file.txt new file mode 100644 index 0000000000000000000000000000000000000000..b8f510f422476c03c77161a14a1061567891a2b9 --- /dev/null +++ b/UNE3T4oBgHgl3EQfagoR/content/tmp_files/load_file.txt @@ -0,0 +1,1290 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf,len=1289 +page_content='Under Review A DISTINCT UNSUPERVISED REFERENCE MODEL FROM THE ENVIRONMENT HELPS CONTINUAL LEARNING Seyyed AmirHossein Ameli Kalkhoran1 Mohammadamin Banayeeanzade2 Mahdi Samiei1 Mahdieh Soleymani Baghshah1 1Department of Computer Science,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' Sharif University of Technology 2Department of Electrical and Computer Engineering,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' USC Viterbi School of Engineering ABSTRACT The existing continual learning methods are mainly focused on fully-supervised scenarios and are still not able to take advantage of unlabeled data available in the environment.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' Some recent works tried to investigate semi-supervised continual learning (SSCL) settings in which the unlabeled data are available, but it is only from the same distribution as the labeled data.' 
This assumption is still not general enough for real-world applications and restricts the utilization of unsupervised data. In this work, we introduce Open-Set Semi-Supervised Continual Learning (OSSCL), a more realistic semi-supervised continual learning setting in which out-of-distribution (OoD) unlabeled samples in the environment are assumed to coexist with the in-distribution ones. Under this configuration, we present a model with two distinct parts: (i) the reference network captures general-purpose and task-agnostic knowledge in the environment by using a broad spectrum of unlabeled samples; (ii) the learner network is designed to learn task-specific representations by exploiting supervised samples. The reference model both provides a pivotal representation space and segregates unlabeled data to exploit them more efficiently. By performing a diverse range of experiments, we show the superior performance of our model compared with other competitors and prove the effectiveness of each component of the proposed model.

1 INTRODUCTION

In a real-world continual learning (CL) problem, the agent has to learn from a non-i.i.d. stream of samples with serious restrictions on storing data. In this case, the agent is prone to catastrophic forgetting during training (French, 1999). The existing CL methods are mainly focused on supervised scenarios and can be categorized into three main approaches (Parisi et al., 2019): (i) Replay-based methods reuse samples from previous tasks, either by keeping raw samples in a limited memory buffer (Rebuffi et al., 2017; Lopez-Paz & Ranzato, 2017; Aljundi et al., 2019) or by generating pseudo-samples from previous classes (Shin et al., 2017; Wu et al., 2018; van de Ven et al., 2020). (ii) Regularization-based methods aim to maintain the stability of the network across tasks by penalizing deviation from the previously learned representations or parameters (Nguyen et al., 2018; Cha et al., 2021; Rebuffi et al., 2017; Li & Hoiem, 2016).
(iii) Parameter-isolation methods dedicate distinct parameters to each task by introducing new task-specific weights or masks (Rusu et al., 2016; Yoon et al., 2018; Wortsman et al., 2020).

Humans, as intelligent agents, are constantly in contact with vast amounts of unsupervised data endlessly streamed in the environment, which can be used to facilitate concept learning in the brain (Zhuang et al., 2021; Bi & Poo, 1998; Hinton & Sejnowski, 1999). With this in mind, an important but less explored issue in many practical CL applications is how to effectively utilize a vast stream of unlabeled data along with limited labeled samples. Recently, efforts have been made in this direction, leading to the investigation of three different configurations. Wang et al. (2021) introduced a very restricted scenario for semi-supervised continual learning in which the unsupervised data come only from the classes being learned at the current time step.

arXiv:2301.04506v1 [cs.LG] 11 Jan 2023

On the other hand, Lee et al. (2019) introduced a configuration that is "more similar to self-taught learning rather than semi-supervised learning". In fact, they introduced a setting in which the model is exposed to plenty of labeled samples, which is a necessary assumption for their model to achieve good performance; in addition, their model has access to a large corpus of unsupervised data in the environment that typically does not include samples related to the current CL problem. Adopting this idea, Smith et al. (2021) proposed a more realistic setting by assuming a limitation on the number of supervised samples available for training. In addition, they assumed the existence of a shared hidden hierarchy between the supervised and unsupervised samples, which is not necessarily true for practical applications. In this work, we first propose a general scenario that unifies the mentioned configurations into a more realistic setting called Open-Set Semi-Supervised Continual Learning (OSSCL).
In this scenario, the agent can observe unsupervised data from two sources: (i) related unsupervised data, which are sampled from the same distribution as the supervised dataset, and (ii) unrelated unsupervised data, which have a distribution different from the classes of the current CL problem. The in-distribution unsupervised samples can come from the classes that are being solved, have been solved at previous time steps, or will be solved in the future.

Previous CL works in which unlabeled data were available alongside labeled data mainly utilized the unlabeled data by creating pseudo-labels for them with a model trained on the labeled samples (Lee et al., 2019; Smith et al., 2021; Wang et al., 2021). The unlabeled data with their pseudo-labels were then used directly in the training procedure. However, because labeled data are scarce in realistic scenarios, the pseudo-labeling process is inaccurate and creates highly noisy labels. Therefore, we present a novel method for learning in the OSSCL setting that alleviates this problem and utilizes unlabeled data effectively. Our proposed model, which consists of an Unsupervised Reference network and a Supervised Learner network (URSL), can effectively absorb information by leveraging contrastive learning techniques combined with knowledge distillation methods in the representation space. While the reference network is mainly responsible for learning general knowledge from unlabeled data, the learner network is expected to capture task-specific information from a few supervised samples using a contrastive loss function. In addition, the learner retains a close connection to the reference network to utilize the essential related information provided by unsupervised samples.
At the same time, the representation space learned in the reference network can be used to build an out-of-distribution detector that segregates unlabeled data so that the filtered samples are employed more properly in the training procedure of the learner model. In short, our main contributions are as follows:

We propose OSSCL as a realistic semi-supervised continual learning scenario that an intelligent agent encounters in practical applications (Section 2).

We propose a novel dual-structured model that is suitable for learning in the mentioned scenario and can effectively exploit unlabeled samples (Section 3).

We show the superiority of our method on several benchmarks and different combinations of unlabeled samples. Our model achieves state-of-the-art accuracy with a notable gap compared to the baselines and previous methods (Section 4).

2 PRELIMINARIES

In this work, we consider the training dataset to consist of two parts. The supervised dataset $D_{sup}$ is a sequence of $T$ tasks $\{\mathcal{T}_1, \mathcal{T}_2, \ldots, \mathcal{T}_T\}$. At time step $t$, the model only has access to $\mathcal{T}_t = \{(x_i, y_i)\}_{i=1}^{N_t}$, where $x_i \overset{i.i.d.}{\sim} P(X \mid y_i)$ denotes a training sample and $y_i$ represents its corresponding label. We consider $K$ separate classes at each task and follow the common class-incremental setting, as it is shown to be the most challenging scenario for evaluation. Given a training loss $\ell$ and the network parameters $\theta$, the training objective at time step $t$ is defined as $\theta^* = \arg\min_\theta \frac{1}{N_t} \sum_{i=1}^{N_t} \ell(x_i, y_i, \theta)$. On the other hand, the unsupervised dataset $D_{unsup}$ is a sequence of $T$ sets $\{U_1, U_2, \ldots, U_T\}$ containing only unlabeled data points. We assume that $U_t$ represents the unsupervised data available in the environment at time step $t$, which is accessible by the model along with $\mathcal{T}_t$. Based on the OSSCL setting, which is a general framework, we assume that the unsupervised dataset is composed of two parts: (i) The related part, also called the in-distribution set, consists of unsupervised samples generated from the same distribution as $D_{sup}$. In order to maintain generality, we assume that this set contains not only unsupervised samples related to the current supervised task but also samples from the other tasks of the CL problem that have either been observed at previous time steps or will be observed in the future.

Figure 1: A schematic of the method and configuration. The unsupervised reference ($UR_t$), supervised learner ($SL_t$), labeled data ($\mathcal{T}_t$), and related and unrelated unlabeled data ($U_t$) at time step $t$ are shown on the left, while the OoD segregation module is shown on the right of the figure.
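As a concrete illustration of the setting described in Section 2 (not taken from the paper), the snippet below builds a toy class-incremental task stream with $K$ classes per task and, for each step, an unlabeled pool mixing in-distribution and out-of-distribution samples. The helper name `make_osscl_stream`, the array shapes, and the pool sizes are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_osscl_stream(X, y, X_ood, T, K, n_unsup=200):
    """Split (X, y) into T class-incremental tasks of K classes each and,
    for every step t, build an unlabeled pool U_t mixing in-distribution
    samples (drawn from any task's classes, past or future) with
    out-of-distribution ones, as in the OSSCL setting."""
    tasks, unsup_pools = [], []
    for t in range(T):
        classes = np.arange(t * K, (t + 1) * K)
        mask = np.isin(y, classes)
        tasks.append((X[mask], y[mask]))
        # Related unlabeled data may come from past, current, or future classes.
        idx_id = rng.choice(len(X), size=n_unsup // 2, replace=False)
        idx_ood = rng.choice(len(X_ood), size=n_unsup // 2, replace=False)
        unsup_pools.append(np.concatenate([X[idx_id], X_ood[idx_ood]]))
    return tasks, unsup_pools
```

At step $t$ only `tasks[t]` and `unsup_pools[t]` would be visible to the model; the labels of the pool are never exposed.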
(ii) The unrelated data points, also called the out-of-distribution samples, are a set of unsupervised data sampled from a distribution $Q$, which is not necessarily the distribution from which the supervised samples have been generated. In the next section, we propose a novel method to operate in this configuration, and in Section 4, a variety of experiments are provided to show the effectiveness of our model.

3 METHOD

Learning continually from $D_{sup}$ has been widely explored by the community. Meanwhile, unlike deep models, humans are less hungry for supervised data. Although they observe a large volume of data during their lifetime, only a small portion of this data is labeled. It is believed that the considerable human ability to learn from a few instances is due to the rich representations learned from large volumes of unsupervised observations (Zhuang et al., 2021; Bi & Poo, 1998; Hinton & Sejnowski, 1999). Here, we aim to explore the benefits of using $D_{unsup}$ and its impact on empowering the continual learner. Specifically, we show how $D_{unsup}$ promotes representation learning in addition to providing positive forward/backward transfer in the continual learning process. We propose our URSL model, which consists of two parts: (1) the general task-agnostic reference network, which is responsible for absorbing information from unsupervised data in the environment, and (2) the learner network, which is designed to capture knowledge from a few supervised samples while also being guided by the reference network. The notations $UR_t$ and $SL_t$ denote the reference and learner network instances at time step $t$, respectively (refer to Figure 1 and Algorithm 1 for an overview).
We employ a contrastive representation learning approach for training both the reference and the learner networks. This approach has proven to be a proper solution for supervised CL problems. Indeed, some previous works in CL claim that classifier heads placed on top of the representation network are a serious source of catastrophic forgetting (Ramasesh et al., 2021; Banayeeanzade et al., 2021; Cha et al., 2021); therefore, Co2L (Cha et al., 2021) presented a supervised contrastive loss to avoid this problem.
We utilize contrastive representation learning as a unified approach for training both the reference and the learner networks, which allows information to flow easily between these networks. Combined with knowledge distillation techniques applied in the representation space, this approach provides a convenient tool to exploit the most out of unsupervised samples.

[Figure 1 diagram: the OoD detection module separates the unsupervised samples $U_t$ into out-of-distribution samples, in-distribution samples, and pseudo-labeled samples assigned to class prototypes; the environment feeds the supervised tasks to the learner networks $SL_1, \ldots, SL_t$, which are guided by the reference networks $UR_1, \ldots, UR_t$.]

Our model is also equipped with an exemplar memory $M$ to randomly store a portion of supervised samples from previous tasks (Lopez-Paz & Ranzato, 2017; Rebuffi et al., 2017). The stored samples contribute to the training of the learner network. After the final time step, these samples are also used to train a classifier head on top of the representation space of the learner network.
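A minimal sketch of the exemplar memory $M$ follows. The paper only states that a portion of supervised samples is stored at random, so the fixed budget and the class-balanced random selection used here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class ExemplarMemory:
    """Fixed-budget buffer of labeled samples from past tasks.
    Samples are kept uniformly at random per class (an assumption; the
    paper only specifies random storage of a portion of the samples)."""
    def __init__(self, budget, seed=0):
        self.budget = budget
        self.X, self.y = [], []
        self.rng = np.random.default_rng(seed)

    def update(self, X_t, y_t):
        """Merge the current task's labeled data, then rebalance the
        budget uniformly across all classes seen so far."""
        self.X.append(X_t); self.y.append(y_t)
        X = np.concatenate(self.X); y = np.concatenate(self.y)
        classes = np.unique(y)
        per_class = self.budget // len(classes)
        keep = []
        for c in classes:
            idx = np.flatnonzero(y == c)
            keep.append(self.rng.choice(idx, size=min(per_class, len(idx)),
                                        replace=False))
        keep = np.concatenate(keep)
        self.X, self.y = [X[keep]], [y[keep]]

    def get(self):
        return self.X[0], self.y[0]
```

After the final task, `get()` would supply the labeled exemplars used both during training of the learner and for fitting the final classifier head.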
It is noteworthy that our model does not store unlabeled data in its memory, since such data are always found in abundance in the environment; this keeps our model from needing a large memory.

3.1 REFERENCE NETWORK

The unsupervised reference network $UR_t : X \rightarrow \mathbb{R}^d$ is a general-purpose feature extractor responsible for encoding all kinds of unsupervised information available in the environment. The network is composed of an encoder $f$ and a projector $g$, which embed an input $x$ in the representation space as $z = (f \circ g)_{\theta_t}(x)$, where $z$ lies on the unit $d$-dimensional Euclidean sphere and $\theta_t$ represents the model parameters at time step $t$. Considering a batch $B \subseteq U_t$ of size $N$, the SimCLR (Chen et al., 2020a) loss function used for training the network can be written as:

$$h_{i,j} = -\log \frac{\exp(\tilde{z}_i \cdot \tilde{z}_j / \tau)}{\sum_{k=1}^{2N} \mathbb{1}[k \neq i] \exp(\tilde{z}_i \cdot \tilde{z}_k / \tau)}, \qquad L_{unsup}(\theta_t; \tau) = \frac{1}{2N} \sum_{k=1}^{N} \left(h_{2k-1,2k} + h_{2k,2k-1}\right), \tag{1}$$

where $\tilde{z}_{2i}$ and $\tilde{z}_{2i-1}$ are the representations of two different augmentations of the same image $x_i \in B$ and $\tau$ is the temperature hyperparameter.

3.2 SEGREGATING UNSUPERVISED SAMPLES

In this section, we show how to segregate unlabeled samples by employing the reference network and the supervised samples. Although unsupervised samples can play an important role both in learning the representation space and in controlling changes in this space through time, naive approaches to incorporating these samples into the training of the learner network can lead to inferior performance because of the unrelated samples among the unlabeled ones. Therefore, we first explain the OoD detection method, which is designed to segregate unlabeled data and incorporate them more properly in the continual learning process of the learner network.

To efficiently segregate unsupervised data, we employ a prototypical OoD detection method (Park et al., 2021) in the representation space of the reference network using the samples in $\mathcal{T}_t \cup M$. The representation space of the reference network is chosen for OoD detection since it provides better sample discrimination than any representation space obtained by training over a small number of labeled samples. Additionally, this approach eliminates the need to train another network specialized in OoD detection, in contrast to previous works (Chen et al., 2020b; Huang et al., 2021; Saito et al., 2021).

At time step $t$, our OoD method creates $P^t = \{P^t_1, P^t_2, \ldots, P^t_{K \times t}\}$, a set of $K \times t$ prototypes representing the centroids of the classes observed so far, extracted using the labeled data available in $\mathcal{T}_t \cup M$:

$$P^t_i = \psi\!\left(\frac{1}{|A| \sum_{(x_j, y_j) \in \mathcal{T}_t \cup M} \mathbb{1}[y_j = i]} \sum_{(x_j, y_j) \in \mathcal{T}_t \cup M} \mathbb{1}[y_j = i] \sum_{a \in A} (f \circ g)_{\theta_t}(a(x_j))\right), \tag{2}$$

where $A$ is a set of augmentations meant to form different views of a real image, and $\psi$ is the operator that projects vectors onto the unit $d$-dimensional sphere. We also define the score operator $S(P^t, z) = \max_i c(P^t_i, z)$, where $c$ denotes the cosine similarity measure. This operator takes the prototypes and a sample in the representation space and calculates the score of its most probable assignment. With this in mind, we consider $S^t_l$ as the scores of the labeled data obtained by passing $\mathcal{T}_t \cup M$ through the $S(P^t, \cdot)$ operator, i.e., $S^t_l = \{S(P^t, (f \circ g)_{\theta_t}(x)) \mid x \in \mathcal{T}_t \cup M\}$.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' By considering ηid as a hyperparameter, we define a threshold τid = mean (St l ) + ηidvar (St l ) on the scores of unlabeled data to specify in-distribution samples as ˆUt = {x|x ∈ Ut, S(Pt, x) > τid}.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' Furthermore, we assign pseudo-labels to the unsupervised samples on which we have superior confidence by defining a higher threshold τpl = mean (St l ) + ηplvar (St l ), with the hyperparameter ηpl, and prepare 4 Under Review pseudo-labeled samples as ˆTt = {(x, ˆy)|x ∈ Ut, S(Pt, x) > τpl, ˆy = arg maxi c(Pt i , x)}.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' In other words, an unsupervised sample with a similarity value higher than τpl to a class prototype is pseudo- labeled to that class.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' However, to reduce pseudo-labeling noise, we do not utilize pseudo-labels directly during the training procedure.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' Those pseudo-labels are used to identify whether this unlabeled data is from past classes or not.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' Samples of ˆTt are mainly used to compensate for the small number of supervised samples in the memory, as further explained in the next section.' 
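The segregation step above can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's implementation: it assumes features have already been extracted by the reference network, uses a single view per image (so the $|A|$ averaging of Eq. 2 collapses to a class mean), and all function names are ours.

```python
import numpy as np

def normalize(v):
    # psi: project vectors onto the unit sphere
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def prototypes(feats, labels, n_classes):
    # Eq. 2 with a single view per image: class-wise mean of
    # reference-network features, projected onto the unit sphere
    return normalize(np.stack([feats[labels == i].mean(axis=0)
                               for i in range(n_classes)]))

def scores(P, feats):
    # S(P, z) = max_i cos(P_i, z)
    return (normalize(feats) @ P.T).max(axis=1)

def segregate(P, lab_feats, unlab_feats, eta_id=0.0, eta_pl=1.0):
    s_l = scores(P, lab_feats)                      # S_l^t
    tau_id = s_l.mean() + eta_id * s_l.var()        # in-distribution threshold
    tau_pl = s_l.mean() + eta_pl * s_l.var()        # pseudo-labeling threshold
    s_u = scores(P, unlab_feats)
    in_dist = s_u > tau_id                          # membership in U_hat_t
    sims = normalize(unlab_feats) @ P.T
    # label = argmax_i cos(P_i, x) where confident, -1 otherwise (T_hat_t)
    pseudo = np.where(s_u > tau_pl, sims.argmax(axis=1), -1)
    return in_dist, pseudo
```

A sample aligned with a class prototype passes both thresholds and receives that class as its pseudo-label; a sample far from every prototype is flagged as out-of-distribution.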
We provide a detailed investigation of the performance of the OoD module in Appendix C.

3.3 LEARNER NETWORK

Similar to the reference network, the learner network $SL_t : X \to \mathbb{R}^d$ is a feature extractor of the form $z = (f \circ g)_{\phi_t}(x)$, where $\phi_t$ denotes the model parameters at time step $t$. The learner network is trained using three mechanisms:

Supervised Training: Following Co2L (Cha et al., 2021), we use an asymmetric supervised version of the contrastive loss function to train the learner network. Considering a supervised batch $B = \{(x_i, y_i)\}_{i=1}^{N}$ sampled from $T_t \cup M \cup \hat{T}_t$, and applying an augmentation policy to form two different views of the real samples, we can write the supervised contrastive loss as follows:
$$\mathcal{L}_{sup}(\phi_t; \tau) = \frac{1}{N} \sum_{i=1}^{N} \frac{-1[y_i \in O_t]}{|\zeta_i|} \sum_{j \in \zeta_i} \log \frac{\exp(\tilde{z}_i \cdot \tilde{z}_j/\tau)}{\sum_{k=1}^{N} 1[k \neq i] \exp(\tilde{z}_i \cdot \tilde{z}_k/\tau)}, \quad (3)$$
where $O_t$ is the set of new classes of the current time step $t$, and $\zeta_i$ are the other samples of the current batch with the same label $y_i$. The existence of $\hat{T}_t$ is crucial for learning a proper representation, since only a small amount of labeled data is available during continual learning. In fact, Co2L intends to prevent overfitting to the small number of past-task samples stored in the memory by proposing the asymmetric supervised contrastive loss, which utilizes samples from the memory only as negative samples (Cha et al., 2021). However, when the labeled data are limited, even employing the past samples in $M$ only as negative samples may still cause overfitting. Therefore, we enrich $M$ with $\hat{T}_t$ to diversify the samples from previous classes.

Knowledge Transfer Through Time: The loss function in Eq. 3 allows the model to discriminate between new and previous classes.
However, it is not sufficient to maintain the discrimination power of the learner network among previous tasks. Therefore, to avoid catastrophic forgetting, at each time step $t$ we use an instance-wise relation distillation (IRD) loss to transfer knowledge from the previous time step to the current model (Cha et al., 2021). This self-distillation technique, which is also compatible with the contrastive representation learning approach, retains the old knowledge by maintaining the samples' similarity structure in the representation space of the learner network. To this end, we first sample a batch $B$ from $T_t \cup M \cup \hat{T}_t$, augment each sample $x_i$ twice to create $\tilde{x}_{2i-1}, \tilde{x}_{2i}$, and then calculate the instance-wise similarity vector as:
$$p(\tilde{x}_i; \phi, \tau) = [p_{i,1}, \ldots, p_{i,i-1}, p_{i,i+1}, \ldots, p_{i,2N}], \quad \text{where } p_{i,j} = \frac{\exp(\tilde{z}_i \cdot \tilde{z}_j/\tau)}{\sum_{k=1}^{2N} 1[k \neq i] \exp(\tilde{z}_i \cdot \tilde{z}_k/\tau)}. \quad (4)$$
By computing these probabilities for both $SL_t$ and $SL_{t-1}$, we can write the time distillation loss as:
$$\mathcal{L}_{TD}(\phi_t; \phi_{t-1}, \tau', \tau'') = \sum_{i=1}^{2N} -p(\tilde{x}_i; \phi_{t-1}, \tau') \cdot \log p(\tilde{x}_i; \phi_t, \tau''), \quad (5)$$
where $\tau'$ and $\tau''$ represent the distillation-specific temperatures for the previous model and the current model, respectively.
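The IRD computation of Eqs. 4–5 can be sketched as follows. This is an illustrative numpy version under our own naming, assuming embeddings can be L2-normalized; the same routine serves the reference-network distillation of the next paragraph by swapping in the reference network as teacher.

```python
import numpy as np

def sim_matrix(z, tau):
    # p_{i,j} of Eq. 4: row-wise softmax over pairwise similarities,
    # with the diagonal excluded (the 1[k != i] indicator)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    logits = (z @ z.T) / tau
    np.fill_diagonal(logits, -np.inf)            # exclude k == i
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def ird_loss(z_teacher, z_student, tau_prime, tau_dprime):
    # Eq. 5: cross-entropy between the teacher's and the student's
    # instance-wise similarity distributions
    p_t = sim_matrix(z_teacher, tau_prime)
    p_s = sim_matrix(z_student, tau_dprime)
    mask = ~np.eye(len(p_s), dtype=bool)         # off-diagonal entries only
    return -(p_t[mask] * np.log(p_s[mask] + 1e-12)).sum()
```

When teacher and student embeddings coincide, the loss reduces to the total entropy of the similarity distributions, which is the minimum the student can achieve for that teacher.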
Knowledge Transfer from Reference: The reference network encounters numerous unsupervised samples throughout its training and is expected to learn a rich representation space using the objective introduced in Section 3.1. This representation is used as guidance for the learner network, and the knowledge can be transferred to the learner network using an IRD loss similar to Eq. 5:
$$\mathcal{L}_{KD}(\phi_t; \theta_t, \tau', \tau'') = \sum_{i=1}^{2N} -p(\tilde{x}_i; \theta_t, \tau') \cdot \log p(\tilde{x}_i; \phi_t, \tau''). \quad (6)$$
This distillation is applied to the learner network based on the samples in $T_t \cup M \cup \hat{U}_t$.
It is noteworthy that this distillation, rather than using all of the unsupervised samples in $U_t$, uses only the unsupervised samples in $\hat{U}_t$, which appear to be related to the training of the learner network.

3.4 THE URSL ALGORITHM

In summary, the model receives two sets of samples at each time step: $T_t$ and $U_t$. The reference network is trained on $U_t$ using the self-supervised loss function introduced in Eq. 1. Then, the OoD detection and pseudo-labeling techniques introduced in Section 3.2 are used to segregate the unsupervised samples in $U_t$. Finally, the learner network is trained based on the weighted aggregation of the three loss functions introduced in Section 3.3, with $\gamma$ and $\lambda$ as hyperparameters:
$$\mathcal{L}_s(\phi_t) = \mathcal{L}_{sup}(\phi_t; \tau) + \gamma \mathcal{L}_{TD}(\phi_t; \phi_{t-1}, \tau', \tau'') + \lambda \mathcal{L}_{KD}(\phi_t; \theta_t, \tau', \tau''). \quad (7)$$

Algorithm 1 URSL: Unsupervised Reference and Supervised Learner
Require: A supervised dataset $D_{sup} = \{T_t\}_{t=1}^{T}$ and an unsupervised dataset $D_{unsup} = \{U_t\}_{t=1}^{T}$
1: Initialize $UR_0$ and $SL_0$ with random parameters $\theta_0$ and $\phi_0$
2: for $t = 1, \ldots, T$ do
3:   Initialize $\theta_t \leftarrow \theta_{t-1}$ and $\phi_t \leftarrow \phi_{t-1}$
4:   Update $\theta_t$ based on $U_t$ to minimize $\mathcal{L}_{unsup}(\theta_t; \tau)$ (Eq. 1)
5:   Extract $P^t$ using $T_t \cup M$ (Eq. 2)
6:   Compute $S_l^t$ from $T_t \cup M$ (Section 3.2)
7:   Compute $\tau_{id} \leftarrow \mathrm{mean}(S_l^t) + \eta_{id}\,\mathrm{var}(S_l^t)$ and $\tau_{pl} \leftarrow \mathrm{mean}(S_l^t) + \eta_{pl}\,\mathrm{var}(S_l^t)$
8:   Prepare $\hat{T}_t$ and $\hat{U}_t$ based on $\tau_{id}$, $\tau_{pl}$, and the scores of $U_t$ (Section 3.2)
9:   while not done do
10:    Sample a batch $B$ from $T_t \cup M \cup \hat{T}_t$
11:    Compute $\mathcal{L}_s \leftarrow \mathcal{L}_{sup}(\phi_t; \tau)$ based on $B$ (Eq. 3)
12:    if $t > 1$ then
13:      Update $\mathcal{L}_s \leftarrow \mathcal{L}_s + \gamma \mathcal{L}_{TD}(\phi_t; \phi_{t-1}, \tau', \tau'')$ based on $B$ (Eq. 5)
14:    Update $\mathcal{L}_s \leftarrow \mathcal{L}_s + \lambda \mathcal{L}_{KD}(\phi_t; \theta_t, \tau', \tau'')$ based on a batch from $T_t \cup M \cup \hat{U}_t$ (Eq. 6)
15:    Update $\phi_t \leftarrow \phi_t - \alpha \nabla_{\phi} \mathcal{L}_s$
16:   Update $M$ such that the number of samples for each class is the same
17: Train the classifier head using $T_T \cup M$

4 EXPERIMENTS

Benchmark Scenario: To demonstrate the effectiveness of our method, we performed several experiments. We use two datasets for each experiment: the main and the peripheral. A small portion of the main dataset, determined by $P$, is selected as supervised data; the rest is considered related unsupervised data; and all samples of the peripheral dataset are considered (probably) unrelated unlabeled data. At each time step, 9000 examples from each unsupervised dataset are randomly sampled, shuffled together, and fed into the model as unsupervised data. In Appendix F, we provide the results of experiments in which the number of datasets inside $U_t$ is greater than two and the environment is even more realistic. The hyperparameters of our model do not depend on the experiment configuration, and a general and consistent solution for all conditions is provided.
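The per-step objective assembled in lines 11–14 of Algorithm 1 is just the weighted sum of Eq. 7, with the time-distillation term absent at the first step. A minimal sketch, assuming the three loss values have already been computed and using a hypothetical helper name:

```python
def urs_loss(l_sup, l_td, l_kd, gamma=1.0, lam=1.0, first_step=False):
    # Eq. 7: L_s = L_sup + gamma * L_TD + lambda * L_KD;
    # L_TD only exists from the second time step on (t > 1),
    # since there is no previous learner to distill from at t = 1
    loss = l_sup
    if not first_step:
        loss += gamma * l_td
    loss += lam * l_kd
    return loss
```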
We conducted a wide range of experiments to demonstrate the model's robustness in various scenarios. In our experiments, we used the CIFAR10, CIFAR100 (Krizhevsky et al., 2009), and Tiny-ImageNet (Le & Yang, 2015) datasets as the main or peripheral datasets, which are commonly used in the open-set semi-supervised learning literature (Chen et al., 2020b; Huang et al., 2021; Yu et al., 2020); moreover, the settings of our

Table 1: Accuracy of different models on the CIFAR10 dataset.

Setting             | Unsupervised Dataset | Co2L     | Co2L-j   | Co2L-p   | GD       | DM       | URSL
P = 0.01, |M| = 50  | CIFAR100             | 26.5±1.6 | 36.0±3.2 | 46.6±0.3 | 24.0±1.3 | 30.1±0.7 | 58.2±0.8
P = 0.01, |M| = 50  | Tiny-ImageNet        | 26.5±1.6 | 33.0±1.5 | 42.7±0.2 | 24.3±1.4 | 29.7±6.1 | 51.0±11.9
P = 0.1, |M| = 200  | CIFAR100             | 58.3±1.0 | 52.7±1.6 | 62.4±0.1 | 48.1±1.3 | 59.6±4.2 | 72.8±0.9
P = 0.1, |M| = 200  | Tiny-ImageNet        | 58.3±1.0 | 42.2±2.1 | 61.3±0.3 | 47.3±1.4 | 42.1±9.4 | 72.8±0.6

Setting             | Unsupervised Dataset | Co2L     | GEM      | iCaRL
P = 1, |M| = 200    | None                 | 69.5±0.6 | 29.2±0.5 | 49.9±1.7

Table 2: Accuracy of different models on the CIFAR100 dataset.

Setting             | Unsupervised Dataset | Co2L     | Co2L-j   | Co2L-p   | GD       | DM       | URSL
P = 0.05, |M| = 500 | CIFAR10              | 15.9±0.2 | 20.4±0.4 | 21.0±0.2 | 11.8±0.9 | 24.2±0.9 | 30.4±0.2
P = 0.05, |M| = 500 | Tiny-ImageNet        | 15.9±0.2 | 16.9±0.3 | 21.5±0.2 | 11.5±0.9 | 28.1±1.2 | 30.5±0.5
P = 0.1             | CIFAR10              | 25.1±0.1 | 26.9±0.4 | 28.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content='3±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content='4 16.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content='7±1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content='1 33.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content='6±1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content='0 37.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content='5±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content='4 37.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content='5±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content='4 37.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content='5±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content='4 |M| = 1000 Tiny-Imagenet 25.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content='1±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content='1 28.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content='4±1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content='4 28.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content='9±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content='3 15.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content='9±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content='2 38.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content='5±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content='7 38.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content='5±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content='7 38.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content='5±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content='7 37.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content='2±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content='3 Co2L GEM iCaRL P = 1 None 35.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content='1±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content='3 22.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content='4±4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content='5 34.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content='4±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content='8 |M| = 1000 experiments are known as the "cross dataset" setting in the open-set semi-supervised literature (Chen et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=', 2020b).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' We have utilized ResNet-18 architecture as the backbone of both networks with a two-layer MLP on its head as the projector.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' The input images for the model are 32 x 32 pixels in size.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' Additionally, we use the notation |M| to show the size of the supervised memory introduced in Section 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' Further experimental setups and details are provided in Appendix B.' 
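The supervised memory M referenced above is defined in Section 3 of the paper (not reproduced in this excerpt). Purely as a hedged illustration of the idea of a fixed-size labeled replay memory, a minimal class-balanced buffer might look like the sketch below; the names `ReplayMemory`, `add`, and `sample` are ours, not the paper's:

```python
import random
from collections import defaultdict

class ReplayMemory:
    """Fixed-capacity labeled replay buffer (illustrative sketch only,
    not the paper's exact policy).

    Keeps roughly the same number of stored examples per observed class
    by evicting from the currently largest class when full.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = defaultdict(list)  # label -> list of stored examples

    def __len__(self):
        return sum(len(v) for v in self.buffer.values())

    def add(self, example, label):
        self.buffer[label].append(example)
        if len(self) > self.capacity:
            # Evict a random example from the largest class to stay balanced.
            biggest = max(self.buffer, key=lambda k: len(self.buffer[k]))
            victims = self.buffer[biggest]
            victims.pop(random.randrange(len(victims)))

    def sample(self, k):
        """Draw k stored (example, label) pairs uniformly at random."""
        flat = [(x, y) for y, xs in self.buffer.items() for x in xs]
        return random.sample(flat, min(k, len(flat)))
```

With capacity |M| = 500 and CIFAR100, such a buffer would retain about five examples per class once all classes have been observed.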
Baselines: Co2L (Cha et al., 2021) can be seen as a simplified version of URSL with no reference network and no means of using unsupervised samples. We therefore propose a modified version of Co2L, Co2L-j, in which the model is trained jointly with both a supervised and an unsupervised contrastive loss on the supervised and unsupervised data, respectively. In another baseline, Co2L-p, we pre-train the model only with the unsupervised data available at the first time step and ignore the unsupervised data in subsequent steps, to avoid possible conflict with the supervised loss during continual learning. Two other baselines from prior work are consistent with the OSSCL setting, owing to the presence of an OoD detection module. GD (Lee et al., 2019) trains an OoD module to recognize unlabeled data from previous classes among the entire unlabeled dataset; this in-distribution data is used only to combat catastrophic forgetting. DM (Smith et al., 2021) mainly changes the GD setting by defining policies over the unlabeled data, using the superclasses of CIFAR100 and the FixMatch method (Sohn et al., 2020). On the other side, we also report fully supervised continual learning results for two popular continual learning models, GEM and iCaRL, as well as for the state-of-the-art Co2L. These methods have access to all samples of the related dataset as labeled ones during continual learning but cannot use unlabeled samples from any source.

Under Review

Table 3: Accuracy of different models on the Tiny-Imagenet dataset.

Setting                Unsupervised    Co2L       Co2L-j     Co2L-p     GD          DM         URSL
                       Dataset
P = 0.05, |M| = 1000   CIFAR10         8.4±0.1    11.0±0.1   12.8±0.2   4.54±0.02   4.8±0.1    17.2±0.1
                       CIFAR100        8.4±0.1    10.8±0.8   12.9±0.1   5.7±0.4     4.4±0.5    17.5±0.2
P = 0.1, |M| = 2000    CIFAR10         15.1±0.7   17.6±0.6   18.4±0.2   7.5±0.2     5.6±0.2    21.9±0.2
                       CIFAR100        15.0±0.7   18.4±0.7   18.6±0.1   7.9±0.1     5.4±0.4    20.8±1.0

                                       Co2L       GEM        iCaRL
P = 1, |M| = 2000      None            22.5±0.5   17.4±0.3   18.2±0.2

Table 4: Ablation of Eq. 7 on CIFAR100 classification with the CIFAR10 dataset as peripheral.

Version    URSL w/o Lsup   URSL w/o LTD   URSL w/o LKD   Only Lsup   Only LKD   URSL
Acc. (%)   25.7±0.2        28.4±0.7       28.9±0.4       19.1±0.9    28.2±0.9   30.4±0.2
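Table 4 ablates the losses combined in Eq. 7, among them the supervised contrastive term Lsup. The paper's exact formulation is given earlier in the text; purely as a hedged sketch, a standard SupCon-style loss over L2-normalized embeddings (our own minimal numpy rendering, not the paper's code) can be written as:

```python
import numpy as np

def supcon_loss(z, labels, temperature=0.1):
    """Supervised contrastive (SupCon-style) loss, illustrative sketch.

    z: (n, d) embeddings; labels: (n,) integer class labels.
    Each anchor is pulled toward other samples sharing its label and
    pushed away from the rest.
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize rows
    sim = z @ z.T / temperature                        # scaled similarities
    n = len(labels)
    eye = np.eye(n, dtype=bool)
    logits = np.where(eye, -np.inf, sim)               # exclude self-pairs
    # log-softmax over all other samples for each anchor
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    pos = (labels[:, None] == labels[None, :]) & ~eye  # positive-pair mask
    has_pos = pos.sum(axis=1) > 0
    masked = np.where(pos, log_prob, 0.0)
    # mean log-likelihood of positives, averaged over valid anchors
    per_anchor = masked.sum(axis=1)[has_pos] / pos.sum(axis=1)[has_pos]
    return -per_anchor.mean()
```

The loss is lower when same-label embeddings are closer to each other than to other-label embeddings, which is exactly the discrimination that the "Only Lsup" variant relies on.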
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content='9 30.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content='4±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content='2 30.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content='4±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content='2 30.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content='4±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content='2 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content='1 RESULTS Tables 1, 2, and 3 show the classification accuracy at the final time step when the main datasets are selected as CIFAR10, CIFAR100, and Tiny-ImageNet, respectively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' In almost all the experiments, URSL outperforms all other baselines.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' There are two reasons for the superiority of URSL over GD and DM: (i) Unlike GD and DM, which train OoD detection with a small number of labeled samples, OoD detection of URSL is based on the representation of the reference network, which is trained with a large amount of unlabeled data and has high discrimination power.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' (ii) GD only uses these unlabeled data to solve the forgetting, while URSL uses those to transfer a rich representation from the reference network to the learner network.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' Although Co2L-p and Co2L-j improved Co2L, URSL outperformed them in all scenarios, showing the effectiveness of the proposed ideas compared with the naive approaches for incorporating unlabeled data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' Furthermore, URSL achieved comparable or even better results than state-of-the-art full-supervised CL methods.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' This phenomenon suggests that URSL can benefit from unsupervised samples to mitigate the forgetting of previous classes or to learn a general representation that is proper for learning the classes that will be observed in the future.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' We provide multiple benchmarks in Appendix D to show the robustness and power of our method.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' For instance, After and Before scenarios, in which the unlabeled related samples are respectively restricted to the future and past classes of the main dataset, prove that our method has positive forward and backward transfer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' In a Non-I.' 
In this scenario, we examined our method in an environment in which only a fraction of the classes of the main dataset are present in Ut at each time step. Additionally, Appendix E indicates that our method can achieve remarkable performance even in situations in which the ratio of related to unrelated unsupervised samples is very low.
4.2 ABLATION STUDIES
In this section, we conducted experiments to demonstrate the contribution of each component of the model to the final performance. To that end, we selected CIFAR100 as the main dataset, CIFAR10 as the peripheral dataset, P = 0.05, and |M| = 500. Table 4 reports the model's performance in the experiments created by ablations over the losses of the model presented in Eq. 7.
Under Review
Table 5: Ablation of OoD on CIFAR100 classification with the CIFAR10 dataset as peripheral.
Experiment           Variant 1      Variant 2      Variant 3         Variant 4
Data (Eqs. 3 and 5)  Tt ∪ M         Tt ∪ M         Tt ∪ M ∪ ˆTt      Tt ∪ M ∪ ˆTt
Data (Eq. 6)         Tt ∪ M ∪ Ut    Tt ∪ M ∪ ˆUt   Tt ∪ M ∪ Ut       Tt ∪ M ∪ ˆUt
Acc. (%)             20.7±1.2       24.5±0.4       29.2±0.5          30.4±0.2
Effect of Lsup: Lsup induces the representation of the learner network to discriminate between classes directly, using a supervised contrastive loss and the labels; as the results suggest, this loss is important and contributes to the performance of the model. Adding Lsup to the "URSL w/o Lsup" version increased the performance by 4.7%. Moreover, although LKD alone provides strong discrimination for the learner network and achieves 28.2% accuracy, adding Lsup to this version still enhances the performance.
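The Lsup term above is a supervised contrastive loss. Since the paper's own formulation (Eq. 3) is not reproduced in this excerpt, the sketch below follows the generic SupCon pattern: embeddings sharing a label are pulled together against all other samples in the batch. The function name, default temperature, and shapes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def supervised_contrastive_loss(z, labels, tau=0.1):
    """SupCon-style sketch: for each anchor, positives are the other
    samples with the same label; the loss is a log-softmax over the
    similarities to all non-anchor samples, averaged over positives.

    z: (n, d) array of embeddings; labels: length-n sequence of class ids.
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize rows
    sim = (z @ z.T) / tau                             # scaled cosine similarities
    n = len(labels)
    loss, anchors = 0.0, 0
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue                                  # no positive pair for this anchor
        others = [j for j in range(n) if j != i]
        log_denom = np.log(np.sum(np.exp(sim[i, others])))
        loss += -np.mean([sim[i, j] - log_denom for j in positives])
        anchors += 1
    return loss / max(anchors, 1)
```

Tight same-class clusters yield a small loss, while mislabeling the same embeddings raises it, which is the discriminative behavior the Lsup ablation relies on.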
Effect of LTD: The role of LTD is to transfer previously learned knowledge and reduce forgetting. The performance of the model increased from 19.1% to 28.9% merely by adding LTD to the "Only Lsup" version. Furthermore, although LKD reduces forgetting in a different way, adding LTD to "URSL without LTD" increased performance from 28.4% to 30.4%. All of these comparisons indicate that this loss effectively helps the model avoid forgetting.
Effect of LKD: LKD is a new way to utilize the unlabeled samples of the environment. This loss transfers the reference network's rich knowledge about the environment to the learner network. As shown, training the learner network with this loss alone already achieves strong performance. In addition, adding LKD to "Only Lsup" increases performance from 19.1% to 28.4%. Although LKD and LTD both reduce forgetting in different ways and overlap in function, adding LKD to "URSL without LKD" still boosts the performance. It is worth mentioning that the performance drop from the "Only LKD" version to the "URSL without Lsup" version is caused by setting the LKD and LTD coefficients, λ and γ, equal: a high γ/λ ratio prevents the model from learning new tasks and reduces its plasticity.
The next study, provided in Table 5, indicates the importance of the segregation module in providing ˆTt and ˆUt to the model.
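The three losses and their coefficients can be summarized in a schematic combined objective. The paper's exact loss definitions (Eqs. 3-7) are not reproduced in this excerpt, so the MSE stand-ins for LTD and LKD below, and all names, are hypothetical; the sketch only illustrates how the γ/λ ratio trades stability (staying close to the previous learner) against plasticity.

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two embedding vectors."""
    return float(np.mean((a - b) ** 2))

def total_loss(l_sup, z_learner, z_prev, z_ref, lam=1.0, gamma=1.0):
    """Schematic combined objective for the learner network:
    supervised term, distillation through time toward the previous
    learner state (L_TD, weight gamma), and knowledge distillation
    from the reference network (L_KD, weight lam)."""
    l_td = mse(z_learner, z_prev)  # stand-in for the paper's L_TD
    l_kd = mse(z_learner, z_ref)   # stand-in for the paper's L_KD
    return l_sup + gamma * l_td + lam * l_kd
```

Raising gamma relative to lam penalizes any movement away from the previous learner more heavily, which matches the observation above that a high γ/λ ratio reduces plasticity.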
The results are discussed in the paragraphs below.
Effect of ˆTt: Because a variety of images among the unlabeled data differ from the current classes, we cannot naively use all unlabeled data in Eqs. 3 and 5. Therefore, in these experiments we investigate the effect of adding ˆTt to the data of Eqs. 3 and 5. The results indicate the importance of adding ˆTt in preventing the overfitting caused by the limited number of samples from past classes. A remarkable boost in the performance of Variant 1 and Variant 3 can be seen by comparing them with Variant 2 and Variant 4, respectively.
Effect of ˆUt: Eq. 6 is designed to transfer the rich representation of the reference network to the learner network. The results show that naively adding Ut to this loss, without segregation, transfers irrelevant knowledge to the learner network; this knowledge, unrelated to the tasks, prevents the model from learning a discriminative representation for the target tasks. Segregation of Ut boosts the performance of Variant 1 and Variant 3 by 3.8% and 1.2%, respectively.
5 CONCLUSION
In this paper, we present OSSCL, a novel setting for continual learning that is more realistic than previously studied settings. The setting assumes that the agent has access to a large amount of unsupervised data in the environment, some of which is relevant to the tasks due to the similarity between surroundings and tasks. As a possible solution for this setting, we presented a novel model, consisting of a supervised learner and an unsupervised reference network, to effectively utilize both supervised and unsupervised samples.
The learner network benefits from three loss functions: the supervised loss, which is formed from the limited supervised samples and the segregated unsupervised samples; knowledge distillation through time; and representational guidance from the reference network. URSL has outperformed other state-of-the-art continual learning models by a considerable margin. The experiments and ablation studies demonstrate the superiority of the model and the effectiveness of each of its components.
ACKNOWLEDGMENTS
We thank Fahimeh Hosseini (Sharif University of Technology) for her helpful comments and for designing some parts of the model's figure.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' Unsupervised learning of visual representations by solving jigsaw puzzles.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' In European conference on computer vision, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' 69–84.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' Springer, 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' German I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' Parisi, Ronald Kemker, Jose L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' Part, Christopher Kanan, and Stefan Wermter.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' Continual lifelong learning with neural networks: A review.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' Neural Networks, 113:54–71, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' ISSN 0893-6080.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' Jongjin Park, Sukmin Yun, Jongheon Jeong, and Jinwoo Shin.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' Opencos: Contrastive semi-supervised learning for handling open-set unlabeled data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' CoRR, abs/2107.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content='08943, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' Hieu Pham, Zihang Dai, Qizhe Xie, and Quoc V Le.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' Meta pseudo labels.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' 11557–11568, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' Ameya Prabhu, Philip HS Torr, and Puneet K Dokania.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' Gdumb: A simple approach that questions our progress in continual learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' In European conference on computer vision, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' 524–540.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' Springer, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' Vinay Venkatesh Ramasesh, Ethan Dyer, and Maithra Raghu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' Anatomy of catastrophic forgetting: Hidden representations and task semantics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' In International Conference on Learning Representa- tions, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' Antti Rasmus, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' Semi-supervised learning with ladder networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' Advances in neural information processing systems, 28, 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' Rebuffi, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' Kolesnikov, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' Sperl, and C.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' Lampert.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' icarl: Incremental classifier and representation learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' 5533–5542, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' Andrei A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' Rusu, Neil C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' Progressive neural networks, 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' Kuniaki Saito, Donghyun Kim, and Kate Saenko.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' Openmatch: Open-set semi-supervised learning with open-set consistency regularization.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' Advances in Neural Information Processing Systems, 34: 25956–25967, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' Hanul Shin, Jung Kwon Lee, Jaehong Kim, and Jiwon Kim.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' Continual learning with deep generative replay.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' In Advances in Neural Information Processing Systems, volume 30.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' Curran Associates, Inc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=', 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' Konstantin Shmelkov, Cordelia Schmid, and Karteek Alahari.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' Incremental learning of object detectors without catastrophic forgetting.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' In Proceedings of the IEEE international conference on computer vision, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' 3400–3409, 2017.' 
James Smith, Jonathan Balloch, Yen-Chang Hsu, and Zsolt Kira. Memory-efficient semi-supervised continual learning: The world is its own replay buffer. arXiv preprint arXiv:2101.09536, 2021. Accepted for publication at IJCNN 2021.

Kihyuk Sohn. Improved deep metric learning with multi-class n-pair loss objective. Advances in Neural Information Processing Systems, 29, 2016.

Kihyuk Sohn, David Berthelot, Nicholas Carlini, Zizhao Zhang, Han Zhang, Colin A Raffel, Ekin Dogus Cubuk, Alexey Kurakin, and Chun-Liang Li. FixMatch: Simplifying semi-supervised learning with consistency and confidence. Advances in Neural Information Processing Systems, 33:596–608, 2020.

Jost Tobias Springenberg. Unsupervised and semi-supervised learning with categorical generative adversarial networks. arXiv preprint arXiv:1511.06390, 2015.

Antti Tarvainen and Harri Valpola. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. Advances in Neural Information Processing Systems, 30, 2017.

Rishabh Tiwari, Krishnateja Killamsetty, Rishabh Iyer, and Pradeep Shenoy. GCR: Gradient coreset based replay buffer selection for continual learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 99–108, 2022.

Gido M. van de Ven, Hava T. Siegelmann, and Andreas S. Tolias. Brain-inspired replay for continual learning with artificial neural networks. Nature Communications, 11(1), August 2020.

Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th International Conference on Machine Learning, pp. 1096–1103, 2008.

Liyuan Wang, Kuo Yang, Chongxuan Li, Lanqing Hong, Zhenguo Li, and Jun Zhu. ORDisCo: Effective and efficient usage of incremental unlabeled data for semi-supervised continual learning. 2021.

Mitchell Wortsman, Vivek Ramanujan, Rosanne Liu, Aniruddha Kembhavi, Mohammad Rastegari, Jason Yosinski, and Ali Farhadi. Supermasks in superposition. In Advances in Neural Information Processing Systems, volume 33, pp. 15173–15184. Curran Associates, Inc., 2020.

Chenshen Wu, Luis Herranz, Xialei Liu, Yaxing Wang, Joost van de Weijer, and Bogdan Raducanu. Memory replay GANs: Learning to generate new categories without forgetting. In Advances in Neural Information Processing Systems, volume 31, pp. 5962–5972. Curran Associates, Inc., 2018.

Qizhe Xie, Zihang Dai, Eduard Hovy, Thang Luong, and Quoc Le. Unsupervised data augmentation for consistency training. Advances in Neural Information Processing Systems, 33:6256–6268, 2020a.

Qizhe Xie, Minh-Thang Luong, Eduard Hovy, and Quoc V Le. Self-training with noisy student improves ImageNet classification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10687–10698, 2020b.

Jaehong Yoon, Eunho Yang, Jeongtae Lee, and Sung Ju Hwang. Lifelong learning with dynamically expandable networks. In International Conference on Learning Representations, 2018.

Qing Yu, Daiki Ikami, Go Irie, and Kiyoharu Aizawa. Multi-task curriculum framework for open-set semi-supervised learning. In European Conference on Computer Vision, pp. 438–454. Springer, 2020.

Friedemann Zenke, Ben Poole, and Surya Ganguli. Continual learning through synaptic intelligence. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pp. 3987–3995. PMLR, 2017.

Richard Zhang, Phillip Isola, and Alexei A Efros. Colorful image colorization. In European Conference on Computer Vision, pp. 649–666. Springer, 2016.
Chengxu Zhuang, Siming Yan, Aran Nayebi, Martin Schrimpf, Michael C Frank, James J DiCarlo, and Daniel LK Yamins. Unsupervised neural network models of the ventral visual stream. Proceedings of the National Academy of Sciences, 118(3), 2021.

Table 6: Running time of a single task for URSL and the baselines, for CIFAR100 classification with CIFAR10 as the peripheral dataset.

Method                       Training time (minutes)
Co2L-j                       12
Co2L-p                       63
GD                           6.5
DM                           8.5
URSL Pretrain (only once)    14.5
URSL learning task           15.75
URSL Reference Learner       12

A RELATED WORKS

Continual Learning. Three families of approaches have been proposed to address the issue of forgetting in continual learning.
Replay-based methods reuse samples from previous tasks, either by keeping raw samples in a limited memory buffer (Rebuffi et al., 2017; Lopez-Paz & Ranzato, 2017; Aljundi et al., 2019) or by synthesizing pseudo-samples from past classes (Shin et al., 2017; Wu et al., 2018; van de Ven et al., 2020). Regularization-based methods aim to keep the network's parameters stable across tasks by penalizing deviation from the parameters that are important for the previously learned tasks (Nguyen et al., 2018; Lee et al., 2017; Zenke et al., 2017; Cha et al., 2021; Rebuffi et al., 2017). Parameter-isolation methods dedicate different parameters to each task by introducing new task-specific weights or masks (Rusu et al., 2016; Yoon et al., 2018; Wortsman et al., 2020). Parameter-isolation methods suffer either from extensive resource usage or from capacity shortage when the number of tasks is large. Regularization-based methods are promising when the number of tasks is small; however, as the number of tasks increases, they become more prone to catastrophic forgetting and failure. In contrast, replay-based methods have shown promising results in general continual learning settings. This work can be categorized as a replay-based method.

Self-supervised Learning
Self-supervised learning methods are explored to learn a representation from unlabeled data such that the learned representation conveys meaningful semantic or structural information.
Based on this, various ideas, such as distortion (Alexey et al., 2015; Gidaris et al., 2018), jigsaw puzzles (Noroozi & Favaro, 2016), colorization (Zhang et al., 2016), and generative modeling (Vincent et al., 2008), have been investigated. Meanwhile, contrastive learning has played a significant role in recent developments of self-supervised representation learning. Contrastive learning involves learning an embedding space in which samples (e.g., crops) from the same instance (e.g., an image) are pulled together, and samples from different instances are pushed apart. Early work in this field incorporated some form of instance-level classification with contrastive learning and was successful in some cases. The results of recent methods such as SimCLR (Chen et al., 2020a), SwAV (Caron et al., 2020), and BYOL (Grill et al., 2020) are comparable to those produced by state-of-the-art supervised methods.

Knowledge Distillation
Knowledge distillation aims to transfer knowledge from a teacher model to a student model without losing too much generalization power (Hinton et al., 2015).
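The teacher-student transfer described above is commonly realized as a KL divergence between temperature-softened predictions. The following is a minimal NumPy sketch of the Hinton et al. (2015) formulation; the function names and the temperature value are illustrative, not from this paper.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; a larger T yields softer distributions.
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL(teacher || student) between temperature-softened predictions,
    # averaged over the batch and scaled by T^2 as in Hinton et al. (2015).
    p = softmax(teacher_logits, T)   # soft targets from the teacher
    q = softmax(student_logits, T)   # student predictions
    kl = (p * (np.log(p) - np.log(q))).sum(axis=-1)
    return float(kl.mean() * T * T)
```

When the student matches the teacher exactly the loss is zero; otherwise it is positive, pushing the student's softened distribution toward the teacher's.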
The idea was adapted to continual learning tasks to alleviate catastrophic forgetting by keeping the network's responses to samples from old tasks unchanged while updating it with new training samples (Shmelkov et al., 2017; Rebuffi et al., 2017; Li & Hoiem, 2016). iCaRL (Rebuffi et al., 2017) applies a distillation loss to maintain the probability vector of the last model's outputs when learning new tasks, while UCIR (Hou et al., 2019) maximizes the cosine similarity between the embedded features of the last model and the current model. Co2L (Cha et al., 2021) proposed a novel instance-wise relation distillation loss for continual learning that maintains the relations between batch samples' features in the representation space.

Semi-supervised Learning
In practical scenarios, the amount of labeled data is limited; therefore, training models on such limited labeled data leads to low performance. Semi-supervised learning methods therefore try to utilize the unlabeled data alongside the labeled data to achieve better performance. There are three main categories of semi-supervised training methods: generative, consistency-regularization, and pseudo-labeling methods. A generative method can learn implicit and transferable features of the data in order to model data distributions more accurately in supervised tasks (Springenberg, 2015; Dumoulin et al., 2016; Li et al., 2017). Consistency regularization describes a category of methods in which the model's prediction should not change significantly if a realistic perturbation is applied to the unlabeled data samples (Rasmus et al., 2015; Laine & Aila, 2016; Tarvainen & Valpola, 2017). In pseudo-labeling, a model trained on the labeled set is utilized to provide pseudo-labels for a portion of the unlabeled data, producing additional training examples that can be used as labeled samples in the training set (Lee et al., 2013; Xie et al., 2020b; Pham et al., 2021). UDA (Xie et al., 2020a) and FixMatch (Sohn et al., 2020) are two notable recent works in semi-supervised learning. UDA employs data augmentation as the perturbation for consistency training and encourages consistency between predictions on the original and augmented unsupervised samples. FixMatch combines consistency regularization and pseudo-labeling, using the cross-entropy loss for both the supervised and unsupervised losses.

Table 7: Hyperparameter search space
Parameters               Values
ES                       {100, 200}
ET1                      {200, 300, 400}
ET>1                     {100, 200}
τ                        {0.1, 0.5}
λ                        {0.05, 0.2, 0.4}
(ηid, ηpl)               {(-4, -2), (-2, 0), (-2, 2)}
Optimizer                {SGD + momentum, Adam}
Initial learning rate    {0.1, 0.01}

Open-set Semi-supervised Learning
Most semi-supervised learning methods assume that labeled and unlabeled data share the same label space. In the open-set semi-supervised learning setting, however, the unlabeled data can contain categories that are not present in the labeled data, i.e., outliers, which can adversely affect the performance of SSL algorithms. In UASD (Chen et al., 2020b), soft targets are produced by averaging the predictions of temporally ensembled networks, and out-of-distribution samples are detected by applying a simple threshold to the largest prediction score. Using a cross-modal matching strategy, Huang et al. (2021) trained a network to predict whether a data sample matches a one-hot class label; this module is used to filter out samples that have low matching scores with all possible class labels. In Saito et al. (2021), inlier confidence scores were calculated using one-vs-all (OVA) classifiers.
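The confidence-thresholded pseudo-labeling used by methods such as FixMatch can be sketched in a few lines; the function name and threshold value below are illustrative, not from the paper.

```python
import numpy as np

def select_pseudo_labels(probs, threshold=0.95):
    # probs: (N, K) predicted class probabilities for N unlabeled samples.
    # Keep only samples whose most confident prediction clears the
    # threshold; return their indices and hard pseudo-labels.
    confidence = probs.max(axis=1)
    pseudo_labels = probs.argmax(axis=1)
    keep = confidence >= threshold
    return np.nonzero(keep)[0], pseudo_labels[keep]
```

Samples below the threshold simply contribute nothing to the unsupervised loss; in the open-set setting, the same mechanism doubles as a crude outlier filter.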
Furthermore, a soft consistency-regularization loss is applied to enhance the OVA classifier's smoothness, thereby improving outlier detection.

Out-of-Distribution Detection
Previous works in semi-supervised settings utilized an out-of-distribution detector in order to filter relevant unlabeled data. Some methods train a K-way classifier, assign pseudo-labels to unlabeled data, and incorporate them into the training procedure. Because neural networks are overconfident even on noisy data (Hsu et al., 2020), these methods use specific techniques to alleviate this phenomenon. Lee et al. (2019) trained a classifier with a confidence-calibration technique in order to lower its confidence on unseen data: they sampled random data from a massive dataset such as ImageNet and applied a loss that reduces the model's confidence on those samples. Smith et al. (2021) used another technique, called DeConf, by which they calibrated probabilities using only in-distribution data, without needing out-of-distribution data. In another family of methods, called "Learning from Positive and Unlabeled Data" (Comité et al., 1999; Elkan & Noto, 2008; Garg et al., 2021), the authors train a binary classifier that indicates whether each input is in-distribution or not. Garg et al. (2021) proposed an iterative two-stage method in which they first estimate α, the mixture proportion of positive data among the unlabeled data, and then train a classifier using the estimated α.
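The simplest detector in this family scores each unlabeled sample by its maximum softmax probability and thresholds it, as in the UASD-style filtering mentioned above. A minimal sketch (the τ value and the function name are illustrative):

```python
import numpy as np

def msp_inlier_mask(probs, tau=0.5):
    # probs: (N, K) softmax outputs of a K-way classifier.
    # A sample is treated as in-distribution when its largest predicted
    # probability exceeds tau; everything else is filtered as an outlier.
    return probs.max(axis=1) >= tau
```

More elaborate detectors (calibrated classifiers, OVA scores, PU learning) replace this score with a better-behaved one, but the filter-by-threshold structure stays the same.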
They iterate these two stages until a convergence criterion is satisfied.

Table 8: Chosen hyperparameters for URSL
Parameters               Values
ES                       200
ET1                      400
ET>1                     100
Batch size               512
τ                        0.1
τ′                       0.01
τ′′                      0.2
λ                        0.2
γ                        0.2
(ηid, ηpl)               (-4, -2)
Optimizer                Adam
Initial learning rate    0.01
Minimum learning rate    1e-4

B DETAILS OF EXPERIMENTAL SETUPS

B.1 DATASET DETAILS

The CIFAR10 dataset consists of 60000 32x32 color images in 10 classes, with 6000 images per class; there are 50000 training images and 10000 test images. When this dataset is used as the main continual dataset, we randomly split it into 5 tasks with 2 classes per task. For this dataset, we used ratios of P = 0.01 and P = 0.1 for the supervised samples, equal to 50 and 500 samples per class, respectively. The CIFAR100 dataset contains 100 classes with 500 training and 100 test samples per class. When this dataset is used as the main continual dataset, each supervised task includes the training samples of 10 classes. In Table 2 of the main paper, we used the P = 0.05 and P = 0.1 configurations for this dataset, corresponding to 25 and 50 training samples per class.
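The class-incremental splits described above (e.g., CIFAR10 into 5 tasks of 2 classes, with a fraction P of each task's samples kept labeled) can be sketched as follows; the helper name and dictionary keys are ours, for illustration only.

```python
import numpy as np

def make_tasks(labels, num_tasks, labeled_ratio, seed=0):
    # Randomly partition the class set into equally sized tasks, then mark
    # a random fraction `labeled_ratio` of each task's samples as supervised.
    rng = np.random.default_rng(seed)
    classes = rng.permutation(np.unique(labels))
    tasks = []
    for task_classes in np.array_split(classes, num_tasks):
        idx = np.nonzero(np.isin(labels, task_classes))[0]
        idx = rng.permutation(idx)
        n_labeled = int(len(idx) * labeled_ratio)
        tasks.append({"labeled": idx[:n_labeled], "unlabeled": idx[n_labeled:]})
    return tasks
```

With 10 classes, 5 tasks, and P = 0.1, each task covers 2 classes and keeps 10% of its samples labeled, matching the CIFAR10 configuration above.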
Tiny-ImageNet is a subset of the ImageNet dataset that contains 200 classes, 100000 training samples, and 10000 test samples. Before using the dataset, we downsize the input images from 64x64 to 32x32 to make all image sizes equal. We split the dataset into 10 equally sized supervised tasks. Similar to CIFAR100, this dataset is used with the P = 0.05 and P = 0.1 ratios, equivalent to 25 and 50 training samples per class. Caltech256 is an object recognition dataset that contains 30607 real-world images from 257 categories; the image sizes differ from each other, and the minimum number of images per category is 80. We only use this dataset in Appendix F to increase the number of datasets in Ut, in order to diversify the objects in the unlabeled samples and provide a more realistic environment.

B.2 TRAINING DETAILS

As explained in the main paper, we used two datasets for each experiment: (i) the main dataset, to construct the supervised and related unsupervised samples, and (ii) the peripheral dataset, to provide the unrelated unsupervised samples. At each time step, 9000 unlabeled samples are provided from the main dataset and 9000 unlabeled samples from the peripheral dataset. In our experiments, the ResNet-18 architecture is used as the encoder for our method as well as for all the baselines. In our method, starting from random initialization, the reference network is trained for ET1 = 400 epochs at time step t = 1 to converge to a good representation; for subsequent time steps, it is trained for only ET>1 = 100 epochs.
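The per-step unlabeled set described above, with an equal draw from the main (related) and peripheral (unrelated) sources, can be sketched as follows; the function name is ours, and the data are assumed to be index arrays or sample arrays of equal leading dimension.

```python
import numpy as np

def build_unlabeled_pool(main_data, peripheral_data, n_each=9000, seed=0):
    # Draw n_each samples without replacement from the main dataset
    # (related) and from the peripheral dataset (unrelated), then shuffle
    # them together into one unlabeled pool for the current time step.
    rng = np.random.default_rng(seed)
    main = main_data[rng.choice(len(main_data), n_each, replace=False)]
    peri = peripheral_data[rng.choice(len(peripheral_data), n_each, replace=False)]
    return rng.permutation(np.concatenate([main, peri]))
```

Shuffling the two sources together means the learner receives no signal about which unlabeled samples are related, which is the point of the open-set setting.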
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' On the other hand, the learner network is trained for ES = 200 epochs in all time steps like all the baseline methods whose main number of epochs is 200.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' The mean and standard deviation of results are obtained over 3 runs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' In Table 6, the required running times to train a single epoch of different models are reported.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' These results are recorded on a GeForce RTX 3080 Ti GPU.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' 16 Under Review Table 9: After and Before Benchmarks of CIFAR100 classification with the CIFAR10 dataset as peripheral Setting Only Supervised After Before Acc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' (%) 15.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content='9±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content='2 19.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content='1±0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content='8 20.' 
0±0.1

Table 10: Only Related and Only Unrelated benchmarks of CIFAR100 classification with the CIFAR10 dataset as peripheral.
  Setting:  Only Supervised | Only Unrelated | Only Related | OSSCL
  Acc. (%): 15.9±0.2        | 19.6±0.5       | 29.4±0.5     | 30.4±0.2

B.3 TUNING THE HYPERPARAMETERS
We created a validation set for all three main datasets by selecting 10% of the training samples at random and then performed a hyperparameter search according to Table 7. Table 8 shows the chosen hyperparameters, obtained either from the validation results or by adapting those of the Co2L paper. A strength of our proposed method is that the selected hyperparameters are invariant across different scenarios, and we used a single configuration for all experiments. For Co2L, DM, and GD, we used the optimal hyperparameters whenever the authors reported them in the original papers. In addition, Co2L-j and Co2L-p used a similar set of hyperparameters as URSL, except for the new hyperparameter introduced in Co2L-j, where the unsupervised loss coefficient was set to 1.

B.4 AUGMENTATIONS
To increase the diversity of training samples, following previous works (Cha et al., 2021; Chen et al., 2020a), we used the following augmentation techniques for all data:
1. RandomResizedCrop: the image is randomly cropped with a scale in [0.2, 1], and the crop is then resized to 32 × 32.
2. RandomHorizontalFlip: each image is flipped horizontally with probability p = 0.5, independently of other samples.
3. ColorJitter: the brightness, contrast, saturation, and hue of each image are changed with probability p = 0.8, with maximum strengths [0.4, 0.4, 0.4, 0.1], respectively.
4. RandomGrayscale: images are converted to grayscale with probability p = 0.2.

B.5 TRAINING CLASSIFIER
At the end of training, we trained a linear classifier on the learner network's encoder head for 100 epochs, using all memory data and the labeled data of the last time step, TT ∪ M. We used a Weighted Random Sampler to draw mini-batches because of the class imbalance in the labeled data.
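The class-balancing weights behind the weighted random sampler of B.5 can be sketched as follows; the inverse-class-frequency weighting and the helper name `sampler_weights` are illustrative assumptions, since the paper does not spell out the exact weighting.

```python
from collections import Counter

def sampler_weights(labels):
    """Per-sample weights for a weighted random sampler: each sample
    is weighted by the inverse frequency of its class, so minority
    classes are drawn about as often as majority ones."""
    counts = Counter(labels)
    return [1.0 / counts[y] for y in labels]

# Example: class 0 has 3 samples, class 1 has 1 sample; each class
# then receives the same total weight (3 * 1/3 == 1 * 1).
weights = sampler_weights([0, 0, 0, 1])
```

Such a weight list can be passed, e.g., to PyTorch's `torch.utils.data.WeightedRandomSampler` so that mini-batches are drawn roughly class-balanced despite the imbalance in the labeled data.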
C THE PERFORMANCE OF THE OOD DETECTION
In this section, we evaluate the performance of the OoD detection module in two scenarios: in the first, the numbers of related and unrelated data are 9,000 each; in the other, 4,500 each. Precision and AUROC diagrams for the OoD detection module in these two settings (over the time steps) are shown in Figures 2 and 3. The performance of the OoD detection module improves over time, because it sees more classes and can detect class boundaries more precisely.

Figure 2: (left) AUROC of OoD detection versus the number of seen classes of the main dataset, for CIFAR100 classification with the CIFAR10 dataset as peripheral, when the numbers of related and unrelated data are 9,000. (right) The precision of OoD detection at each task of CIFAR100 classification with the CIFAR10 dataset as peripheral, when the numbers of related and unrelated data are 9,000.

Figure 3: (left) AUROC of OoD detection versus the number of seen classes of the main dataset, for CIFAR100 classification with the CIFAR10 dataset as peripheral, when the numbers of related and unrelated data are 4,500. (right) The precision of OoD detection at each task of CIFAR100 classification with the CIFAR10 dataset as peripheral, when the numbers of related and unrelated data are 4,500.

D OTHER BENCHMARKS
In the "Other Benchmarks" section of the paper, we investigated different configurations to demonstrate the effectiveness and robustness of the URSL model under various conditions. All benchmarks are clarified, and the results are analyzed, below.

After and Before. In these two benchmarks, we assume that there are no unrelated samples among the unlabeled data. The difference between the two settings is which classes of the main dataset appear in the unlabeled data at any time step. More specifically, in the After scenario, unlabeled samples are provided only from the classes of the current time step and the future classes of subsequent time steps. In the Before scenario, by contrast, only unlabeled data from previous classes are presented. This experiment was designed to show that the URSL model can benefit from positive forward/backward knowledge transfer from the unsupervised samples to the supervised tasks. As shown in Table 9, in the Before scenario, visiting unlabeled data of previous classes appears to mitigate catastrophic forgetting (positive backward transfer). In contrast, in the After scenario, the model learns a decent representation space that is beneficial for learning the newly arriving classes (positive forward transfer).

Only Related and Only Unrelated. To investigate the effect of each type of unlabeled data on the model's behavior, we defined the Only Related and Only Unrelated settings. As their names suggest, in the former all unlabeled data at each task contain only main-dataset samples, whereas in the latter all unlabeled data come only from the peripheral dataset.
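The AUROC metric reported for the OoD module in Figures 2 and 3 reduces to a pairwise ranking probability; a minimal sketch (equivalent to the Mann-Whitney U statistic), independent of whatever scoring function the paper's detector actually uses:

```python
def auroc(scores, labels):
    """AUROC as the probability that a randomly chosen positive
    (label 1, e.g. in-distribution) sample scores higher than a
    randomly chosen negative one; ties count half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Perfect separation gives 1.0; an uninformative scorer sits near 0.5.
```

This quadratic form is fine for small evaluation sets; libraries such as scikit-learn (`roc_auc_score`) compute the same quantity via sorting.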
In Table 10, comparing Only Unrelated with Only Supervised shows that even unrelated samples improve performance, by enriching the representation of the reference network and by providing a pivot model that prevents the learner network from accumulating large changes during the continual learning process. The Only Related scenario likewise demonstrates the benefit of having related samples among the unlabeled data, and comparing it with the OSSCL setting reveals the effect of the unrelated samples. It is worth mentioning that although most of the improvement is due to incorporating related samples, access to purely related unlabeled data is not usually a realistic assumption; instead, such samples arrive within a huge stream of unlabeled data that also contains unrelated samples. Therefore, we considered both related and unrelated datasets as unlabeled samples (the OSSCL setting) and showed that properly employing these datasets (in URSL) further improves the results compared with the Only Related case.

Table 11: Non-I.I.D. benchmarks of CIFAR100 classification with the CIFAR10 dataset as peripheral.
  Setting:  Only Supervised | Non-I.I.D. (25 %) | Non-I.I.D. (50 %) | OSSCL
  Acc. (%): 15.9±0.2        | 24.9±0.6          | 27.4±0.7          | 30.4±0.2

Table 12: The effect of the number of related and unrelated samples on CIFAR100 classification with the CIFAR10 dataset as peripheral.
  Related-Unrelated (samples): 1000-9000 | 4500-4500 | 4500-9000 | 9000-4500 | 9000-9000
  Accuracy (%):                21.0±1.1  | 26.8±0.5  | 27.0±0.6  | 30.0±0.7  | 30.4±0.2

Non-I.I.D. The OSSCL scenario makes an I.I.D. assumption on the related unsupervised samples available in the environment. To challenge this assumption, we introduced a new benchmark in which the related data are generated only from a portion of the supervised classes at each time step. For example, in the Non-I.I.D. (50 %) experiment, the related unsupervised dataset includes only samples from half of the supervised classes, randomly selected at each time step.
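The Non-I.I.D. construction described above can be sketched as follows; the function name, data layout, and fixed seed are illustrative assumptions, not the paper's code:

```python
import random

def non_iid_related(samples, classes, fraction=0.5, seed=0):
    """Keep unlabeled samples only from a random subset of the
    supervised classes, e.g. half of them for Non-I.I.D. (50 %).
    `samples` is a list of (x, y) pairs; returns the filtered
    samples and the set of classes that was kept."""
    rng = random.Random(seed)
    kept = set(rng.sample(sorted(classes), int(len(classes) * fraction)))
    return [(x, y) for x, y in samples if y in kept], kept
```

Re-drawing `kept` at every time step reproduces the "randomly selected at each time step" aspect of the benchmark.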
As shown in Table 11, the URSL model still demonstrates good performance even with this limited access to the related unlabeled samples.

E NUMBER OF RELATED AND UNRELATED SAMPLES
In this section, we investigated the effect of the number and ratio of the related and unrelated samples among the unlabeled data. In contrast to other baselines, URSL is able to utilize unrelated unlabeled samples to boost final performance even with an imbalanced number of related and unrelated unlabeled sets. Table 12 shows that increasing the unrelated samples improves results slightly, while increasing the related samples provides the model with more in-distribution samples to improve its performance and combat catastrophic forgetting.

F MORE COMPLICATED ENVIRONMENTS
In this section, we examine the performance of our model in even more realistic environments by conducting more experiments in scenarios in which the unlabeled data is comprised of multiple datasets. Table 13 shows the performance of these experiments.
Besides the datasets used in the main experiments, we also used Caltech256 (Griffin et al., 2007). In each experiment, we add 9,000 randomly sampled instances from each dataset to Ut. The results suggest that our model is robust to a variety of unlabeled data and performs well in more realistic scenarios, in which the model is exposed to plenty of unlabeled samples, most of which are not related to its target tasks.

Table 13: The results of using multiple datasets in Ut to simulate a more realistic environment.
  Dataset1 | Dataset2 | Dataset3      | Dataset4   | Acc. (%)
  CIFAR10  | CIFAR100 | ——            | ——         | 30.4±0.2
  CIFAR10  | CIFAR100 | Tiny-Imagenet | ——         | 30.4±0.6
  CIFAR10  | CIFAR100 | Caltech256    | ——         | 31.4±0.3
  CIFAR10  | CIFAR100 | Tiny-Imagenet | Caltech256 | 31.6±0.7

Table 14: The effect of different architectures for the reference and the learner network on CIFAR100 classification with the CIFAR10 dataset as peripheral.
  Reference (#parameters) | Learner (#parameters)     | Accuracy (%)
  ResNet-18 (11.1M)       | ResNet-18 (11.1M)         | 31.2±1.0
  ResNet-34 (21.2M)       | ResNet-18 (11.1M)         | 31.4±0.6
  ResNet-18 (11.1M)       | WideResNet-40-2 (2.24M)   | 30.7±0.5
  ResNet-50 (23.5M)       | WideResNet-28-10 (36.48M) | 31.1±0.2

G THE REFERENCE AND THE LEARNER ARCHITECTURES
The authors of Co2L (Cha et al., 2021) used ResNet-18 as the feature-extractor architecture of their model. Following this design choice, we used the same architecture for both the learner and reference networks, as well as for all other models and baselines in all experiments, to ensure a fair comparison. In this section, we investigate the effect of changing the architecture of the learner and the reference networks, as reported in Table 14. As expected, the model's performance increases slightly as the number of model parameters grows. Moreover, deep ResNet architectures achieved better performance than wide ResNet architectures. It is noteworthy that although we used a batch size of 512 in all experiments in the other sections, the experiments in this section were performed with a batch size of 128 to meet the memory limit, in addition to providing a fair comparison.

H MEMORY BUFFER SELECTION ALGORITHM
Selecting suitable samples to store in the memory is an active area of research in continual learning (Bang et al., 2021; Tiwari et al., 2022; Isele & Cosgun, 2018). However, the purpose of our research was not to focus on memory selection policies. Therefore, we used a random policy, as is widely adopted in many CL works (Prabhu et al., 2020; Guo et al., 2020; Balaji et al., 2020).
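A random memory policy of the kind adopted here is commonly implemented with reservoir sampling, which keeps a uniform random subset of the whole stream within a fixed budget; this sketch is an illustration under that assumption, not the paper's implementation:

```python
import random

class ReservoirBuffer:
    """Fixed-size memory that holds a uniform random sample of every
    item seen so far (classic reservoir sampling, Algorithm R)."""
    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.items = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, item):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(item)          # buffer not yet full
        else:
            j = self.rng.randrange(self.seen)  # keep item with prob capacity/seen
            if j < self.capacity:
                self.items[j] = item
```

Because every stream element ends up in the buffer with equal probability, this gives the "random" selection baseline without storing the stream itself.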
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' It is noteworthy that the segregation of unlabeled data provides more diverse data than what exists in the memory buffer from past classes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' Nevertheless, because the stored samples in the memory buffer play an important role in segregating the unsupervised samples we conducted experiments using different selection algorithms for memory buffer samples.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' In addition to the "random" selection method, we defined three other selection strategies: Low-confidence: select the data on which the model has low confidence High-confidence: select the data on which the model has high confidence Rainbow (Bang et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=', 2021): select from all the ranges of confidence.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' This algorithm calculates a confidence score for each sample and sorts all scores;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' then, it selects some data by considering the presence of samples from all ranges of model confidence.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/UNE3T4oBgHgl3EQfagoR/content/2301.04506v1.pdf'} +page_content=' Table 15 shows the performance of all algorithms: As can be seen, the "Random" selection algorithm outperforms both the "High-confidence" and "Low-confidence" selection strategies by a good margin.' 
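The three confidence-based strategies just described, together with the random baseline, can be sketched in a few lines. This is our own illustration of the selection logic, not the authors' code; `confidence` here stands for whatever per-sample score the model provides:

```python
import random

def select_for_buffer(samples, confidence, k, strategy="random"):
    """Pick k sample indices for the memory buffer.
    `confidence` maps each sample index to a model-confidence score."""
    idx = list(range(len(samples)))
    if strategy == "random":
        return random.sample(idx, k)
    ranked = sorted(idx, key=lambda i: confidence[i])  # ascending confidence
    if strategy == "low":        # least-confident samples
        return ranked[:k]
    if strategy == "high":       # most-confident samples
        return ranked[-k:]
    if strategy == "rainbow":    # spread across all confidence ranges
        step = len(ranked) / k
        return [ranked[int(i * step)] for i in range(k)]
    raise ValueError(strategy)

conf = [0.1, 0.9, 0.4, 0.7, 0.2, 0.8]
print(select_for_buffer(range(6), conf, 2, "low"))   # [0, 4]
print(select_for_buffer(range(6), conf, 2, "high"))  # [5, 1]
```

The Rainbow variant here only mimics the "samples from all confidence ranges" idea; the original algorithm (Bang et al., 2021) is more involved.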
Moreover, "Rainbow" achieves results similar to the "Random" strategy.

Under Review

Table 15: Effect of different data selection algorithms for the memory buffer on CIFAR100 classification with the CIFAR10 dataset as the peripheral dataset.
Algorithm     Low-confidence  High-confidence  Rainbow   Random
Accuracy (%)  67.9±0.9        69.7±1.0         72.5±0.6  72.8±0.9

Table 16: Comparison of URSL with URSL with full pretraining.
Method    URSL      URSL with Pretrain
Acc. (%)  30.4±0.2  31.3±0.7

I FURTHER EXPERIMENTS

I.1 PRETRAINING THE REFERENCE NETWORK

The reference network is expected to gradually absorb unsupervised knowledge from the environment. In this section, we designed an experiment to show the success of the reference network in continually learning the unsupervised samples. In this experiment, the reference network is first pre-trained with all unsupervised samples before the learning of the first supervised task starts. Next, the reference network is frozen, and its parameters are maintained throughout the entire learning procedure. Other training details and learning mechanisms of the learner network are the same as in the original URSL model. Table 16 demonstrates that pretraining only slightly improves the URSL results. This suggests that the unsupervised samples available in the environment are sufficient for the reference network to learn a proper representation, even if the data is observed continually.

I.2 SELF-SUPERVISED METHOD

In all experiments, we used the NT-Xent loss (Sohn, 2016), a popular and straightforward contrastive loss that is widely used in the self-supervised learning literature (Chen et al., 2020a), and achieved remarkable performance for training the reference network. However, our model and algorithm perform well regardless of the self-supervised loss used to train the reference network. To demonstrate this, we compared the results of our model with experiments in which the reference network is trained by BYOL (Grill et al., 2020), a self-supervised algorithm different from NT-Xent. Tables 17 and 18 report the results for two different scenarios. The advantage of BYOL over SimCLR is that it does not need negative data in training. Indeed, the datasets in our experiments have less diverse samples than huge datasets such as ImageNet; therefore, we expect BYOL to perform better than SimCLR. The empirical results confirm this point. However, BYOL needs more time to obtain comparable results.

Table 17: Comparison between NT-Xent and BYOL performance on CIFAR10 classification with the Tiny-Imagenet dataset as peripheral.
Main dataset  Peripheral dataset  SSL method  Accuracy (%)  Time cost (mins)
CIFAR10       Tiny-Imagenet       SimCLR      72.8±0.6      136
CIFAR10       Tiny-Imagenet       BYOL        73.1±1.0      258

Table 18: Comparison between NT-Xent and BYOL performance on CIFAR100 classification with the CIFAR10 dataset as peripheral.
Main dataset  Peripheral dataset  SSL method  Accuracy (%)  Time cost (mins)
CIFAR100      CIFAR10             SimCLR      30.4±0.2      219
CIFAR100      CIFAR10             BYOL        32.3±0.8      445

J LIMITATIONS

Our method has several limitations. First, we have to keep a minimum number of main-dataset samples from each class in the memory buffer in order to create more precise prototypes. Second, because the training phases of the teacher network and the student network cannot be parallelized, the time complexity of our method is higher than that of other methods and baselines. Furthermore, our model performs worse if the number of samples per class becomes severely imbalanced. There are also limitations related to the Open-Set Semi-Supervised Continual Learning scenario. Although this configuration is more realistic than the previous works in the literature, there may still exist situations in which the assumptions of OSSCL do not hold. For example, an agent may have limited access to both related and unrelated unlabeled samples in the environment. This will lead to poor performance of our model, since it is designed for situations where plenty of unsupervised data exists.

K CODE AND DATA AVAILABILITY

The source code to reproduce the results of this paper is attached to this document.
In this repository, there is a README file containing instructions and configuration details. Moreover, the licenses of the freely available datasets and of the source code used are also available in the README file.

diff --git a/V9E3T4oBgHgl3EQf0gvl/vector_store/index.pkl b/V9E3T4oBgHgl3EQf0gvl/vector_store/index.pkl
new file mode 100644
index 0000000000000000000000000000000000000000..a32400524ff49393c5912e1fa2c11081e09a5d20
--- /dev/null
+++ b/V9E3T4oBgHgl3EQf0gvl/vector_store/index.pkl
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5a8cdb9f2d4fc5d17568ee3a6799c8bfb226ac8c0eda2cc56e8872b8695850e7
+size 106577
diff --git a/VtE4T4oBgHgl3EQfng0j/content/tmp_files/2301.05176v1.pdf.txt b/VtE4T4oBgHgl3EQfng0j/content/tmp_files/2301.05176v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..586eab4f2dd2bb06d38b205c0d6befa66cad5565
--- /dev/null
+++ b/VtE4T4oBgHgl3EQfng0j/content/tmp_files/2301.05176v1.pdf.txt
@@ -0,0 +1,1376 @@

Workload Failure Prediction for Data Centers

Jie Li, Department of Computer Science, Texas Tech University, Lubbock, USA (jie.li@ttu.edu)
Rui Wang, Department of Computer Science, Texas Tech University, Lubbock, USA (rui.wang@ttuhsc.edu)
Ghazanfar Ali, Department of Computer Science, Texas Tech University, Lubbock, USA (Ghazanfar.Ali@ttu.edu)
Tommy Dang, Department of Computer Science, Texas Tech University, Lubbock, USA (tommy.dang@ttu.edu)
Alan Sill, High-Performance Computing
Center, Texas Tech University, Lubbock, USA (alan.sill@ttu.edu)
Yong Chen, Department of Computer Science, Texas Tech University, Lubbock, USA (yong.chen@ttu.edu)

Abstract—Failed workloads that consumed significant computational resources in time and space affect the efficiency of data centers significantly and thus limit the amount of scientific work that can be achieved. While computational power has increased significantly over the years, the detection and prediction of workload failures have lagged far behind and will become increasingly critical as system scale and complexity further increase. In this study, we analyze workload traces collected from a production cluster and train machine learning models on a large amount of data to predict workload failures. Our prediction models consist of a queue-time model that estimates the probability of workload failure before execution and a runtime model that predicts failures at runtime. Evaluation results show that the queue-time model and the runtime model can predict workload failures with a maximum precision score of 90.61% and 97.75%, respectively. By integrating the runtime model with the job scheduler, it helps reduce CPU time and memory usage by up to 16.7% and 14.53%, respectively.

Index Terms—Data Center, Failure Prediction, Predictive Analytics, Big Data, Machine Learning

I. INTRODUCTION

The scale and complexity of many data centers have significantly increased over the years. Meanwhile, the demand from the user community for computational and storage capability has considerably increased too. This combination of the increased scale of data centers and the size of workloads with different requirements and characteristics has resulted in growing node and workload failures, posing a threat to the reliability, availability, and scalability (RAS) of data centers.
For example, all else being equal, a system that is 1,000 times more powerful will have at least 1,000 times more components and will fail 1,000 times more often [1], resulting in a long-running job utilizing a large number of nodes being terminated by frequent failures. Therefore, over the past decades, various methods and algorithms have been proposed to improve system resilience and efficiency [2]–[7].

Reactive strategies, such as Checkpoint/Restart (C/R) [4], [7], are conventional approaches to fault tolerance. As an example, a reactive fault-tolerance strategy for a node failure is to reschedule a workload to a new node and restart it from a specific checkpoint. However, checkpointing a job in a large-scale system can incur large I/O overhead when writing and reading workload state [8], and takes an overhead of more than 15% of the total execution time [9], [10], which significantly impedes science productivity. As a result, researchers on failure management have found that prevention is better than cure and have shifted to proactive management strategies [7], [11]–[15]. In contrast to reactive strategies, proactive strategies develop models based on the failure data in data centers to predict node or workload failures in the near future and take preventive measures to improve the RAS of data centers.

Numerous research efforts have developed node failure detection and prediction methods by utilizing temporal and/or spatial correlations of failures [16]–[19]. They usually investigate system behavior via syslog analysis and have developed supervised and unsupervised approaches for predicting failures in data centers. A number of studies have attempted workload-centric failure detection and prediction based on resource usage or requested resources [20]–[23]. However, only a limited amount of workload data is publicly available due to confidentiality or other reasons.
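The scaling and overhead arguments above can be made concrete with a short back-of-the-envelope calculation. Assuming independent node failures (so system MTBF shrinks inversely with node count) and choosing the checkpoint interval by Young's classic first-order rule — an assumption of ours, not a formula this paper uses — checkpoint overhead grows sharply with scale. All numbers below are hypothetical:

```python
import math

def system_mtbf(node_mtbf_hours: float, num_nodes: int) -> float:
    """MTBF of the whole system under independent node failures:
    1,000x more components means roughly 1,000x more frequent failures."""
    return node_mtbf_hours / num_nodes

def checkpoint_overhead(ckpt_cost_hours: float, mtbf_hours: float) -> float:
    """Fraction of wall time spent checkpointing when the interval is
    chosen by Young's first-order rule: tau = sqrt(2 * C * MTBF)."""
    tau = math.sqrt(2.0 * ckpt_cost_hours * mtbf_hours)
    return ckpt_cost_hours / (tau + ckpt_cost_hours)

# Hypothetical numbers: 5-minute checkpoints, 10-year (87,600 h) node MTBF.
small = checkpoint_overhead(5 / 60, system_mtbf(87600, 100))     # 100 nodes
large = checkpoint_overhead(5 / 60, system_mtbf(87600, 100000))  # 100k nodes
print(f"overhead at 100 nodes:  {small:.1%}")
print(f"overhead at 100k nodes: {large:.1%}")
```

At small scale the overhead is well under 1%, while at very large scale it lands in the same >15% regime cited from [9], [10].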
In addition, analyzing and extracting insightful knowledge from massive amounts of data is daunting, given the increasing scale and complexity of data sets.

This research aims at using a machine learning-based approach to predict workload failures in data centers. In particular, we investigate two months of workload traces collected from a production cluster in order to find correlations between workload attributes and exit status (including error status). We seek to train supervised learning models to predict: (1) the failure probability of a workload at queue time, and (2) the likelihood of failure over the life-span of a workload. Knowing whether jobs are likely to fail can be valuable both for users, who can be alerted to potential failures, and for the resource manager (both the software and system administrators), which can proactively prevent wasting computational resources. Consequently, the RAS and productivity of data centers can be improved in return by better managing the workloads that are likely to fail.

arXiv:2301.05176v1 [cs.DC] 12 Jan 2023

We make the following contributions in this study:
• We analyze workload traces collected from a production data center and perform an extensive characterization study of workload failure rates across nodes, users, and different time scales. We investigate the correlation between workload characteristics and failures, and identify the relevant factors that lead to failures.
• We apply several machine learning algorithms on our data set and train two prediction models: a Queue-time model and a Runtime model. We find that Random Forest achieved the best prediction performance in terms of Precision and F1 scores in both models. Experimental results show that these two models predict workload failures with a maximum Precision of 90.61% and 97.75%, respectively.
• We quantify the resource savings achieved by applying the runtime prediction model to workloads at different times. Based on the prediction results, proactive failure management (e.g., killing workloads that are predicted to fail) can achieve CPU and memory savings of up to 16.7% and 14.53%, respectively.
• We investigate the effect of training data set size to find the optimum size that achieves acceptable prediction performance with minimum training time.

The rest of this paper is organized as follows. Section II describes the background of this research, including the monitoring infrastructure, data points, and sources of anomalies. In Section III, we analyze the workload data. Section IV describes the machine learning algorithms we have investigated in this research and explains our methodology. The experimental results are presented in Section V. Section VI provides an overview of related work, and we conclude this research in Section VII.

II. BACKGROUND

This research study is conducted on a production data center called Quanah, where scientists from all major scientific fields, such as astrophysics, computational chemistry, and bioinformatics, perform simulations and scientific computations. In our previous work [24], we designed and implemented a monitoring, data collection, and management infrastructure to gather workload and node metrics from the cluster in real time. The Quanah cluster and the monitoring framework are described in the next section, followed by an analysis of the sources of failures in common data centers.

A. Quanah Cluster

The Quanah cluster at the High Performance Computing Center (HPCC) of Texas Tech University [25] was commissioned in early 2017 and expanded to its current size in 2019; it is comprised of 467 nodes with Intel Xeon processors providing 36 cores per node.
Quanah has a total of 16,812 cores with a benchmarked total computing power of 485 Teraflops/s and provides 2.5 petabytes of storage capacity. The cluster is based on Dell EMC PowerEdge™ C6320 servers, which are equipped with the integrated Dell Remote Access Controller (iDRAC) [26] providing the Redfish API [27] for accessing the Baseboard Management Controller (BMC). The software environment is based on CentOS 7 Linux, provisioned and managed by OpenHPC, and the cluster is operated with Univa Grid Engine (UGE) [28], set up with multiple queues, with jobs sorted by projects to meet the needs of research activities in many fields and disciplines.

Fig. 1: Sources of Failures in Data Centers

B. Monitoring Infrastructure

The monitoring data in the Quanah cluster is obtained through an "out-of-the-box" monitoring tool [24] that utilizes the Redfish API to retrieve sensor data from the BMC on each compute node, and the resource manager (such as UGE or Slurm) for workload information and resource usage data. Sensor metrics and resource usage data are collected at node level in 60-second intervals and include power usage, fan speed, CPU usage, memory usage, node-job correlations, etc. The time-series data is stored in a time-series database (e.g., InfluxDB). Workload information is derived from the UGE accounting data, which includes job submission time, start time, end time, total CPU time, integral memory usage, IO operations, etc. The workload data is stored in a MySQL database. With several performance optimizations, such as an optimized database schema, high-speed storage, concurrent processing, and transmitting compressed data, our infrastructure provides near real-time analysis and visualization of user-level and node-level status.

C. Sources of Failures

In a data center cluster where computing resources are shared by different workloads submitted by users from various domains, the number of failed jobs (i.e.
workloads) can be large. There are three main reasons. First, domain scientists, while skilled in their scientific fields, do not always have sufficient experience and background in computing, especially in large-scale, parallel computing. Second, diverse workloads depend on different libraries, and bugs and missing updates in dependent libraries can lead to unexpected failures. Third, a data center is complex, and any misconfiguration or hardware error can cause workload termination.

Figure 1 summarizes the high-level root-cause categories in data centers, where failures are attributed to user, application, system, or hardware problems.
• Users: insufficient resource requests or wrong input in the job submission script can cause workloads to fail [29]. Note that user interruptions, such as issuing a command like "scancel", can interrupt a running workload and cause a failure too. However, since the effect of cancelling a running workload is obvious to the user, we do not categorize it as a source of failures.
• Applications: misconfigured applications increase the risk of poor performance. In addition, buggy code and missing dependent library updates can cause applications to terminate unexpectedly.
• Systems: misconfigurations or failures of system resources and components may considerably affect the performance of workloads or cause failures.
• Hardware: hardware errors are among the most devastating issues for data centers. Severe hardware issues can lead to the malfunction of the entire system. Events such as memory hardware errors, CPU overheating, etc., will result in workload errors and crashes.

TABLE I: Features of Workloads
Feature      Type         Description
job id       Numeric      Job identifier
owner        Categorical  Owner of the job
group        Categorical  The group id of the job owner
job name     Categorical  Job name
granted pe   Categorical  The parallel environment
hostname     Categorical  Name of the execution host
submission   Numeric      Submission time (in epoch time format)
start time   Numeric      Start time (in epoch time format)
end time     Numeric      End time (in epoch time format)
wallclock    Numeric      Difference between end and start time
cpu          Numeric      The CPU time usage in seconds
mem          Numeric      The integral memory usage in Gbytes (1)
io           Numeric      The amount of data transferred in Gbytes
iow          Numeric      The io wait time in seconds
maxvmem      Numeric      The maximum vmem size in bytes
slots        Numeric      The number of parallel processes
wait time    Numeric      Difference between start and submit time
exit status  Numeric      Exit status of the job script
(1) The sum of the amount of memory used in each time interval for the life of the job.

TABLE II: Exit Status Summary for Failed Workloads
Exit Code  Meaning                     Number  Percentage
1          Miscellaneous errors        21367   58.16%
2          Missing keyword or command  3032    8.25%
7          Argument list too long      6549    17.83%
127        Command not found           528     1.44%
137        (System signal 9) Kill      190     0.52%
255        Exit status out of range    4598    12.52%
Others                                 475     1.28%

III. WORKLOAD ANALYSIS

To predict workload failures in data centers, it is crucial to understand the characteristics of failed workloads. In this section, we first present an overview of the workload trace, then quantitatively analyze the percentage of failures in the workloads and their computational resource consumption characteristics. After that, we further study the failure rate across nodes, users, and different time scales.

Fig. 2: Proportion and Resource Consumption Characteristics of workloads. Red indicates failed workloads and dark blue indicates successful and cancelled workloads (non-failures).

A.
Workload Overview

The workload trace is derived from the job accounting data collected from the Quanah cluster for the period of August 1, 2020 to October 1, 2020, which contains 324,358 instances submitted by 204 unique users (i.e., owners). Notice that workloads that cannot be started on the execution host (e.g., because the user does not have a valid account on that node [30]) are recorded in the raw job accounting data; we drop these entries because they are killed by the job scheduler for reasons that are not part of the sources of failures summarized above. Additionally, they do not consume compute resources. Table I lists 18 selected features. These features can be categorized into two groups. The first group includes categorical features, such as owner, group, and job name. The other group includes numeric features, such as CPU time usage and integral memory usage.

When a batch job exits, the scheduler (in our case UGE) generates an exit_status field in the job accounting data. According to the UGE documentation, a general exit-status convention is defined as follows. An exit status of 0 indicates a successful workload. If the command terminates normally, the exit status is the value of the command in the job script, which is in line with normal shell conventions. In the case that the script command exits abnormally, a value of 128 is added to the value of the command. Thus, an exit status ≥ 128 can be decomposed into 128 + a system signal, where the system signal value can be a fatal error signal such as 6 (SIGABRT) or 9 (SIGKILL).

We summarize the exit statuses that indicate failure in Table II. We find that the most common exit status was 1 (58.16%), indicating that miscellaneous errors caused the failures. The next most significant exit status is 7 (17.83%), which occurs any time a user feeds too many arguments to the job submission script.
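The 128 + signal decomposition above is easy to apply mechanically. The helper below is our own illustration of the stated convention, not part of the paper's tooling:

```python
def decode_exit_status(status: int):
    """Interpret a batch job's exit status following the convention above:
    0 = success, 1-127 = script exit code, >= 128 = 128 + fatal signal."""
    if status == 0:
        return ("success", 0)
    if status >= 128:
        return ("signal", status - 128)  # e.g. 137 -> SIGKILL (9)
    return ("error", status)

# A job killed with system signal 9 (SIGKILL) reports exit status 137.
print(decode_exit_status(137))  # -> ('signal', 9)
print(decode_exit_status(7))    # -> ('error', 7)
print(decode_exit_status(0))    # -> ('success', 0)
```

Exit status 255 ("out of range" in Table II) shows the convention is not airtight: shells clamp out-of-range codes, so a literal 128 + signal reading does not apply to it.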
We also notice that there are 190 (0.52%) workloads that are killed by users through system signal 9. Since we do not consider cancelled workloads as failures, we drop these 190 workloads with exit status 137. In addition, we do not intend to predict exact errors, so we convert all non-zero exit statuses to 1, representing workloads that face problems during run time. We use the exit status to distinguish workloads that completed successfully (exit status of 0).
Fig. 3: Number and percentage of workload failures per node distributed by node ID (a) and by physical location (b). In sub-figure (a), red indicates failed workloads and dark blue indicates successful workloads. In sub-figure (b), the darkness of the color represents the workload failure rate (in other words, the darker the color, the higher the workload failure rate).
B. Proportion of workload failures
Figure 2 presents the proportion and resource consumption characteristics of workload failures. As shown in the figure, the workload failure rate is 8.5% of all submitted jobs (including successful and failed workloads) in quantity. We further analyze the CPU time consumed by failed workloads and find that failed workloads cost 21.1% of the total CPU time. The proportion of CPU time for failed workloads is larger than the proportion of the number of failed workloads, indicating that the more processors a workload uses and the longer it runs, the higher the probability that this workload will fail. Additionally, we quantify the integral memory usage consumed by failed workloads. As shown in Figure 2, the wasted memory resource rate is 20.2%. All these statistics imply that failed workloads waste a significant amount of computational resources and therefore degrade system efficiency.
C.
Distribution by nodes
We depict the distribution of failures across the nodes of the system by node ID and physical location in Figure 3a and Figure 3b, respectively. Figure 3a shows the total number and percentage of workload failures for each node. We first observe that about 20 nodes serve a relatively larger number of workloads than the other nodes, while some nodes have more than 60% workload failures. It is worth noting that the nodes serving a large number of workloads do not necessarily have a higher percentage of workload failures. A possible explanation is that nodes with a high workload failure rate may have node-specific (hardware or operating system) vulnerabilities.
Fig. 4: Number and percentage of workload failures distributed by user ID (a) and by wallclock (b).
The 467 nodes in the Quanah cluster are hosted in 10 racks, and each node can be uniquely addressed by rack and chassis number. Each column shown in Figure 3b represents one rack of nodes and each row represents one chassis. From Figure 3b, we observe that nodes in racks 1, 3 and 7 have relatively high workload failure rates. Since the power, temperature and connectivity of all nodes located in a rack are controlled together, problems in these areas can cause failures to occur in physical-location vicinity [19].
D. Distribution by users
To find out the correlation between users and failed workloads, we plot the workload distribution by user ID, as shown in Figure 4a.
Fig. 5: Number and percentage of workload failures distributed by hour of the day (a) and by the day of the week (b).
The total number of jobs submitted by users ranges from a few to over 10,000. We observe that the failure rate of workloads per user varies significantly and that users who submit a small number of workloads have a large portion of failed workloads. These statistics suggest that users' experience in properly configuring their applications and/or requesting computational resources varies widely and that these inexperienced users contribute a large fraction of failed workloads.
E. Distribution by time
The wallclock is the actual time taken from the start to the end of a workload. On the Quanah cluster, if the user does not specify a runtime in the script, the job scheduler has a default runtime limit of 172,800 seconds (48 hours) for each submitted job. We illustrate the distribution of workloads by wallclock in Figure 4b. In general, the number of workloads decreases as the wallclock increases, and there is no significant correlation between the failure rate of workloads and the number of workloads. However, we observe a reverse correlation between 24,000s and 84,000s: high workload intensities are associated with low failure rates and vice versa. In addition, the failure rate appears to be high around 144,000s as well.
Fig.
6: Workflow of Predicting Workload Failures in Data Centers
It is commonly known that the usage pattern of data centers fluctuates with time [31]. Figure 5 categorizes the workload failures by the hour of the day and by the day of the week. We observe that there is a slight correlation between failure rate and time. During the hours when the highest and the lowest numbers of jobs occur, the failure rate is below 5%. However, during the rest of the time, the failure rate is between 10% and 22%. A possible explanation for this observation is that experienced users submit large array jobs that contribute to the majority of the workloads. Furthermore, their code is robust and their applications are well configured, resulting in low failure rates. When the workload intensity is low, the system is less vulnerable. The failure rate for each day of the week shows similar results: the highest workload volumes result in the lowest failure rate. However, during the rest of the days of the week, the workload failure rate does not change much.
IV. METHODOLOGY
In this section, we describe the workflow of predicting workload failures in data centers. As shown in Figure 6, the workflow consists of four phases: (1) data collection: collecting metrics from data centers; (2) data preparation: preprocessing the data into a structured format and extracting features for machine learning models; (3) model training: training queue-time and runtime models using machine learning algorithms; (4) remediation management: applying remediation management techniques that leverage the prediction results to optimize the management of data centers. Data collection has already been discussed in Section II-B. Therefore, we focus on data preparation and model training in this research. We leave the failure remediation management and data center optimizations as near-future research work.
A.
Data Preparation
1) Preprocessing: In our current design and implementation, the job accounting data is stored in a MySQL database; we perform a select operation with start and end times to select the data collected from August 1st, 2020 to October 1st, 2020. The data is then saved into a dataframe. In the data preprocessing phase, we convert raw features into a format more suitable for machine learning training. Specifically, we create dummy variables for the categorical variables by using one-hot encoding [32], and scale the numerical features to avoid features with high variability from having more influence in the prediction.
Another important design consideration in data preparation is to remove irrelevant attributes and derive appropriate features from the original attributes. Irrelevant attributes add extra dimensions to the data set and can distract machine learning algorithms from achieving accurate prediction rules. On the other hand, deriving proper combinational features can boost the prediction accuracy. In our case, we drop the feature job id because it is assigned by the job scheduler and does not reveal the characteristics of the workload. We derive the hour of the day and the day of the week from time-related features to augment the data. We also derive several numeric features from the resource usage data, such as CPU intensity and average memory usage.
CPU intensity is defined as (cpu/slots)/wallclock, i.e., the ratio of the CPU time of a workload's single processor to its overall wallclock (i.e., runtime). The average memory usage is defined as mem/wallclock, i.e., the ratio of the integral memory usage of a workload to its run time.
In addition, the collected data does not contain information about applications and libraries. To overcome this limitation, we apply Natural Language Processing (NLP) techniques to job names and identify similar job names submitted by the same user. We then assign a uniform name to these workloads as their job name. This process aims to categorize workloads that use the same libraries. An underlying hypothesis of this process is that workloads with similar job names submitted by the same user tend to be the same applications and use the same libraries, differing only in parameters or parts of the code.
As discussed in Section III-A, we do not intend to predict the exact workload error; therefore, the prediction is a binary classification problem (i.e., success or failure). We convert the exit status to 1 if the workload fails and 0 otherwise, and use this as the class label.
2) Feature Selection: Predicting workload failures provides input for failure remediation management, where possible techniques include: 1) notifying users of potential workload failures after job submission but before execution; 2) making better scheduling decisions based on the prediction, thereby encouraging users to improve code quality and request appropriate computational resources; and 3) killing workloads that will fail before they waste too many computational resources. To this end, we train two models, one for predicting pre-run failures (i.e., the queue-time model) and the other for predicting runtime failures (i.e., the runtime model).
The queue-time model and the runtime model are trained with features available in different job states.
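The derived features and the binary label described above reduce to simple ratios. The following sketch is ours; the argument names mirror the accounting fields mentioned in the text (cpu, slots, wallclock, mem) and are assumptions, not the actual column names in the database.

```python
# Derived features from raw accounting fields, following the definitions
# in the text. Argument names are illustrative placeholders.

def cpu_intensity(cpu, slots, wallclock):
    """(cpu / slots) / wallclock: per-processor CPU time over runtime."""
    return (cpu / slots) / wallclock

def avg_memory_usage(mem, wallclock):
    """mem / wallclock: integral memory usage over runtime."""
    return mem / wallclock

def binary_label(exit_status):
    """Class label: 0 for success, 1 for any failure."""
    return 0 if exit_status == 0 else 1
```

For example, a 2-slot job that ran for 3600 s of wallclock and accumulated 7200 s of CPU time has a CPU intensity of 1.0, i.e., it was fully CPU-bound on each processor.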
The queue-time model is trained with categorical features such as owner, group, job name, department, etc. The runtime model uses not only categorical features but also resource usage features, such as CPU time, integral memory usage, data transferred in IO, etc. Note that in our case, resource request data, such as the estimated job running time and the expected maximum memory needed during runtime, are not recorded in the job accounting data, slightly limiting the features available to train the queue-time model. For other data sets that include resource request information, the queue-time model can be enhanced and its prediction results should be more accurate.
B. Model Training
We use five classification algorithms to implement machine learning models and train them with our 324,358 instances to predict workload failures. These algorithms and the corresponding hyper-parameters are described below.
Gaussian Naive Bayes: Naive Bayes is a probabilistic machine learning algorithm based on the Bayes Theorem, a simple mathematical formula for calculating conditional probabilities. In our implementation, we use Gaussian Naive Bayes (GNB), i.e., Naive Bayes extended to real-valued attributes. It is easy to implement because we only need to estimate the means and standard deviations of the training data. Gaussian NB does not accept parameters, except for the priors parameter, for which we use the default value of “None” in our model.
Logistic Regression: Logistic Regression (LR) is a classification algorithm for finding the relationship between features and outcome probabilities and is the most widely used machine learning algorithm for classification problems. It is relatively fast compared to other supervised classification techniques. Since we do not predict the exact value of the exit status, we use Binomial Logistic Regression. Logistic Regression does not have any critical hyper-parameters to tune.
We set the inverse regularization parameter (i.e., C) to 0.1 and choose “l2” as the penalty parameter and “liblinear” as the solver parameter.
Linear Discriminant Analysis: Linear Discriminant Analysis (LDA), as the name implies, is most commonly used as a dimensionality reduction technique, but it can also be used as a classification tool by finding linear combinations of features that separate two or more classes. LDA works by calculating summary statistics, such as the mean and standard deviation, of the input features by class label. Predictions are made by estimating the probability that a new instance belongs to each class label based on the values of each feature. We set the solver to “lsqr”, which performs best on our data set compared to the other built-in solvers.
Decision Tree: Decision Tree (DT) is a predictive model that predicts values by learning decision rules inferred from data features. One of the advantages of this algorithm is that non-linear relationships between features do not affect the performance of the tree. It can handle both categorical and numeric data. The criterion parameter in the DT is set to “gini” and the splitter parameter is set to “best”. All other parameters are kept as defaults.
Random Forest: Random Forest (RF) is an ensemble method that consists of a large number of individual decision trees. It uses bagging and feature randomness in the construction of each tree to create a forest of uncorrelated trees. Each individual tree in the random forest produces a class prediction, and the class with the most votes becomes the predicted value of the model. As with the Decision Tree, we set the criterion parameter to “gini” instead of “entropy”. The number of random features (i.e., max_features) considered at each split is set to “sqrt”, which is usually good for classification problems. The rest of the parameters are left unchanged.
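In scikit-learn, the five configurations above amount to the following constructor calls. This is a sketch of the stated hyper-parameters only; the `build_models` helper and the dictionary keys are ours, and everything not listed is left at library defaults.

```python
# The five classifiers with the hyper-parameters stated in the text.
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

def build_models():
    """Return the five classifiers keyed by the abbreviations used in the text."""
    return {
        "GNB": GaussianNB(priors=None),
        "LR": LogisticRegression(C=0.1, penalty="l2", solver="liblinear"),
        "LDA": LinearDiscriminantAnalysis(solver="lsqr"),
        "DT": DecisionTreeClassifier(criterion="gini", splitter="best"),
        "RF": RandomForestClassifier(criterion="gini", max_features="sqrt"),
    }
```

Each model can then be trained with the usual `fit(X_train, y_train)` / `predict(X_test)` interface.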
Since our data set is very large, we use the holdout method instead of the cross-validation method to save computational cost. The data set is partitioned into 65% training data, 15% validation data and 20% testing data. The training set learns the relationship between the features and the target variable (i.e., 0 for success and 1 for failure). The validation set is used to check how accurately the model defines the relationship between features and known outcomes. The testing data provides a final estimate of the model performance after the model has been trained and validated.
V. EXPERIMENTAL RESULTS
In this section, we describe the evaluation metrics used in our experiments and present the experimental results, including the performance of the machine learning algorithms described above and the potential resource savings enabled by the prediction, followed by an evaluation of the impact of the training sizes. The models in this study are implemented with the scikit-learn [33] Python library.
A. Evaluation Metrics
1) Prediction Metrics: In order to measure the performance of ML algorithms, it is important to specify evaluation metrics. We use recall (i.e., the true positive rate), precision and F1 score as our measurements. Recall represents the ratio of the number of correctly predicted failed workloads to the total number of actual failures. Precision is calculated by dividing the number of correctly predicted failed workloads by the total number of predicted failures. The F1 score is the weighted average of recall and precision. A higher score for these three metrics means that the model's classification results are more accurate.
These measurements are shown below:

recall = (# of correctly predicted failures) / (total # of actual failures)    (1)

precision = (# of correctly predicted failures) / (total # of predicted failures)    (2)

F1 Score = 2 * (recall * precision) / (recall + precision)    (3)

2) Resource Savings Metrics: The basic proactive failure remediation management is to simply kill workloads that are predicted to fail. This strategy is sensitive to false positives, where workloads are incorrectly predicted to fail. Killing workloads inappropriately will result in wasted resources, as the killed workloads will be restarted and run at a later time. Therefore, we define the resource saving (Rsaving) as:

Rsaving = (Rs − Rw) / Rtotal,    (4)

where Rtotal is the total resources consumed by failed and successful workloads, Rs is the resources saved by proactively killing failed workloads, and Rw is the resources wasted by killing successful workloads.
B. Failure Prediction
Table III presents the performance of the queue-time model. Specifically, we observe that Gaussian Naive Bayes (GNB) achieves the highest recall score of 99.44%; Random Forest (RF) performs the best with a precision score of 90.61% and an F1 score of 87.71%. The performance of the runtime model is shown in Table IV. Again, RF achieves the best performance, with a precision of 97.75% and an F1 score of 95.91%. Although GNB achieves the highest recall score, its precision score is the lowest, indicating a low number of correct failure predictions among its total failure predictions; it predicts most successful workloads as failures. Evaluating the overall performance, we choose RF as the classification algorithm for both models.
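Spelled out in code, Equations (1)–(4) are straightforward functions of the confusion-matrix counts; the function and argument names below are ours (tp = correctly predicted failures, fp = successes predicted as failures, fn = missed failures).

```python
# Equations (1)-(4) as functions of raw counts.

def recall(tp, fn):
    return tp / (tp + fn)                   # Eq. (1): tp / total actual failures

def precision(tp, fp):
    return tp / (tp + fp)                   # Eq. (2): tp / total predicted failures

def f1_score(tp, fp, fn):
    r, p = recall(tp, fn), precision(tp, fp)
    return 2 * r * p / (r + p)              # Eq. (3): harmonic mean of recall and precision

def resource_saving(r_saved, r_wasted, r_total):
    return (r_saved - r_wasted) / r_total   # Eq. (4): net fraction of resources saved
```

Note how Eq. (4) penalizes false positives directly: any resources wasted by killing workloads that would have succeeded are subtracted from the savings.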
TABLE III: Performance of Queue-Time Model
Model   recall   precision   F1 Score   Training Time (s)
GNB     99.44    15.65       27.04      3.50
LR      57.22    86.16       68.77      5.46
LDA     62.14    77.92       69.14      45.12
DT      84.02    90.33       87.06      123.96
RF      85.00    90.61       87.71      149.81

TABLE IV: Performance of Runtime Model
Model   recall   precision   F1 Score   Training Time (s)
GNB     99.44    15.65       27.05      5.13
LR      58.13    86.48       69.52      6.32
LDA     58.92    81.64       68.45      46.70
DT      93.92    94.57       94.24      173.08
RF      94.14    97.75       95.91      145.71

C. Resource Savings
Numeric features such as CPU time and integral memory usage are only known after the workload has completed execution. This fact may raise the question of how we can estimate resource savings at different run times of a workload and use these features to predict workload failures early. To apply the runtime model to a running workload, we make the following assumption: resource usage is linearly proportional to run time, so the resource usage at different times can be calculated as:

CRU = FRU * (Time / Wallclock),    (5)

where CRU stands for Current Resource Usage and FRU stands for Final Resource Usage. Based on this formula, we generate a series of test data sets from the original test data (20% of the 2-month workload traces in the Quanah cluster, i.e., 12-day workload traces) containing synthetic resource usage at different times, and then apply the runtime model to these data sets. Figure 7a shows the resource savings. From this figure, we observe the same pattern of savings in CPU time and integral memory usage; both achieve their highest savings at the beginning of the time range, 16.7% and 14.53%, respectively. Overall, the resource savings decrease over time, except for a few ups and downs at around 4200s and 14400s.
Fig. 7: Resource Savings in percentage (a) and CPU Time Savings in Node Days (b) at different times.
To understand
the resource savings, we convert the CPU time savings in seconds to CPU time savings in node·days, where a node has 36 CPUs. As shown in Figure 7b, the maximum CPU time savings is about 250 node·days. In other words, applying the runtime model to 12 days of workloads on a 467-node cluster helps save the CPU time (and associated power consumption) of a node running for 250 days.
To better understand the resource savings, we plot the number of workloads and the prediction performance at different times, as shown in Figure 8a and Figure 8b. Figure 8a shows that the total number of workloads participating in the failure remediation management decreases exponentially, as some workloads complete before the runtime model is applied. Therefore, the resource savings that can be achieved decrease with time. Figure 8b presents the recall, precision, and F1 scores. Throughout time, the precision scores remain high. The recall and F1 scores are decent (both scores are above 72%), although some fluctuations exist.
D. Effect of Training Size
Even though we achieve a promising prediction accuracy using Random Forest, the training time is long because of the large amount of workload traces used. As shown in Table III and Table IV, the training time for both models is about 150s.
Fig. 8: Number of workloads (a) and Prediction Performance (b) at different times.
In order to find the optimum training size that achieves a balance between prediction accuracy and training time, we build the prediction models using different training sizes in the range of 1 day to 60 days of data. As shown in Figure 9a and Figure 9b, the precision, recall and F1 scores of the queue-time model and the runtime model do not improve significantly after the training size exceeds 30 days of data.
With the training size of 30 days of data, the training time shortens to 67 seconds, which is acceptable for many data centers to conduct workload failure predictions periodically.
Fig. 9: Training Size vs. Prediction Performance of Queue-time Model (a) and Runtime Model (b).
VI. RELATED WORK
Characterizing and quantifying failures in data centers is invaluable for system administrators to understand the behavior of the systems and thus develop strategies to improve the RAS of the systems. Many prior works have investigated failures on large-scale systems [16], [17], [19], [20], [31], [34], [35]. For example, Fadishei et al. [20] analyzed workload traces in a grid environment and discovered correlations between failure characteristics and performance metrics. Schroeder et al. [31] examined statistics on failure data collected at two large HPC sites and discovered temporal and spatial correlations of failures. Zheng et al. [34] presented a co-analysis of RAS and job logs that helps in understanding failure patterns and system/user behavior. There are also studies that looked specifically into the reliability of particular components such as DRAMs, disks and GPUs [36]–[38].
Considering the failure characteristics and the correlations between failures and job types, performance metrics and components, several studies investigated machine learning models to predict failures on large-scale systems [21], [22], [39], [40]. Fu et al. [39] proposed a hybrid failure detection framework using one-class and two-class support vector machines (SVM). Chen et al. [21] proposed a prediction method based on a Recurrent Neural Network (RNN) that predicts application failures in the cloud using the Google cluster workload traces. Tariqul et al. [22] developed an approach similar to Chen's using a Long Short-Term Memory (LSTM) network. Many of the proposed approaches are limited to certain performance metrics, such as the studies based on Google cluster workload traces [21], [22], or are limited to certain components of the system, such as the studies focused on GPUs [40].
The drawback of these approaches is that they ignore the human factors that lead to failures. As shown in Section III-D and Section III-E, there are correlations between failures and user behavior. A well-trained and experienced user can potentially produce fewer failed jobs. The approach proposed in this work considers not only performance metrics but also user behavior in the prediction models. In addition, the proposed approach does not rely on complex system log collection and analysis; it utilizes job accounting data that is available in all resource managers. Therefore, the prediction models and failure remediation mechanisms (e.g., killing predicted failures) are easier to integrate into resource managers.
VII. CONCLUSIONS AND FUTURE WORK
In this study, we have analyzed two months of job accounting data collected from a production data center and found that failed workloads accounted for 8.5% of total workloads, consumed 21.1% of the total CPU time and 20.2% of the integral memory usage.
In addition, we have quantified the workload failure rates across nodes, users, and different time scales, and analyzed the correlations between them. Based on this comprehensive understanding of the workload traces, we developed two prediction models (the queue-time model and the runtime model) with five machine learning algorithms and found that Random Forest performed the best, with precision scores of 90.61% and 97.75%, respectively. We further explored the training size and its impact on prediction performance and training time, and concluded that 30 days of job data is the optimal training size, with 67 seconds of training time for our data sets. Our experimental results show that the workload failure prediction model can help save CPU time and integral memory usage by up to 16.7% and 14.53%, respectively.
Nevertheless, our study can be further improved in several aspects. First, due to the lack of resource usage data for workloads at different runtimes, we had to create synthetic data to quantify the resource savings gained from the runtime model. This approach may not be representative of all situations, and the accuracy of the predictions may not be as high as expected. Second, because resource request information is an important factor in predicting workload failures, the lack of this feature prevented our model from achieving more accurate predictions. Third, the prediction models only predict the probability of workload failure. Even though we have achieved promising performance, we cannot infer the causality of workload failures based on the available data, since correlation does not imply causality. To further support causality identification, we plan to develop a provenance-based approach for failure prediction in the future.
In large-scale data centers, where workload failures become the norm, proactive failure management is critical to improving system reliability, availability, and scalability.
In future work, we plan to improve the prediction by adding more features to the training data, such as hardware monitoring metrics and system logs, and to explore other machine learning algorithms, such as LSTM. In addition, understanding the causality of workload failures is important for both system administrators and users. We hope to conduct causal inference studies when detailed provenance is available. Moreover, failure-aware resource scheduling is also a promising research direction and deserves further study.
REFERENCES
[1] F. Cappello, G. Al, W. Gropp, S. Kale, B. Kramer, and M. Snir, “Toward exascale resilience: 2014 update,” Supercomputing Frontiers and Innovations: an International Journal, vol. 1, no. 1, pp. 5–28, 2014.
[2] G. Candea, A. B. Brown, A. Fox, and D. Patterson, “Recovery-oriented computing: Building multitier dependability,” Computer, vol. 37, no. 11, pp. 60–67, 2004.
[3] G. Candea, S. Kawamoto, Y. Fujiki, G. Friedman, and A. Fox, “Microreboot: a technique for cheap recovery,” arXiv preprint cs/0406005, 2004.
[4] P. H. Hargrove and J. C. Duell, “Berkeley lab checkpoint/restart (blcr) for linux clusters,” in Journal of Physics: Conference Series, vol. 46, no. 1. IOP Publishing, 2006, p. 067.
[5] S. K. Garg, C. S. Yeo, A. Anandasivam, and R. Buyya, “Environment-conscious scheduling of hpc applications on distributed cloud-oriented data centers,” Journal of Parallel and Distributed Computing, vol. 71, no. 6, pp. 732–749, 2011.
[6] G. Aupy, M. Shantharam, A. Benoit, Y.
Robert, and P. Raghavan, “Co-scheduling algorithms for high-throughput workload execution,” Journal of Scheduling, vol. 19, no. 6, pp. 627–640, 2016.
[7] M. Rodríguez-Pascual, J. Cao, J. A. Moríñigo, G. Cooperman, and R. Mayo-García, “Job migration in hpc clusters by means of checkpoint/restart,” The Journal of Supercomputing, vol. 75, no. 10, pp. 6517–6541, 2019.
[8] R. Garg, T. Patel, G. Cooperman, and D. Tiwari, “Shiraz: Exploiting system reliability and application resilience characteristics to improve large scale system throughput,” in 2018 48th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN). IEEE, 2018, pp. 83–94.
[9] E. N. Elnozahy and J. S. Plank, “Checkpointing for peta-scale systems: A look into the future of practical rollback-recovery,” IEEE Transactions on Dependable and Secure Computing, vol. 1, no. 2, pp. 97–108, 2004.
[10] F. Cappello, “Fault tolerance in petascale/exascale systems: Current knowledge, challenges and research opportunities,” The International Journal of High Performance Computing Applications, vol. 23, no. 3, pp. 212–226, 2009.
[11] R. K. Sahoo, A. J. Oliner, I. Rish, M. Gupta, J. E. Moreira, S. Ma, R. Vilalta, and A. Sivasubramaniam, “Critical event prediction for proactive management in large-scale computer clusters,” in Proceedings of the ninth ACM SIGKDD international conference on Knowledge discovery and data mining, 2003, pp. 426–435.
[12] P. Yalagandula, S. Nath, H. Yu, P. B. Gibbons, and S. Seshan, “Beyond availability: Towards a deeper understanding of machine failure characteristics in large distributed systems,” in WORLDS, 2004.
[13] J. W. Mickens and B. D. Noble, “Exploiting availability prediction in distributed systems,” in NSDI, vol. 6, 2006, pp. 73–86.
[14] A. Nukada, H. Takizawa, and S.
Matsuoka, “Nvcr: A transparent checkpoint-restart library for nvidia cuda,” in 2011 IEEE International Symposium on Parallel and Distributed Processing Workshops and Phd Forum. IEEE, 2011, pp. 104–113.
[15] A. Rezaei, G. Coviello, C.-H. Li, S. Chakradhar, and F. Mueller, “Snapify: Capturing snapshots of offload applications on xeon phi many-core processors,” in Proceedings of the 23rd international symposium on High-performance parallel and distributed computing, 2014, pp. 1–12.
[16] N. El-Sayed and B. Schroeder, “Reading between the lines of failure logs: Understanding how hpc systems fail,” in 2013 43rd annual IEEE/IFIP international conference on dependable systems and networks (DSN). IEEE, 2013, pp. 1–12.
[17] S. Ghiasvand, F. M. Ciorba, R. Tschüter, and W. E. Nagel, “Lessons learned from spatial and temporal correlation of node failures in high performance computers,” in 2016 24th Euromicro International Conference on Parallel, Distributed, and Network-Based Processing (PDP). IEEE, 2016, pp. 377–381.
[18] T. Kimura, A. Watanabe, T. Toyono, and K. Ishibashi, “Proactive failure detection learning generation patterns of large-scale network logs,” IEICE Transactions on Communications, 2018.
[19] S. Ghiasvand and F. M. Ciorba, “Anomaly detection in high performance computers: A vicinity perspective,” in 2019 18th International Symposium on Parallel and Distributed Computing (ISPDC). IEEE, 2019, pp. 112–120.
[20] H. Fadishei, H. Saadatfar, and H. Deldari, “Job failure prediction in grid environment based on workload characteristics,” in 2009 14th International CSI Computer Conference. IEEE, 2009, pp. 329–334.
[21] X. Chen, C.-D. Lu, and K. Pattabiraman, “Failure prediction of jobs in compute clouds: A google cluster case study,” in 2014 IEEE International Symposium on Software Reliability Engineering Workshops. IEEE, 2014, pp. 341–346.
[22] T. Islam and D.
Manivannan, “Predicting application failure in cloud: +A machine learning approach,” in 2017 IEEE International Conference +on Cognitive Computing (ICCC). +IEEE, 2017, pp. 24–31. +[23] D. Andresen, W. Hsu, H. Yang, and A. Okanlawon, “Machine learn- +ing for predictive analytics of compute cluster jobs,” arXiv preprint +arXiv:1806.01116, 2018. +[24] J. Li, G. Ali, N. Nguyen, J. Hass, A. Sill, T. Dang, and Y. Chen, +“Monster: An out-of-the-box monitoring tool for high performance +computing systems,” in 2020 IEEE International Conference on Cluster +Computing (CLUSTER). +IEEE, 2020, pp. 119–129. +[25] HPCC. +(2021) +High +Performance +Computing +Center. +[Online]. +Available: http:www.depts.ttu.edu/hpcc/ +[26] D. Technologies. (2021) Integrated Dell Remote Access Controller +(iDRAC). [Online]. Available: https://www.delltechnologies.com/en-us/ +solutions/openmanage/idrac.htm +[27] DMTF. (2021) DMTF’s Redfish®. [Online]. Available: https://www. +dmtf.org/standards/redfish +[28] U. +G. +Engine. +(2020) +Univa +Grid +Engine. +[Online]. +Available: +https://www.univa.com/ +[29] H. Li, D. Groep, L. Wolters, and J. Templon, “Job failure analysis +and its implications in a large-scale production grid,” in 2006 Second +IEEE International Conference on e-Science and Grid Computing (e- +Science’06). +IEEE, 2006, pp. 27–27. +[30] G. Engine. (2010) Grid engine Man Pages. [Online]. Available: +http://gridscheduler.sourceforge.net/htmlman/htmlman5/accounting.html +[31] B. Schroeder and G. A. Gibson, “A large-scale study of failures in high- +performance computing systems,” IEEE transactions on Dependable and +Secure Computing, vol. 7, no. 4, pp. 337–350, 2009. +[32] Wikipedia. (2021) Dummy variable (statistics). [Online]. Available: +https://en.wikipedia.org/wiki/Dummy variable (statistics) +[33] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, +O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. 
Dubourg et al., +“Scikit-learn: Machine learning in python,” the Journal of machine +Learning research, vol. 12, pp. 2825–2830, 2011. +[34] Z. Zheng, L. Yu, W. Tang, Z. Lan, R. Gupta, N. Desai, S. Coghlan, and +D. Buettner, “Co-analysis of ras log and job log on blue gene/p,” in +2011 IEEE International Parallel & Distributed Processing Symposium. +IEEE, 2011, pp. 840–851. +[35] C. Di Martino, Z. Kalbarczyk, R. K. Iyer, F. Baccanico, J. Fullop, and +W. Kramer, “Lessons learned from the analysis of system failures at +petascale: The case of blue waters,” in 2014 44th Annual IEEE/IFIP +International Conference on Dependable Systems and Networks. IEEE, +2014, pp. 610–621. +[36] A. A. Hwang, I. A. Stefanovici, and B. Schroeder, “Cosmic rays don’t +strike twice: understanding the nature of dram errors and the implications +for system design,” ACM SIGPLAN Notices, vol. 47, no. 4, pp. 111–122, +2012. +[37] V. Sridharan, J. Stearley, N. DeBardeleben, S. Blanchard, and S. Guru- +murthi, “Feng shui of supercomputer memory positional effects in dram +and sram faults,” in SC’13: Proceedings of the International Conference +on High Performance Computing, Networking, Storage and Analysis. +IEEE, 2013, pp. 1–11. +[38] B. Nie, J. Xue, S. Gupta, C. Engelmann, E. Smirni, and D. Tiwari, +“Characterizing temperature, power, and soft-error behaviors in data +center systems: Insights, challenges, and opportunities,” in 2017 IEEE +25th International Symposium on Modeling, Analysis, and Simulation of +Computer and Telecommunication Systems (MASCOTS). +IEEE, 2017, +pp. 22–31. +[39] S. Fu, J. Liu, and H. Pannu, “A hybrid anomaly detection frame- +work in cloud computing using one-class and two-class support vector +machines,” in International conference on advanced data mining and +applications. +Springer, 2012, pp. 726–738. +[40] B. Nie, J. Xue, S. Gupta, T. Patel, C. Engelmann, E. Smirni, and +D. 
Tiwari, “Machine learning models for gpu error prediction in a +large scale hpc system,” in 2018 48th Annual IEEE/IFIP International +Conference on Dependable Systems and Networks (DSN). +IEEE, 2018, +pp. 95–106. + diff --git a/VtE4T4oBgHgl3EQfng0j/content/tmp_files/load_file.txt b/VtE4T4oBgHgl3EQfng0j/content/tmp_files/load_file.txt new file mode 100644 index 0000000000000000000000000000000000000000..c2bcff54bb662a39b003e40dc9649a9e1b030d52 --- /dev/null +++ b/VtE4T4oBgHgl3EQfng0j/content/tmp_files/load_file.txt @@ -0,0 +1,955 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf,len=954 +page_content='Workload Failure Prediction for Data Centers Jie Li Department of Computer Science Texas Tech University Lubbock, USA jie.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='li@ttu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='edu Rui Wang Department of Computer Science Texas Tech University Lubbock, USA rui.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='wang@ttuhsc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='edu Ghazanfar Ali Department of Computer Science Texas Tech University Lubbock, USA Ghazanfar.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='Ali@ttu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='edu Tommy Dang Department of Computer Science Texas Tech University Lubbock, USA tommy.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='dang@ttu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='edu Alan Sill High-Performance Computing Center Texas Tech University Lubbock, USA alan.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='sill@ttu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='edu Yong Chen Department of Computer Science Texas Tech University Lubbock, USA yong.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='chen@ttu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='edu Abstract—Failed workloads that consumed significant com- putational resources in time and space affect the efficiency of data centers significantly and thus limit the amount of scientific work that can be achieved.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' While the computational power has increased significantly over the years, detection and prediction of workload failures have lagged far behind and will become increasingly critical as the system scale and complexity further increase.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' In this study, we analyze workload traces collected from a production cluster and train machine learning models on a large amount of data sets to predict workload failures.' 
Our prediction models consist of a queue-time model that estimates the probability of workload failures before execution and a runtime model that predicts failures at runtime. Evaluation results show that the queue-time model and runtime model can predict workload failures with a maximum precision score of 90.61% and 97.75%, respectively. By integrating the runtime model with the job scheduler, it helps reduce CPU time and memory usage by up to 16.7% and 14.53%, respectively.

Index Terms—Data Center, Failure Prediction, Predictive Analytic, Big Data, Machine Learning

I. INTRODUCTION

The scale and complexity of many data centers have significantly increased over the years. Meanwhile, the demand from the user community for computational and storage capability has considerably increased too. This combination of the increased scale of data centers and the size of workloads with different requirements and characteristics has resulted in growing node and workload failures, posing a threat to the reliability, availability, and scalability (RAS) of data centers. For example, all else being equal, a system that is 1,000 times more powerful will have at least 1,000 times more components and will fail 1,000 times more often [1], resulting in a long-running job utilizing a large number of nodes being terminated due to frequent failures. Therefore, over the past decades, various methods and algorithms were proposed to improve system resilience and efficiency [2]–[7].

Reactive strategies, such as Checkpoint/Restart (C/R) [4], [7], are conventional approaches for fault tolerance. As an example, a reactive fault tolerance strategy for a node failure is to reschedule a workload to a new node and restart it from a specific checkpoint. However, checkpointing a job in a large-scale system can incur large I/O overhead when writing and reading workload state [8], and takes an overhead of more than 15% of the total execution time [9], [10], which significantly impedes science productivity. As a result, researchers on failure management have found that prevention is better than cure and have shifted to proactive management strategies [7], [11]–[15].

In contrast to reactive strategies, proactive strategies develop models based on the failure data in data centers to predict node or workload failures in the near future and take preventive measures to improve the RAS of data centers. Numerous research efforts have developed node failure detection and prediction methods by utilizing temporal and/or spatial correlations of failures [16]–[19]. They usually investigate system behavior via syslog analysis and have developed supervised and unsupervised approaches for predicting failures in data centers. A number of studies have attempted workload-centric failure detection and prediction based on resource usage or requested resources [20]–[23]. However, only a limited amount of workload data is publicly available due to confidentiality or other reasons. In addition, analyzing and extracting insightful knowledge from massive amounts of data is daunting, given the increasing scale and complexity of data sets.

This research aims at using a machine learning-based approach to predict workload failures in data centers. In particular, we investigate two months of workload traces collected from a production cluster in order to find correlations between workload attributes and exit status (including error status). We seek to train supervised learning models to predict: (1) the failure probability of a workload at queue time, and (2) the likelihood of failure over the life-span of a workload. Knowing whether jobs are likely to fail can be valuable both for users, who can be alerted to potential failures, and for the resource manager (both the software and system administrators), which can proactively prevent wasting computational resources.
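Supervised models of this kind can be prototyped with off-the-shelf tooling such as scikit-learn [33]. The sketch below is illustrative only: the features and labels are randomly generated stand-ins rather than the actual Quanah traces, and the chosen queue-time attributes are assumptions, not the paper's final feature set.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score, precision_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000

# Synthetic stand-ins for queue-time attributes (all assumed for illustration):
# label-encoded owner, requested slots, and label-encoded parallel environment.
X = np.column_stack([
    rng.integers(0, 50, n),   # owner (label-encoded)
    rng.integers(1, 37, n),   # slots requested
    rng.integers(0, 5, n),    # parallel environment (label-encoded)
])
# Synthetic label: 1 = failed, 0 = completed, loosely tied to the owner column
# so there is a learnable signal.
y = (rng.random(n) < np.where(X[:, 0] < 10, 0.5, 0.2)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

# Train a Random Forest and score it with the metrics used in the paper.
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)

precision = precision_score(y_test, y_pred, zero_division=0)
f1 = f1_score(y_test, y_pred, zero_division=0)
print(f"precision={precision:.3f} f1={f1:.3f}")
```

A runtime model would follow the same pattern, but with resource-usage features (e.g., CPU time, memory, IO) snapshotted during execution.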
Consequently, the RAS and productivity of data centers can be improved in return by better managing the workloads that are likely to fail.

arXiv:2301.05176v1 [cs.DC] 12 Jan 2023

We make the following contributions in this study:
- We analyze workload traces collected from a production data center and perform an extensive characterization study of workload failure rates across nodes, users, and different time scales. We investigate the correlation between workload characteristics and failures, and identify the relevant factors that lead to failures.
- We apply several machine learning algorithms to our data set and train two prediction models: a Queue-time model and a Runtime model. We find that Random Forest achieved the best prediction performance in terms of Precision and F1 scores in both models. Experimental results show that these two models predict workload failures with a maximum Precision of 90.61% and 97.75%, respectively.
- We quantify the resource savings achieved by applying the runtime prediction model to workloads at different times. Based on the prediction results, proactive failure management (e.g., killing workloads that are predicted to fail) can achieve CPU and memory savings of up to 16.7% and 14.53%, respectively.
- We investigate the effects of training data set size to find the optimum size that can achieve acceptable prediction performance with minimum training time.

The rest of this paper is organized as follows. Section II describes the background of this research, including the monitoring infrastructure, data points, and sources of anomalies. In Section III, we analyze the workload data. Section IV describes the machine learning algorithms we have investigated in this research and explains our methodology. The experimental results are presented in Section V. Section VI provides an overview of related work, and we conclude this research in Section VII.

II. BACKGROUND

This research study is conducted on a production data center called Quanah, where scientists from all major scientific fields, such as astrophysics, computational chemistry, and bioinformatics, perform simulations and scientific computations. In our previous work [24], we designed and implemented a monitoring, data collection, and management infrastructure to gather workload and node metrics from the cluster in real time. The Quanah cluster and monitoring framework are described in the next section, followed by an analysis of the sources of failures in common data centers.

A. Quanah Cluster

The Quanah cluster at the High Performance Computing Center (HPCC) of Texas Tech University [25] was commissioned in early 2017 and expanded to its current size in 2019; it comprises 467 nodes with Intel Xeon processors providing 36 cores per node. Quanah has a total of 16,812 cores with a benchmarked total computing power of 485 Teraflops/s and provides 2.5 petabytes of storage capability. The cluster is based on Dell EMC PowerEdge™ C6320 servers, which are equipped with the integrated Dell Remote Access Controller (iDRAC) [26] providing the Redfish API [27] for accessing the Baseboard Management Controller (BMC). The software environment is based on CentOS 7 Linux, provisioned and managed by OpenHPC, and the cluster is operated with Univa Grid Engine (UGE) [28], set up with multiple queues and with jobs sorted by projects to meet the needs of research activities for many fields and disciplines.

Fig. 1: Sources of Failures in Data Centers. (Diagram: User: wrong input, job submission script; Application: bugs in domain-specific libraries, linear algebra libraries, parallelism runtime failures; System: operating system, file system; Hardware: CPU, GPU, memory, disk, NIC, etc.)

B. Monitoring Infrastructure

The monitoring data in the Quanah cluster is obtained through an “out-of-the-box” monitoring tool [24] that utilizes the Redfish API to retrieve sensor data from the BMC on each compute node and queries the resource manager (such as UGE or Slurm) for workload information and resource usage data. Sensor metrics and resource usage data are collected at the node level in 60-second intervals and include power usage, fan speed, CPU usage, memory usage, node-job correlations, etc. The time-series data is stored in a time-series database (e.g., InfluxDB). Workload information is derived from the UGE accounting data, which includes job submission time, start time, end time, total CPU time, integral memory usage, IO operations, etc. The workload data is stored in a MySQL database. With several performance optimizations, such as an optimized database schema, high-speed storage, concurrent processing, and transmitting compressed data, our infrastructure provides near real-time analysis and visualization of user-level and node-level status.

C. Sources of Failures

In a data center cluster where computing resources are shared by different workloads submitted by users from various domains, the number of failed jobs (i.e., workloads) can be large. There are three main reasons. First, domain scientists, while skilled in their scientific fields, do not always have sufficient experience and background in computing, especially in large-scale, parallel computing. Second, diverse workloads depend on different libraries, and bugs and missing updates in dependent libraries can lead to unexpected failures. Third, data centers are complex, and any misconfiguration or hardware errors can cause workload termination. Figure 1 summarizes the high-level root cause categories in data centers, where failures are attributed to user, application, system, or hardware problems.

Users: insufficient resource request or wrong input in the job submission script can cause workloads to fail [29].
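Whichever category a failure falls into, it ultimately surfaces in the scheduler's accounting record. A minimal sketch of labeling job records for training is shown below; the failed and exit_status field names follow the Grid Engine accounting format [30], but the records are invented and the labeling rule (either field nonzero means failure) is an assumption for illustration, not necessarily the paper's exact definition.

```python
# Toy accounting records; the "failed" and "exit_status" field names follow
# the Grid Engine accounting(5) format [30], but the values are invented.
records = [
    {"job_number": 101, "owner": "alice", "failed": 0,  "exit_status": 0},
    {"job_number": 102, "owner": "bob",   "failed": 0,  "exit_status": 1},
    {"job_number": 103, "owner": "carol", "failed": 37, "exit_status": 0},
]

def label_failure(record):
    """Label a job 1 (failed) if the scheduler flagged it or it exited nonzero."""
    return int(record["failed"] != 0 or record["exit_status"] != 0)

labels = [label_failure(r) for r in records]
print(labels)  # → [0, 1, 1]: the first job completed cleanly, the other two did not
```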
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='TABLE I: Features of Workloads ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='Feature ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='Type ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='Description ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='job id ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='Numeric ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='Job identifier ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='owner ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='Categorical ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='Owner of the job ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='group ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='Categorical ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='The group id of the job owner ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='job name ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='Categorical ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='Job name ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='granted pe ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='Categorical ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='The parallel environment ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='hostname ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='Categorical ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='Name of the execution host ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='submission ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='Numeric ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='Submission time (in epoch time format) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='start time ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='Numeric ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='Start time (in epoch time format) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='end time ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='Numeric ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='End time (in epoch time format) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='wallclock ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='Numeric ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='Difference between end and start time ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='cpu ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='Numeric ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='The CPU time usage in seconds ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='mem ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='Numeric ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='The integral memory usage in Gbytes1 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='io ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='Numeric ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='The amount of data transferred in Gbytes ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='iow ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='Numeric ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='The io wait time in seconds ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='maxvmem ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='Numeric ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='The maximum vmem size in bytes ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='slots ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='Numeric ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='The number of parallel processes ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='wait time ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='Numeric ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='Difference between start and submit time ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='exit status ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='Numeric ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='Exit status of the job script ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='1 The sum of the amount of memory used in each time interval ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='for the life of the job.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' TABLE II: Exit Status Summary for Failed Workloads Exit Code Meaning Number Percentage 1 Miscellaneous errors 21367 58.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='16% 2 Missing keyword or command 3032 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='25% 7 Argument list too long 6549 17.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='83% 127 Command not found 528 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='44% 137 (System signal 9) Kill 190 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='52% 255 Exit status out of range 4598 12.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='52% Others 475 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='28% Note that user interruptions, such as issuing a command like “scancel”, can interrupt a running workload and cause a failure too.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' However, since the effect of cancelling a running workload is obvious to the user, we do not categorize it as a source of failures.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Applications: mis-configured applications increase the risk of poor performance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' In addition, buggy codes, missing dependent library updates, and/or bugs can cause applications to terminate unexpectedly too.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Systems: mis-configurations or failures of system re- sources and components may considerably affect the performance of workloads or cause failures.' 
Hardware: hardware errors are one of the most devastating issues for data centers. Severe hardware issues can lead to the malfunction of the entire system. Events such as memory hardware errors, CPU overheating, etc., will result in workload errors and crashes.

III. WORKLOAD ANALYSIS

To predict the workload failures in data centers, it is crucial to understand the characteristics of failed workloads. In this section, we first present an overview of the workload trace, then quantitatively analyze the percentage of failures in the workloads and the computational resource consumption characteristics. After that, we further study the failure rate across the nodes, users, and different time scales.

Fig. 2: Proportion and Resource Consumption Characteristics of workloads. Red indicates failed workloads and dark blue indicates successful and cancelled workloads (non-failures). [Panels: failed workloads account for 8.5% in quantity, 21.1% in CPU time, and 20.2% in integral memory usage; non-failures account for 91.5%, 78.9%, and 79.8%, respectively.]

A. Workload Overview

The workload trace is derived from job accounting data collected from the Quanah cluster for the period of August 1, 2020 to October 1, 2020, which contains 324,358 instances submitted by 204 unique users (i.e., owners). Notice that workloads that cannot be started on the execution host (e.g., because the user does not have a valid account on that node [30]) are recorded in the raw job accounting data; we drop these entries because they are killed by the job scheduler for reasons that are not part of the sources of failures summarized above. Additionally, they do not consume compute resources.

Table I lists 18 selected features. These features can be categorized into two groups. The first group includes categorical features, such as owner, group, and job name. The other group includes numeric features, such as CPU time usage and integral memory usage.

When a batch job exits, the scheduler (in our case UGE) generates an exit_status field in the job accounting data. According to the UGE documentation, a general exit status convention is defined as follows. An exit status of 0 indicates a successful workload. If the command terminates normally, the exit status is the value returned by the command in the job script, which is in line with normal shell conventions. If the script command exits abnormally, a value of 128 is added to the value of the command. Thus, an exit status ≥ 128 can be decomposed into 128 + a system signal, where the system signal can be a fatal error signal such as 6 (SIGABRT) or 9 (SIGKILL). We summarize the exit statuses that indicate failure in Table II. We find that the most common exit status is 1 (58.16%), indicating that miscellaneous errors cause the failures. The next most significant exit status is 7 (17.83%), which occurs whenever a user feeds too many arguments to the job submission script. We also notice that 190 (0.52%) workloads are killed by users through system signal 9. Since we do not consider cancelled workloads as failures, we drop these 190 workloads with exit status 137. In addition, we do not intend to predict exact errors, so we convert all non-zero exit statuses to 1, representing workloads that face problems during run time. We use the exit status to distinguish workloads that completed successfully (exit status 0).
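The exit-status handling described above (exit statuses ≥ 128 decomposed as 128 + a system signal, status 137 dropped as a user cancellation, and all remaining non-zero statuses collapsed to a binary failure label) can be sketched in Python. The helper names below are illustrative, not from the paper.

```python
# Hedged sketch of the paper's exit-status labeling rules; names are ours.
FATAL_SIGNALS = {6: "SIGABRT", 9: "SIGKILL"}

def decode_exit_status(status: int):
    """Split an exit status into (base value, signal name or None)."""
    if status >= 128:
        sig = status - 128          # exit status >= 128 means 128 + signal
        return 128, FATAL_SIGNALS.get(sig, f"signal {sig}")
    return status, None

def failure_label(status: int):
    """Map an exit status to the binary label; None means the job is dropped."""
    if status == 137:               # killed by the user (signal 9): not a failure
        return None
    return 0 if status == 0 else 1  # 0 = success, any other status = failure
```

For example, `decode_exit_status(137)` yields `(128, "SIGKILL")`, and `failure_label(7)` yields `1`.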
Fig. 3: Number and percentage of workload failures per node distributed by node ID (a) and by physical location (b). In sub-figure (a), red indicates failed workloads and dark blue indicates successful workloads. In sub-figure (b), the darkness of the color represents the workload failure rate (in other words, the darker the color, the higher the workload failure rate).

B. Proportion of workload failures

Figure 2 presents the proportion and resource consumption characteristics of workload failures. As shown in the figure, the workload failure rate is 8.5% for all submitted jobs (including successful and failed workloads) in quantity. We further analyze the CPU time consumed by failed workloads and find that failed workloads cost 21.1% of the total CPU time. The proportion of CPU time for failed workloads is larger than the proportion of the number of failed workloads, indicating that the more processors a workload uses and the longer it runs, the higher the probability that this workload will fail. Additionally, we quantify the integral memory usage consumed by failed workloads. As shown in Figure 2, the wasted memory resource rate is 20.2%. All these statistics imply that failed workloads waste a significant amount of computational resources and therefore degrade the system efficiency.

C. Distribution by nodes

We depict the distribution of failures across the nodes of the system by node ID and physical location in Figure 3a and Figure 3b, respectively. Figure 3a shows the total number and percentage of workload failures for each node. We first observe that about 20 nodes serve a relatively larger number of workloads than the other nodes, while some nodes have more than 60% of workload failures. It is worth noting that the nodes serving a large number of workloads do not necessarily have a higher percentage of workload failures. A possible explanation is that nodes with a high workload failure rate may have node-specific (hardware or operating system) vulnerabilities. The 467 nodes in the Quanah cluster are hosted in 10 racks, and each node can be uniquely addressed by rack and chassis number. Each column shown in Figure 3b represents one rack of nodes and each row represents one chassis. From Figure 3b, we observe that nodes in racks 1, 3 and 7 have relatively high workload failure rates. Since the power, temperature and connectivity of all nodes located in a rack are controlled together, problems in these areas can cause failures to occur in physical location vicinity [19].

Fig. 4: Number and percentage of workload failures distributed by user ID (a) and by wallclock (b).

D. Distribution by users

To find out the correlation between users and failed workloads, we plot the workload distribution by user ID, as shown in Figure 4a. The total number of jobs submitted by users
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='72000 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='108000 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='144000 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='180000 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='Wallclock (seconds)(a) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='(b) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' 5: Number and percentage of workload failures distributed by hour of the day (a) and by the day of the week (b).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' ranges from a few to over 10,000.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' We observe that the failure rate of workloads per user varies significantly and those users who submit a small number of workloads have a large portion of failed workloads.' 
These statistics suggest that users' experience in properly configuring their applications and/or requesting computational resources varies widely, and that these inexperienced users contribute a large fraction of failed workloads.

E. Distribution by time

The wallclock is the actual time taken from the start to the end of a workload. On the Quanah cluster, if the user does not specify a runtime in the script, the job scheduler applies a default runtime limit of 172,800 seconds (48 hours) to each submitted job. We illustrate the distribution of workloads by wallclock in Figure 4b. In general, the number of workloads decreases as the wallclock increases, and there is no significant correlation between the failure rate of workloads and the number of workloads. However, we observe a reverse correlation between 24,000s and 84,000s: high workload intensities are associated with low failure rates and vice versa.
In addition, the failure rate also appears to be high around 144,000s.

Fig. 6: Workflow of Predicting Workload Failures in Data Centers

It is commonly known that the usage pattern of data centers fluctuates with time [31]. Figure 5 categorizes the workload failures by the hour of the day and by the day of the week. We observe a slight correlation between failure rate and time: during the hours when the highest and the lowest numbers of jobs occur, the failure rate is below 5%, while during the rest of the day it is between 10% and 22%. A possible explanation for this observation is that experienced users submit large array jobs that contribute to the majority of the workloads.
Furthermore, their code is robust and their applications are well configured, resulting in low failure rates. When the workload intensity is low, the system is less vulnerable. The failure rate for each day of the week shows similar results: the highest workload volumes coincide with the lowest failure rate, while on the remaining days of the week the workload failure rate does not change much.

IV. METHODOLOGY

In this section, we describe the workflow of predicting workload failures in data centers.
As shown in Figure 6, the workflow consists of four phases: (1) data collection: collecting metrics from data centers; (2) data preparation: preprocessing the data into a structured format and extracting features for machine learning models; (3) model training: training queue-time and runtime models using machine learning algorithms; (4) remediation management: applying remediation techniques that leverage the prediction results to optimize the management of data centers. Data collection has already been discussed in Section II-B; therefore, we focus on data preparation and model training in this research. We leave failure remediation management and data center optimizations as near-future research work.
A. Data Preparation

1) Preprocessing: In our current design and implementation, the job accounting data is stored in a MySQL database; we perform a select operation with start and end times to select the data collected from August 1st, 2020 to October 1st, 2020. The data is then saved into a dataframe. In the data preprocessing phase, we convert raw features into a format more suitable for machine learning training. Specifically, we create dummy variables for the categorical variables by using one-hot encoding [32], and scale the numerical features to prevent features with high variability from having more influence on the prediction.
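The one-hot encoding and scaling steps above can be sketched with pandas and scikit-learn as follows. This is a minimal illustration, not the paper's actual pipeline; the column names ("queue", "wallclock") and sample values are assumptions, not the real accounting schema.

```python
# Sketch of the preprocessing step: one-hot encode categorical
# features and standardize numeric ones. Columns are illustrative.
import pandas as pd
from sklearn.preprocessing import StandardScaler

jobs = pd.DataFrame({
    "queue": ["omni", "quanah", "omni"],     # categorical feature
    "wallclock": [120.0, 36000.0, 7200.0],   # numeric feature (seconds)
})

# Create dummy variables for categorical columns via one-hot encoding.
encoded = pd.get_dummies(jobs, columns=["queue"])

# Standardize numeric features so high-variance columns do not
# dominate the prediction.
scaler = StandardScaler()
encoded[["wallclock"]] = scaler.fit_transform(encoded[["wallclock"]])
```

After this step, each category becomes its own 0/1 column and numeric columns have zero mean and unit variance.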
Another important design consideration in data preparation is to remove irrelevant attributes and derive appropriate features from the original attributes. Irrelevant attributes add extra dimensions to the data set and can distract machine learning algorithms from achieving accurate prediction rules. On the other hand, deriving proper combinational features can boost the prediction accuracy. In our case, we drop the feature job id because it is assigned by the job scheduler and does not reveal the characteristics of the workload. We derive hours of the day and days of the week from time-related features to augment the data.
We also derive several numeric features from the resource usage data, such as CPU intensity and average memory usage. CPU intensity is defined as (cpu/slots)/wallclock, i.e., the ratio of the CPU time of a workload's single processor to its overall wallclock (i.e., runtime). The average memory usage is defined as mem/wallclock, i.e., the ratio of the integral memory usage of a workload to its runtime. In addition, the collected data does not contain information about applications and libraries.
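The two derived features defined above can be expressed directly; the sketch below uses the field names from the text (cpu, slots, mem, wallclock), but the sample record is made up for illustration.

```python
# Derived resource-usage features, following the definitions in the text.
def cpu_intensity(cpu, slots, wallclock):
    # (cpu / slots) / wallclock: per-slot CPU time over the job's runtime.
    return (cpu / slots) / wallclock

def avg_memory_usage(mem, wallclock):
    # mem / wallclock: integral memory usage over the job's runtime.
    return mem / wallclock

# Hypothetical accounting record: 7200s of CPU time across 4 slots,
# 360,000 MB*s of integral memory usage, 3600s of wallclock.
job = {"cpu": 7200.0, "slots": 4, "mem": 360000.0, "wallclock": 3600.0}
intensity = cpu_intensity(job["cpu"], job["slots"], job["wallclock"])  # 0.5
avg_mem = avg_memory_usage(job["mem"], job["wallclock"])               # 100.0
```

A CPU intensity near 1.0 would indicate a fully CPU-bound job; the example job spends only half of its runtime on CPU work per slot.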
To overcome this limitation, we apply Natural Language Processing (NLP) techniques to job names and identify similar job names submitted by the same user. We then assign a uniform name to these workloads as their job names. This process aims to categorize workloads that use the same libraries. An underlying hypothesis of this process is that workloads with similar job names submitted by the same user tend to be the same applications and use the same libraries, differing only in parameters or parts of the code. As discussed in Section III-A, we do not intend to predict the exact workload error; therefore, the prediction is a binary classification problem (i.e., success or failure). We convert the exit status to 1 if the workload fails and 0 otherwise, and use this as the class label.
2) Feature Selection: Predicting workload failures provides input for failure remediation management, where possible techniques include: 1) notifying users of potential workload failures after job submission but before execution; 2) making better scheduling decisions based on the prediction, thereby encouraging users to improve code quality and request appropriate computational resources; and 3) killing workloads that will fail before they waste too many computational resources. To this end, we plan to train two models, one for predicting pre-run failures (i.e., the queue-time model) and the other for predicting runtime failures (i.e., the runtime model). The queue-time model and runtime model are trained with features available in different job states.
The queue-time model is trained with categorical features such as owner, group, job name, department, etc. The runtime model uses not only categorical features but also resource usage features, such as CPU time, integral memory usage, data transferred in IO, etc. Note that in our case, resource request data, such as estimated job running time and expected maximum memory needed during runtime, are not recorded in the job accounting data, slightly limiting the available features that can be used to train the queue-time model. For other data sets that include resource request information, the queue-time model can be enhanced and its prediction results should be more accurate.

B. Model Training

We use five classification algorithms to implement machine learning models and train them with our 324,358 instances to predict workload failures. These algorithms and the corresponding hyper-parameters are described below.
Gaussian Naive Bayes: Naive Bayes is a probabilistic machine learning algorithm based on the Bayes Theorem, a simple mathematical formula for calculating conditional probabilities. In our implementation, we use Gaussian Naive Bayes (GNB), i.e., Naive Bayes extended to real-valued attributes. It is easy to implement because we only need to estimate the mean and standard deviations of the training data. GNB does not accept parameters, except for the priors parameter, for which we use the default value of "None" in our model.

Logistic Regression: Logistic Regression (LR) is a classification algorithm for finding the relationship between features and outcome probabilities and is the most widely used machine learning algorithm in classification problems. It is relatively fast compared to other supervised classification techniques.
Since we do not predict the exact value of the exit status, we use Binomial Logistic Regression. Logistic Regression does not actually have any critical hyper-parameters to tune. We set the inverse regularization parameter (i.e., C) to 0.1 and choose "l2" as the penalty parameter and "liblinear" as the solver parameter.

Linear Discriminant Analysis: Linear Discriminant Analysis (LDA), as the name implies, is most commonly used as a dimensionality reduction technique, but it can also be used as a classification tool by finding linear combinations of features that separate two or more classes. LDA works by calculating summary statistics, such as mean and standard deviation, of the input features by class label.
Predictions are performed by estimating the probability that a new instance belongs to each class label based on the values of each feature. We set the solver to "lsqr", which performs best on our data set compared to the other built-in solvers.

Decision Tree: Decision Tree (DT) is a predictive model that predicts values by learning decision rules inferred from data features. One advantage of this algorithm is that non-linear relationships between features do not affect the performance of the tree. It can handle both categorical and numeric data. The criterion parameter in the DT is set to "gini" and the splitter parameter is set to "best". All other parameters are kept at their defaults.

Random Forest: Random Forest (RF) is an ensemble method that consists of a large number of individual decision trees.
It uses bagging and feature randomness in the construction of each tree to create a forest of uncorrelated trees. Each individual tree in the random forest produces a class prediction, and the class with the most votes becomes the predicted value of the model. As with the Decision Tree, we set the criterion parameter to "gini" instead of "entropy". The number of random features (i.e., max_features) considered at each split is set to "sqrt", which is usually good for classification problems. The rest of the parameters are left unchanged.

Since our data set is very large, we use the holdout method instead of cross-validation to save computational cost. The data set is partitioned into 65% training data, 15% validation data, and 20% testing data.
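The five classifiers with the hyper-parameters stated above, together with the 65/15/20 holdout split, can be sketched in scikit-learn as follows. The synthetic data is a stand-in for the paper's 324,358 job records, and the split is realized with two successive `train_test_split` calls (15% of the whole set is 0.15/0.80 of what remains after the 20% test split).

```python
# Sketch: the five models from the text, with the stated hyper-parameters,
# trained on synthetic stand-in data under a 65/15/20 holdout split.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

models = {
    "GNB": GaussianNB(),  # priors left at the default of None
    "LR": LogisticRegression(C=0.1, penalty="l2", solver="liblinear"),
    "LDA": LinearDiscriminantAnalysis(solver="lsqr"),
    "DT": DecisionTreeClassifier(criterion="gini", splitter="best"),
    "RF": RandomForestClassifier(criterion="gini", max_features="sqrt"),
}

# Synthetic binary-labeled data standing in for the job records.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# Hold out 20% for testing, then 0.1875 of the remainder (15% of the
# whole set) for validation, leaving 65% for training.
X_rest, X_test, y_rest, y_test = train_test_split(
    X, y, test_size=0.20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X_rest, y_rest, test_size=0.1875, random_state=0)

for clf in models.values():
    clf.fit(X_train, y_train)
```

Each model can then be scored on the validation set to compare the algorithms before the final estimate on the test set.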
The training set learns the relationship between the features and the target variables (i.e., 0 for success and 1 for failure). The validation set is used to check how accurately the model defines the relationship between features and known outcomes. The testing data provides a final estimate of the model performance after the model has been trained and validated.

V. EXPERIMENTAL RESULTS

In this section, we describe the evaluation metrics used for our experiments and present the experimental results, including the performance of the machine learning algorithms described above and the potential resource savings that benefit from the prediction, followed by an evaluation of the impact of the training sizes. The models in this study are implemented in the scikit-learn [33] Python library.
A. Evaluation Metrics

1) Prediction Metrics: In order to measure the performance of ML algorithms, it is important to specify evaluation metrics. We use recall (i.e., true positive rate), precision, and F1 score as our measurements. Recall is the ratio of the number of correctly predicted failed workloads to the total number of actual failures. Precision is calculated by dividing the number of correctly predicted failed workloads by the total number of predicted failures. The F1 score is the harmonic mean of recall and precision. A higher score on these three metrics means that the model's classification results are more accurate.
These measurements are defined as:

recall = (# of correctly predicted failures) / (total # of actual failures)    (1)

precision = (# of correctly predicted failures) / (total # of predicted failures)    (2)

F1 score = 2 * (recall * precision) / (recall + precision)    (3)

2) Resource Savings Metrics: The basic proactive failure remediation strategy is to simply kill workloads that are predicted to fail. This strategy is sensitive to false positives, where workloads are incorrectly predicted to fail. Killing workloads inappropriately results in wasted resources, as the killed workloads will be restarted and run at a later time. Therefore, we define the resource saving (R_saving) as:

R_saving = (R_s - R_w) / R_total,    (4)

where R_total is the total resources consumed by failed and successful workloads, R_s is the resource saved by proactively killing failed workloads, and R_w is the resource wasted by killing successful workloads.

B. Failure Prediction

Table III presents the performance of the queue-time model.
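The four metrics above reduce to a few lines of arithmetic; a minimal sketch (function and argument names are ours):

```python
def prediction_metrics(correct_pred_failures, total_pred_failures, total_actual_failures):
    """Recall, precision, and F1 score per Eqs. (1)-(3), as percentages."""
    recall = correct_pred_failures / total_actual_failures * 100
    precision = correct_pred_failures / total_pred_failures * 100
    f1 = 2 * recall * precision / (recall + precision)  # harmonic mean
    return recall, precision, f1

def resource_saving(r_saved, r_wasted, r_total):
    """R_saving per Eq. (4): net fraction of total resources saved."""
    return (r_saved - r_wasted) / r_total
```

As a sanity check against Table III: GNB's queue-time recall of 99.44% and precision of 15.65% combine to 2 * (99.44 * 15.65) / (99.44 + 15.65) ≈ 27.04%, the reported F1 score.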
Specifically, we observe that Gaussian Naive Bayes (GNB) achieves the highest recall score of 99.44%; Random Forest (RF) performs the best with a precision score of 90.61% and an F1 score of 87.71%. The performance of the runtime model is shown in Table IV. Again, RF achieves the best performance, with a precision of 97.75% and an F1 score of 95.91%. Although GNB achieves the highest recall score, its precision score is the lowest, indicating that correct failure predictions make up only a small share of its total failure predictions and that it predicts most successful workloads as failures.
When evaluating the overall performance, we choose RF as the classification algorithm for both models.

TABLE III: Performance of Queue-Time Model
Model  recall  precision  F1 Score  Training Time (s)
GNB    99.44   15.65      27.04     3.5
LR     57.22   86.16      68.77     5.46
LDA    62.14   77.92      69.14     45.12
DT     84.02   90.33      87.06     123.96
RF     85.00   90.61      87.71     149.81

TABLE IV: Performance of Runtime Model
Model  recall  precision  F1 Score  Training Time (s)
GNB    99.44   15.65      27.05     5.13
LR     58.13   86.48      69.52     6.32
LDA    58.92   81.64      68.45     46.7
DT     93.92   94.57      94.24     173.08
RF     94.14   97.75      95.91     145.71

C. Resource Savings

The numeric features such as CPU time and integral memory usage are only known after the workload has completed execution. This fact raises the question of how we can estimate resource savings at different run times of a workload and use these features to predict workload failures early. To apply the runtime model to a running workload, we make the following assumption: resource usage is linearly proportional to run time, so the resource usage at different times can be calculated as:

CRU = FRU * (Time / Wallclock),    (5)

where CRU stands for Current Resource Usage and FRU stands for Final Resource Usage. Based on this formula, we generate a series of test data sets from the original test data (20% of the 2-month workload traces in the Quanah cluster, i.e.
, 12-day workload traces) containing synthetic resource usage at different times, and then apply the runtime model on these data sets. Figure 7a shows the resource savings. From this figure, we observe the same pattern of savings in CPU time and integral memory usage; both achieve the highest savings at the beginning of the time window, 16.7% and 14.53%, respectively. Overall, the resource savings decrease over time, except for a few ups and downs at around 4200s and 14400s.

Fig. 7: Resource Savings in percentage (a) and CPU Time Savings in Node Days (b) at different times.

To understand
the resource savings, we convert the CPU time savings in seconds to CPU time savings in node·days, where a node has 36 CPUs. As shown in Figure 7b, the maximum CPU time savings is about 250 node·days. In other words, applying the runtime model to 12-day workloads of a 467-node cluster helps save the CPU time (and associated power consumption) of a node running for 250 days. To better understand the resource savings, we plot the number of workloads and the prediction performance at different times, as shown in Figure 8a and Figure 8b. Figure 8a shows that the total number of workloads participating in the failure remediation management decreases exponentially, and some workloads complete before the runtime model is applied. Therefore, the resource savings that can be achieved decrease with time. Figure 8b presents the recall, precision, and F1 scores. Throughout time, the precision scores remain at high values.
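Equation (5) and the node·day conversion can be sketched as below; the 36 CPUs per node figure is taken from the text, and the function names are ours:

```python
SECONDS_PER_DAY = 86_400
CPUS_PER_NODE = 36   # per the text, each node has 36 CPUs

def current_resource_usage(final_usage, elapsed, wallclock):
    """Eq. (5): resource usage assumed linearly proportional to run time."""
    return final_usage * elapsed / wallclock

def cpu_seconds_to_node_days(cpu_seconds):
    """Convert saved CPU-seconds into node-days on 36-CPU nodes."""
    return cpu_seconds / (CPUS_PER_NODE * SECONDS_PER_DAY)

# A workload halfway through its wallclock has used half its final CPU time:
print(current_resource_usage(7200.0, 1800, 3600))   # 3600.0
# 250 node-days correspond to 250 * 36 * 86400 = 777.6M CPU-seconds:
print(cpu_seconds_to_node_days(777_600_000))        # 250.0
```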
The recall and F1 scores are decent (both scores stay above 72%), although some fluctuations exist.

D. Effect of Training Size

Even though we achieve a promising prediction accuracy using Random Forest, the training time is long because a large amount of workload traces is used. As shown in Table III and Table IV, the training time in both models is about 150s.

Fig. 8: Number of workloads (a) and Prediction Performance (b) at different times.

In order to find the optimum training size that achieves a balance between prediction accuracy and training time, we build the prediction models using different training sizes in the range of 1 day to 60 days of data. As shown in Figure 9a and Figure 9b, the precision, recall, and F1 scores of the queue-time model and the runtime model do not improve significantly after the training size exceeds 30 days of data.
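The training-size selection above amounts to picking the smallest size whose score sits within a tolerance of the best observed score; a sketch with hypothetical (not measured) F1 values:

```python
def smallest_sufficient_size(scores, tol=1.0):
    """Smallest training size whose score is within `tol` of the maximum."""
    best = max(scores.values())
    return min(size for size, s in scores.items() if s >= best - tol)

# Hypothetical F1 (%) by training size in days -- for illustration only.
f1_by_days = {1: 80.0, 5: 84.0, 10: 85.5, 20: 86.5, 30: 87.6, 45: 87.7, 60: 87.8}
print(smallest_sufficient_size(f1_by_days))  # 30
```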
With a training size of 30 days of data, the training time shortens to 67 seconds, which is acceptable for many data centers to conduct workload failure predictions periodically.

VI. RELATED WORK

Characterizing and quantifying failures in data centers is invaluable for system administrators to understand the behavior of the systems and thus develop strategies to improve the RAS of the systems. Many prior works have investigated failures on large-scale systems [16], [17], [19], [20], [31], [34], [35]. For example, Fadishei et al. [20] analyzed workload traces in a grid environment and discovered correlations between failure characteristics and performance metrics. Schroeder et al. [31] examined statistics on failure data collected at two large HPC sites and discovered temporal and spatial correlations of failures.
Zheng et al. [34] presented a co-analysis of RAS and job logs that helps in understanding failure patterns and system/user behavior.

Fig. 9: Training Size vs. Prediction Performance of Queue-time Model (a) and Runtime Model (b).

There are also studies that looked specifically into the reliability of particular components such as DRAMs, disks, and GPUs [36]–[38].
Considering the failure characteristics and the correlations between failures and job types, performance metrics, and components, several studies investigated machine learning models to predict failures on large-scale systems [21], [22], [39], [40]. Fu et al. [39] proposed a hybrid failure detection framework using one-class and two-class support vector machines (SVMs). Chen et al. [21] proposed a prediction method based on a Recurrent Neural Network (RNN) that predicts application failures in the cloud using the Google cluster workload traces. Tariqul et al. [22] developed an approach similar to Chen's using a Long Short-Term Memory (LSTM) network. Many of the proposed approaches are limited to certain performance metrics, such as studies based on the Google cluster workload traces [21], [22], or are limited to certain components of the system, such as studies focused on GPUs [40].
The drawback of these approaches is that they ignore the human factors that lead to failures. As shown in Section III-D and Section III-E, there are correlations between failures and user behavior. A well-trained and experienced user can potentially produce fewer failed jobs. The approach proposed in this work considers not only performance metrics but also user behavior in the prediction models. In addition, the proposed approach does not rely on complex system log collection and analysis; it utilizes job accounting data that is available in all resource managers. Therefore, the prediction models and failure remediation mechanisms (e.g., killing predicted failures) are easier to integrate into resource managers.
VII. CONCLUSIONS AND FUTURE WORK

In this study, we have analyzed two months of job accounting data collected from a production data center and found that failed workloads accounted for 8.5% of total workloads, consumed 21.1% of the total CPU time, and 20.2% of the integral memory usage. In addition, we have quantified the workload failure rates across nodes, users, and different time scales, and we have analyzed the correlations between them. Based on this comprehensive understanding of the workload traces, we developed two prediction models (a queue-time model and a runtime model) with five machine learning algorithms and found that Random Forest performed the best, with precision scores of 90.61% and 97.75%, respectively.
We further explored the training size and its impact on prediction performance and training time, and we concluded that 30 days of job data is the optimal training size, with 67 seconds of training time for our data sets. Our experimental results show that the workload failure prediction models can help save CPU time and integral memory usage by up to 16.7% and 14.53%, respectively. Nevertheless, our study can be further improved in several aspects. First, due to the lack of resource usage data for workloads at different runtimes, we had to create synthetic data to quantify the resource savings gained from the runtime model. This approach may not be representative of all situations, and the accuracy of the predictions may not be as high as expected.
Second, because resource request information is an important factor in predicting workload failure, the lack of this feature prevented our model from achieving more accurate predictions. Third, the prediction models only predict the probability of workload failure. Even though we have achieved promising performance, we cannot infer the causes of workload failure from the available data, since correlation does not imply causality. To further support causality identification, we plan to develop a provenance-based approach for failure prediction in the future. In large-scale data centers, where workload failures have become the norm, proactive failure management is critical to improving system reliability, availability, and scalability. In future work, we plan to improve the predictions by adding more features to the training data, such as hardware monitoring metrics and system logs, and to explore other machine learning algorithms, such as LSTM.
In addition, understanding the causality of workload failures is important for both system administrators and users. We hope to conduct causal inference studies when detailed provenance is available. Moreover, failure-aware resource scheduling is also a promising research direction and deserves further study.

[Figure: recall, precision, and F1-score (%) and training time (seconds) as a function of training size (days).]
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Available: https://www.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' dmtf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='org/standards/redfish [28] U.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Engine.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' (2020) Univa Grid Engine.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' [Online].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Available: https://www.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='univa.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='com/ [29] H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Li, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Groep, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Wolters, and J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Templon, “Job failure analysis and its implications in a large-scale production grid,” in 2006 Second IEEE International Conference on e-Science and Grid Computing (e- Science’06).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' IEEE, 2006, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' 27–27.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' [30] G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Engine.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' (2010) Grid engine Man Pages.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' [Online].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Available: http://gridscheduler.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='sourceforge.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='net/htmlman/htmlman5/accounting.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='html [31] B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Schroeder and G.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Gibson, “A large-scale study of failures in high- performance computing systems,” IEEE transactions on Dependable and Secure Computing, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' 7, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' 4, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' 337–350, 2009.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' [32] Wikipedia.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' (2021) Dummy variable (statistics).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' [Online].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Available: https://en.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='wikipedia.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content='org/wiki/Dummy variable (statistics) [33] F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Pedregosa, G.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Varoquaux, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Gramfort, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Michel, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Thirion, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Grisel, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Blondel, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Prettenhofer, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Weiss, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Dubourg et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=', “Scikit-learn: Machine learning in python,” the Journal of machine Learning research, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' 12, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' 2825–2830, 2011.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' [34] Z.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Zheng, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Yu, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Tang, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Lan, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Gupta, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Desai, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Coghlan, and D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Buettner, “Co-analysis of ras log and job log on blue gene/p,” in 2011 IEEE International Parallel & Distributed Processing Symposium.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' IEEE, 2011, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' 840–851.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' [35] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Di Martino, Z.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Kalbarczyk, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Iyer, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Baccanico, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Fullop, and W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Kramer, “Lessons learned from the analysis of system failures at petascale: The case of blue waters,” in 2014 44th Annual IEEE/IFIP International Conference on Dependable Systems and Networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' IEEE, 2014, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' 610–621.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' [36] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Hwang, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' A.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Stefanovici, and B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Schroeder, “Cosmic rays don’t strike twice: understanding the nature of dram errors and the implications for system design,” ACM SIGPLAN Notices, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' 47, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' 4, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' 111–122, 2012.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' [37] V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Sridharan, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Stearley, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' DeBardeleben, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Blanchard, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Guru- murthi, “Feng shui of supercomputer memory positional effects in dram and sram faults,” in SC’13: Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' IEEE, 2013, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' 1–11.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' [38] B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Nie, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Xue, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Gupta, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Engelmann, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Smirni, and D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Tiwari, “Characterizing temperature, power, and soft-error behaviors in data center systems: Insights, challenges, and opportunities,” in 2017 IEEE 25th International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems (MASCOTS).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' IEEE, 2017, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' 22–31.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' [39] S.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Fu, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Liu, and H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Pannu, “A hybrid anomaly detection frame- work in cloud computing using one-class and two-class support vector machines,” in International conference on advanced data mining and applications.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Springer, 2012, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' 726–738.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' [40] B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Nie, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Xue, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Gupta, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Patel, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Engelmann, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf'} +page_content=' Smirni, and D.' 
diff --git a/XNE3T4oBgHgl3EQfFwkF/vector_store/index.faiss b/XNE3T4oBgHgl3EQfFwkF/vector_store/index.faiss new file mode 100644 index 0000000000000000000000000000000000000000..f446b1c0e0415813ba9c3086aa32d74ad4ed2ab5 --- /dev/null +++ b/XNE3T4oBgHgl3EQfFwkF/vector_store/index.faiss @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0ca039cd027a723271a3454df7c9836e05754a79c74e1be6a7f486a4b7c0fcb1 +size 2883629 diff --git a/Z9FRT4oBgHgl3EQfQDdI/content/2301.13520v1.pdf b/Z9FRT4oBgHgl3EQfQDdI/content/2301.13520v1.pdf new file mode 100644 index 0000000000000000000000000000000000000000..562fc9b3abd1c9c08a61d3b76802f4bdfa44260c --- /dev/null +++ b/Z9FRT4oBgHgl3EQfQDdI/content/2301.13520v1.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f04ca44ca862e0e555f7db08b9b93d7f0a4a3fc6712552947492c36a9ce50c59 +size 239899 diff --git a/Z9FRT4oBgHgl3EQfQDdI/vector_store/index.pkl b/Z9FRT4oBgHgl3EQfQDdI/vector_store/index.pkl new file mode 100644 index 0000000000000000000000000000000000000000..bbbdd135cb9d0b6d7ca3e332ee4819148e1681ed --- /dev/null +++ b/Z9FRT4oBgHgl3EQfQDdI/vector_store/index.pkl @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid
sha256:b88d90f691b1ae48e39d0f7dbdb4565d2562bb49deae66b4b48f85386d705139 +size 125015 diff --git a/ZNE3T4oBgHgl3EQfcgp3/vector_store/index.faiss b/ZNE3T4oBgHgl3EQfcgp3/vector_store/index.faiss new file mode 100644 index 0000000000000000000000000000000000000000..bd94780d50629d449c4a64faea3ebf63f4469c35 --- /dev/null +++ b/ZNE3T4oBgHgl3EQfcgp3/vector_store/index.faiss @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9cd2c8457d65e785e0a8b0375abafb06ead27bccdbc9aa9c4aee3583f51346ec +size 720941 diff --git a/ZdFJT4oBgHgl3EQf7i0F/content/2301.11678v1.pdf b/ZdFJT4oBgHgl3EQf7i0F/content/2301.11678v1.pdf new file mode 100644 index 0000000000000000000000000000000000000000..0e2867d26b2ded4e5ecadd290489e983f1954e54 --- /dev/null +++ b/ZdFJT4oBgHgl3EQf7i0F/content/2301.11678v1.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6e9e0c81056ee80c0584ab2975b966e44365ef75d10238fed4efcc0c87164267 +size 330516 diff --git a/ZdFJT4oBgHgl3EQf7i0F/vector_store/index.faiss b/ZdFJT4oBgHgl3EQf7i0F/vector_store/index.faiss new file mode 100644 index 0000000000000000000000000000000000000000..b29967f885b2bd9de1a369b5e3f00f19c3240394 --- /dev/null +++ b/ZdFJT4oBgHgl3EQf7i0F/vector_store/index.faiss @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:88dec637f7a2416174707de61c74b3a90de73f8f25386e11f6d5f09114b6c084 +size 3801133 diff --git a/ZdFJT4oBgHgl3EQf7i0F/vector_store/index.pkl b/ZdFJT4oBgHgl3EQf7i0F/vector_store/index.pkl new file mode 100644 index 0000000000000000000000000000000000000000..409683dc039f420a7dc85cc9c5ef7687f4a544d6 --- /dev/null +++ b/ZdFJT4oBgHgl3EQf7i0F/vector_store/index.pkl @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9f12dfdf66f3e0b19690252504910f8c28c0db57f33d83408df8c761c91ba9fb +size 146081 diff --git a/_tA0T4oBgHgl3EQfPf_J/content/tmp_files/2301.02177v1.pdf.txt b/_tA0T4oBgHgl3EQfPf_J/content/tmp_files/2301.02177v1.pdf.txt new file mode 100644 index 
0000000000000000000000000000000000000000..50522bb5928b12000928e0d0c9c9dbfefc52c570 --- /dev/null +++ b/_tA0T4oBgHgl3EQfPf_J/content/tmp_files/2301.02177v1.pdf.txt @@ -0,0 +1,979 @@

arXiv:2301.02177v1 [math.CO] 5 Jan 2023

THE KROMATIC SYMMETRIC FUNCTION: A K-THEORETIC ANALOGUE OF XG

LOGAN CREW, OLIVER PECHENIK, AND SOPHIE SPIRKL

Abstract. Schur functions are a basis of the symmetric function ring that represent Schubert cohomology classes for Grassmannians. Replacing the cohomology ring with K-theory yields a rich combinatorial theory of inhomogeneous deformations, where Schur functions are replaced by their K-analogues, the basis of symmetric Grothendieck functions. We introduce and initiate a theory of the Kromatic symmetric function X̄G, a K-theoretic analogue of the chromatic symmetric function XG of a graph G. The Kromatic symmetric function is a generating series for graph colorings in which vertices may receive any nonempty set of distinct colors such that neighboring color sets are disjoint.

Our main result lifts a theorem of Gasharov (1996) to this setting, showing that when G is a claw-free incomparability graph, X̄G is a positive sum of symmetric Grothendieck functions. This result suggests a topological interpretation of Gasharov's theorem. We then show that the Kromatic symmetric functions of path graphs are not positive in any of several K-analogues of the e-basis of symmetric functions, demonstrating that the Stanley–Stembridge conjecture (1993) does not have such a lift to K-theory and so is unlikely to be amenable to a topological perspective. We also define a vertex-weighted extension of X̄G and show that it admits a deletion–contraction relation. Finally, we give a K-analogue for X̄G of the classic monomial-basis expansion of XG.

1. Introduction

The chromatic symmetric function XG of a graph G was introduced by R. Stanley [Sta95] as a generalization of G.D. Birkhoff's chromatic polynomial [Bir12].
While the chromatic polynomial enumerates proper graph colorings by the number of colors used, XG also records how many times each color is used. A recent boom of research regarding XG has focused on the Stanley–Stembridge conjecture [SS93], which proposes (in a reformulation by M. Guay-Paquet [Gua13]) that unit interval graphs have chromatic symmetric functions that expand positively in the e-basis of the ring Sym of symmetric functions. In the last few years, various special cases of this conjecture have been established through direct combinatorial analysis, including the cases of lollipop graphs [DvW18] and many claw-free graphs [HHT19]. Another approach has been to consider various generalizations of the chromatic symmetric function and corresponding lifts of the Stanley–Stembridge conjecture. Examples of this latter approach include the chromatic quasisymmetric function and Shareshian–Wachs conjecture of [SW16] (further studied in [AN21, AS22, CH22, CMP23]), the chromatic nonsymmetric functions of J. Haglund–A. Wilson [HW20] (further studied in [TWZ22]), and D. Gebhard–B. Sagan's [GS01] chromatic symmetric function in noncommuting variables combined with notions of (e)-positivity and appendable (e)-positivity (further studied in [AWvW21, Dah19, DvW20]). Our work provides a novel generalization of XG in the same vein.

An important appearance of the ring of symmetric functions Sym is as the cohomology of complex Grassmannians (parameter spaces for linear subspaces of a vector space) or, more precisely, of the classifying space BU. Here, the Schubert classes derived from a natural cell decomposition of BU are represented by the Schur function basis sλ of Sym. A richer perspective into the topology of BU is obtained by replacing cohomology with a generalized cohomology theory. In particular, there has been much focus on studying the associated combinatorics of the K-theory ring (see [Buc02, MPS21, PY17, TY09]).
In this context, many of the classical objects of symmetric function theory are seen to have interesting K-analogues, often resembling "superpositions" of classical objects. For example, classical semistandard Young tableaux are replaced by set-valued tableaux (allowing multiple labels per cell), while Schur functions are replaced by Grothendieck polynomials s̄λ (inhomogeneous deformations of sλ).

Date: January 6, 2023.
2020 Mathematics Subject Classification. 05C15, 05C31, 05E05.
Key words and phrases. chromatic symmetric function, Grothendieck polynomial, K-theory, deletion–contraction relation, Stanley–Stembridge conjecture.

Our work introduces a K-analogue of the chromatic symmetric function XG, enumerating colorings of the graph G that assign a nonempty set of distinct colors to each vertex such that adjacent vertices receive disjoint sets. While our Kromatic symmetric function X̄G is new, similar functions have been previously considered. The first such function was originally discussed by R. Stanley [Sta98] in the context of graph analogues of symmetric functions, with connections to the real-rootedness of polynomials. Recently, as part of his effort to refine Schur-positivity results and the Stanley–Stembridge conjecture, B.-H. Hwang [Hwa22] studied a similar quasisymmetric function for graphs endowed with a fixed map α : V(G) → N that dictates the size of the set of colors each vertex receives. To connect chromatic quasisymmetric functions of vertex-weighted graphs to horizontal-strip LLT polynomials, F. Tom [Tom21] has considered a variant for fixed α with repeated colors allowed. Our work appears to be the first to connect these ideas to the combinatorics of K-theoretic Schubert calculus. (However, [NS17] is similar in spirit to our work, developing a K-theoretic analogue of the Postnikov–Shapiro algebra [PS04], an apparently unrelated invariant of graphs.)
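The coloring rule behind the Kromatic function is easy to check on a small example. The sketch below is our own brute-force enumerator (the graph encoding and function names are ours, not the paper's): it lists all assignments of nonempty color sets to the vertices of a 3-vertex path such that adjacent vertices receive disjoint sets.

```python
from itertools import combinations, product

def nonempty_subsets(colors):
    """All nonempty subsets of the color palette, as frozensets."""
    return [frozenset(c)
            for r in range(1, len(colors) + 1)
            for c in combinations(colors, r)]

def set_colorings(vertices, edges, num_colors):
    """Proper set-colorings: each vertex gets a nonempty set of colors
    from {1, ..., num_colors}; adjacent vertices get disjoint sets."""
    palette = nonempty_subsets(range(1, num_colors + 1))
    for assignment in product(palette, repeat=len(vertices)):
        coloring = dict(zip(vertices, assignment))
        if all(coloring[u].isdisjoint(coloring[v]) for u, v in edges):
            yield coloring

# Path P3:  1 -- 2 -- 3
vertices, edges = [1, 2, 3], [(1, 2), (2, 3)]

print(sum(1 for _ in set_colorings(vertices, edges, 2)))  # 2
print(sum(1 for _ in set_colorings(vertices, edges, 3)))  # 30
# Restricting to singleton color sets recovers ordinary proper colorings:
print(sum(1 for c in set_colorings(vertices, edges, 3)
          if all(len(s) == 1 for s in c.values())))  # 12 = 3 * 2 * 2
```

Weighting each coloring by one variable xᵢ per use of color i (rather than just counting) would assemble the generating series itself; counting colorings is the simplest specialization and already shows that singleton colorings, i.e., ordinary proper colorings, sit inside the set-coloring model.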
In this paper, having introduced the Kromatic symmetric function, we begin to develop its combinatorial theory. We show that the Kromatic symmetric function X̄G for any graph G expands positively in a K-theoretic analogue (that we also introduce) of the monomial basis of Sym. In this expansion, the coefficients enumerate coverings of the graph by (possibly overlapping) stable sets. We further extend the definition of X̄G to a vertex-weighted setting, where we give a deletion–contraction relation analogous to that developed by the first and last authors [CS20] for the vertex-weighted version of XG.

Our main result is that the Kromatic symmetric function of a claw-free incomparability graph expands positively in the symmetric Grothendieck basis s̄λ of Sym, lifting to K-theory a celebrated result of V. Gasharov [Gas96] that such graphs have Schur-positive chromatic symmetric functions. While all known proofs of Gasharov's theorem are representation-theoretic or purely combinatorial, the existence of our K-theoretic analogue suggests that both results likely also have an interpretation in terms of the topology of Grassmannians. Precisely, for each claw-free incomparability graph G, there should be a subvariety of the Grassmannian whose cohomology class is represented by XG and whose K-theoretic structure sheaf class is represented by X̄G. It would be very interesting to have an explicit construction of such subvarieties.

On the other hand, we show that the Kromatic symmetric functions X̄Pn of path graphs Pn generally do not expand positively in either of two K-theoretic deformations we propose for the e-basis of Sym. This fact suggests that the Stanley–Stembridge conjecture, if true, is not naturally interpreted in terms of the cohomology of Grassmannians and is unlikely to be amenable to such topological tools from Schubert calculus.
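For orientation, the classical deletion–contraction recurrence for the chromatic polynomial, which relations of this kind for XG refine, can be verified by brute force. The sketch below is our own (the graph encoding is an assumption, not the paper's notation); it checks P(G, k) = P(G − e, k) − P(G/e, k) on a triangle.

```python
from itertools import product

def chromatic_count(vertices, edges, k):
    """Number of proper k-colorings of a graph, by brute force."""
    pos = {v: i for i, v in enumerate(vertices)}
    return sum(
        1
        for col in product(range(k), repeat=len(vertices))
        if all(col[pos[u]] != col[pos[v]] for u, v in edges)
    )

def delete_edge(edges, e):
    return [f for f in edges if f != e]

def contract_edge(vertices, edges, e):
    """Contract e = (u, v): merge v into u, dropping loops and multi-edges."""
    u, v = e
    rel = lambda w: u if w == v else w
    new_edges = {tuple(sorted((rel(a), rel(b))))
                 for a, b in edges if rel(a) != rel(b)}
    return [w for w in vertices if w != v], sorted(new_edges)

# Verify P(G, k) = P(G - e, k) - P(G / e, k) on the triangle K3.
V, E, e = [1, 2, 3], [(1, 2), (2, 3), (1, 3)], (1, 2)
for k in range(6):
    assert chromatic_count(V, E, k) == (
        chromatic_count(V, delete_edge(E, e), k)
        - chromatic_count(*contract_edge(V, E, e), k))
print(chromatic_count(V, E, 3))  # 6
```

The same two graph operations (with vertex weights added under contraction) are what a deletion–contraction relation for a symmetric-function invariant must track, which is why the vertex-weighted setting is the natural home for such a relation.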
We hope these observations can play a role similar to [DFvW20] in limiting the range of potential avenues of attack on the Stanley–Stembridge conjecture.

This paper is organized as follows. In Section 2, we provide an overview of the background and notation used from symmetric function theory (Section 2.1), K-theoretic Schubert calculus (Section 2.2), and graph theory (Section 2.3). In Section 3, we formally introduce the Kromatic symmetric function X̄G and give its basic properties, including a formula for its expansion in a new K-analogue of the monomial basis of Sym and a deletion–contraction relation for a vertex-weighted generalization. We also give our main theorem, that the Kromatic symmetric functions of claw-free incomparability graphs expand positively in symmetric Grothendieck functions, lifting the main result of [Gas96]. In Section 4, we introduce two different K-theoretic analogues of the e-basis of Sym and show that the Kromatic symmetric function X̄P3 of the 3-vertex path graph P3 is not positive in either analogue, casting doubt on hopes for a Schubert calculus-based approach to the Stanley–Stembridge conjecture.

2. Background

Throughout this work, N denotes the set of (strictly) positive integers. We write [n] for the set of positive integers {1, 2, . . . , n}. If S is any set, 2^S denotes the power set of all subsets of S.

2.1. Partitions and symmetric functions. In this section, we give a brief overview of the necessary background material. Further details can be found in the textbooks of Stanley [SF99], Manivel [Man01], and Macdonald [Mac98].

An integer partition λ = (λ1 ≥ λ2 ≥ · · · ≥ λk) is a finite nonincreasing sequence of positive integers. We define ℓ(λ) to be the length of the sequence λ (so above, ℓ(λ) = k).

THE KROMATIC SYMMETRIC FUNCTION: A K-THEORETIC ANALOGUE OF XG

We define ri(λ) to be the number of occurrences of i as a part of λ (so, for example, r1(2, 1, 1, 1) = 3).
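In code, this bookkeeping on partitions amounts to one-liners; the following sketch (ours, for illustration only) mirrors the definitions of ℓ(λ), ri(λ), and λ ⊢ n just given.

```python
def ell(la):
    """ℓ(λ): the number of parts of the partition λ."""
    return len(la)

def r(i, la):
    """r_i(λ): the number of occurrences of i as a part of λ."""
    return sum(1 for part in la if part == i)

def is_partition_of(la, n):
    """λ ⊢ n: nonincreasing positive parts summing to n."""
    return (all(p > 0 for p in la)
            and all(la[j] >= la[j + 1] for j in range(len(la) - 1))
            and sum(la) == n)

la = (2, 1, 1, 1)
print(ell(la))                 # 4
print(r(1, la))                # 3, matching the running example
print(is_partition_of(la, 5))  # True
```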
If λ1 + · · · + λℓ(λ) = n, we say that λ is a partition of n, and we write λ ⊢ n. The Young diagram of shape λ is a set of squares called cells, left- and top-justified (that is, in "English notation"), such that the ith row from the top contains λi cells. (For example, the Young diagram of shape (2, 2, 1) has rows of lengths 2, 2, and 1.) Let C(λ) denote the set of cells of the Young diagram of shape λ. If c ∈ C(λ) is a cell of the Young diagram of shape λ, we write c↑ for the cell immediately above c (assuming it exists), c→ for the cell immediately right of c, and so on. We write λT for the transpose of λ, the integer partition whose Young diagram is obtained from that of λ by exchanging rows and columns.

Let S_N denote the set of all permutations of the set N fixing all but finitely many elements. A symmetric function f ∈ C[[x1, x2, . . .]] is a power series of bounded degree such that for each permutation σ ∈ S_N, we have f(x1, x2, . . .) = f(xσ(1), xσ(2), . . .). The set Sym ⊂ C[[x1, x2, . . .]] of symmetric functions forms a C-vector space. Furthermore, if Sym_d denotes the set of symmetric functions that are homogeneous of degree d, then each Sym_d is a vector space, and

Sym = ⨁_{d=0}^{∞} Sym_d

as graded vector spaces. The dimension of Sym_d as a C-vector space is equal to the number of integer partitions of d, and many bases of symmetric functions are conveniently indexed by integer partitions. Below we list some commonly used bases that will appear in this paper.

Definition 2.1. The following are bases of Sym:
• the monomial symmetric functions {mλ}, defined as

mλ = Σ x_{i1}^{λ1} · · · x_{iℓ(λ)}^{λℓ(λ)},

where the sum ranges over all distinct monomials formed by choosing distinct positive integers i1, . . . , iℓ(λ);
• the augmented monomial symmetric functions {m̃λ}, defined as

m̃λ = (Π_{i=1}^{∞} ri(λ)!) mλ;

• the elementary symmetric functions {eλ}, defined by

e_n = Σ_{i1 < ··· < in} x_{i1} · · · x_{in}    and    eλ = e_{λ1} · · · e_{λℓ(λ)};

if j > 1, then a_{i(j−1)} ≠ ∅ and a_{i(j−1)} <_P a_{ij}; and if i >

1, then a_{(i−1)j} ≠ ∅ and a_{(i−1)j} ≱_P a_{ij}.

Under the interpretation of A as a partial matrix, this condition means that the partial filling takes the shape of a Young diagram in English orientation and that moreover the columns of A are nondecreasing (a "semistandardness" condition).

As noted in [Gas96], each proper α-coloring κ of G corresponds to a P-array Aκ by filling row i of Aκ with the elements of κ−1(i) in their unique P-increasing order. Thus, for any partition µ, [mµ]X(G,α) is the number of distinct P-arrays A whose nonempty positions correspond to the Young diagram of the partition µ, and where for each v ∈ P, the number of entries equal to v is exactly α(v). We say that such P-arrays have shape µ and content α, and denote the number of such P-arrays by N_P(µ, α). Similarly, we write T_P(µ, α) for the number of P-tableaux of shape µ and content α. Thus, we may rewrite Equation (3.5) as

[s̄λ]X̄G = Σ_{π∈Sk} sgn(π) Σ_{q1,...,qk} ((0 q1)) · · · ((k−1 qk)) Σ_{|α|=n−Q} N_P(τ(λ, π, q1, . . . , qk), α)
        = Σ_{q1,...,qk} ((0 q1)) · · · ((k−1 qk)) Σ_{|α|=n−Q} Σ_{π∈Sk} sgn(π) N_P(τ(λ, π, q1, . . . , qk), α).

As part of the proof of [Gas96, Theorem 3], Gasharov shows that for any partition λ,

Σ_{π∈Sk} sgn(π) N_P(τ(λ, π, q1, . . . , qk), α) = T_P(τ(λ, id_{Sk}, q1, . . . , qk), α),

the number of P-tableaux whose shape is the Young diagram with row lengths {λ1 − q1, . . . , λk − qk} and whose content is α. Thus,

[s̄λ]X̄G = Σ_{q1,...,qk, Q≤n} ((0 q1)) · · · ((k−1 qk)) Σ_{|α|=n−Q} T_P(τ(λ, id_{Sk}, q1, . . . , qk), α),

which is a nonnegative integer. Since this is true for every partition λ, the Kromatic symmetric function X̄G is Grothendieck-positive.
□

Note that the proof of Theorem 3.6 gives an (effective, but somewhat complicated) formula for the coefficients of the symmetric Grothendieck functions s̄λ in the expansion of the Kromatic symmetric function X̄G for G a claw-free incomparability graph.

It is highly suggestive that Theorem 3.6 (and Gasharov's Schur analogue) should have an interpretation and proof via the topology of Grassmannians. We would be very interested in a solution to the following.

Problem 3.7. For each claw-free incomparability graph G, find a corresponding subvariety VG of the Grassmannian such that the cohomology class of VG is represented in Sym by XG and the structure sheaf class of VG is represented by X̄G.

4. Analogues of the Stanley–Stembridge conjecture

The previous section shows that Schur-positivity of XG when G is the incomparability graph of a (3+1)-free poset lifts to an analogue for X̄G. It is natural to ask if it is similarly possible to lift the Stanley–Stembridge conjecture (which claims that such XG are e-positive) to the context of the Kromatic symmetric function. However, it appears that the answer is "no."

We propose two definitions for a lift of the e-basis to the K-theoretic setting. On one hand, e-basis elements in usual symmetric function theory may be defined in terms of fillings of single-column Young diagrams, so we may lift this formula.

Definition 4.1. The tableau K-elementary symmetric functions ēλ are given by

ēn = s̄_{1^n}    and    ēλ = ē_{λ1} · · · ē_{λℓ(λ)}.

On the other hand, we may also define en = (1/n!) X_{Kn}, and lift this characterization instead.

Definition 4.2. The graph K-elementary symmetric functions ē′λ are given by

ē′_n = (1/n!) X̄_{Kn}    and    ē′_λ = ē′_{λ1} · · · ē′_{λℓ(λ)}.

It is reasonable to hope (with a view toward extending the Stanley–Stembridge conjecture) that X̄G is positive in one of these K-theoretic e-bases whenever G is a claw-free incomparability graph, or even just when G is a unit interval graph.
However, one can compute that X̄_{P3} is not positive in either K-theoretic e-basis {ēλ} or {ē′λ}, dashing any such hopes.

The terms of X̄_{P3} that are homogeneous of degree 3 must come from tableau or graph K-elementary symmetric functions of degree 3, and their coefficients correspond to the e-expansion of X_{P3}. Since X_{P3} = 3e3 + e21, one sees that the terms of X̄_{P3} for |λ| = 3 in the ē-basis are 3ē3 + ē21, and in the ē′-basis are 3ē′3 + ē′21. However, we now encounter problems with the |λ| = 4 terms. In particular, both ē21 and ē′21 are supported on the monomial x1^2 x2^2, with two distinct variables each of degree 2. However, it is easy to check that there is no proper set coloring of P3 using color 1 exactly twice and color 2 exactly twice; thus, these monomials must be cancelled by ēµ or ē′µ terms with strictly negative coefficients.

That this breakdown is so fundamental suggests that it may not be possible to reasonably generalize e-positivity to the Kromatic symmetric function, in stark contrast with the generalization of Schur-positivity given in Theorem 3.6. This suggests that the Stanley–Stembridge conjecture is not amenable to a topological interpretation along the lines of Problem 3.7.

Acknowledgements

We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), [funding reference numbers RGPIN-2020-03912, RGPIN-2021-00010, and RGPIN-2022-03093].

Cette recherche a été financée par le Conseil de recherches en sciences naturelles et en génie du Canada (CRSNG), [numéros de référence RGPIN-2020-03912, RGPIN-2021-00010, et RGPIN-2022-03093].

This project was funded in part by the Government of Ontario.

References

[AN21] Alex Abreu and Antonio Nigro, Chromatic symmetric functions from the modular law, Journal of Combinatorial Theory, Series A 180 (2021), 105407.
[AS22] Per Alexandersson and Robin Sulzgruber, A combinatorial expansion of vertical-strip LLT polynomials in the basis of elementary symmetric functions, Advances in Mathematics 400 (2022), 108256.
[AWvW21] Farid Aliniaeifard, Victor Wang, and Stephanie van Willigenburg, The chromatic symmetric function of a graph centred at a vertex, preprint (2021), arXiv:2108.04850.
[Bir12] George D. Birkhoff, A determinant formula for the number of ways of coloring a map, Annals of Mathematics 14 (1912), no. 1/4, 42–46.
[Buc02] Anders Skovsted Buch, A Littlewood–Richardson rule for the K-theory of Grassmannians, Acta Mathematica 189 (2002), no. 1, 37–78.
[CH22] Soojin Cho and Jaehyun Hong, Positivity of chromatic symmetric functions associated with Hessenberg functions of bounce number 3, Electronic Journal of Combinatorics (2022), Paper No. P2.19.
[CMP23] Laura Colmenarejo, Alejandro H. Morales, and Greta Panova, Chromatic symmetric functions of Dyck paths and q-rook theory, European Journal of Combinatorics 107 (2023), Paper No. 103595, 36 pages.
[CS20] Logan Crew and Sophie Spirkl, A deletion–contraction relation for the chromatic symmetric function, European Journal of Combinatorics 89 (2020), 103143.
[Dah19] Samantha Dahlberg, A new formula for Stanley's chromatic symmetric function for unit interval graphs and e-positivity for triangular ladder graphs, Séminaire Lotharingien de Combinatoire 82 (2019).
[DFvW20] Samantha Dahlberg, Angèle Foley, and Stephanie van Willigenburg, Resolving Stanley's e-positivity of claw-contractible-free graphs, Journal of the European Mathematical Society (JEMS) 22 (2020), no. 8, 2673–2696.
[Die17] Reinhard Diestel, Graph theory, fifth ed., Graduate Texts in Mathematics, vol. 173, Springer, Berlin, 2017.
[DvW18] Samantha Dahlberg and Stephanie van Willigenburg, Lollipop and lariat symmetric functions, SIAM Journal on Discrete Mathematics 32 (2018), no. 2, 1029–1039.
[DvW20] ______, Chromatic symmetric functions in noncommuting variables revisited, Advances in Applied Mathematics 112 (2020), 101942.
[Gas96] Vesselin Gasharov, Incomparability graphs of (3 + 1)-free posets are s-positive, Discrete Mathematics 157 (1996), no. 1-3, 193–197.
[GS01] David D. Gebhard and Bruce E. Sagan, A chromatic symmetric function in noncommuting variables, Journal of Algebraic Combinatorics 13 (2001), no. 3, 227–255.
[Gua13] Mathieu Guay-Paquet, A modular relation for the chromatic symmetric functions of (3 + 1)-free posets, preprint (2013), arXiv:1306.2400.
[HHT19] Angèle M. Hamel, Chính T. Hoàng, and Jake E. Tuero, Chromatic symmetric functions and H-free graphs, Graphs and Combinatorics 35 (2019), no. 4, 815–825.
[HW20] James Haglund and Andrew Timothy Wilson, Macdonald polynomials and chromatic quasisymmetric functions, Electronic Journal of Combinatorics 27 (2020), no. 3, Paper No. 3.37, 21 pages.
[Hwa22] Byung-Hak Hwang, Chromatic quasisymmetric functions and noncommutative P-symmetric functions, preprint (2022), arXiv:2208.09857.
[Iwa20] Shinsuke Iwao, Grothendieck polynomials and the boson-fermion correspondence, Algebraic Combinatorics 3 (2020), no. 5, 1023–1040.
[LN14] Alain Lascoux and Hiroshi Naruse, Finite sum Cauchy identity for dual Grothendieck polynomials, Japan Academy. Proceedings. Series A. Mathematical Sciences 90 (2014), no. 7, 87–91.
[LP07] Thomas Lam and Pavlo Pylyavskyy, Combinatorial Hopf algebras and K-homology of Grassmannians, International Mathematics Research Notices. IMRN (2007), no. 24, Art. ID rnm125, 48 pages.
[Mac98] Ian G. Macdonald, Symmetric functions and Hall polynomials, Oxford University Press, 1998.
[Man01] Laurent Manivel, Symmetric functions, Schubert polynomials and degeneracy loci, SMF/AMS Texts and Monographs, vol. 6, American Mathematical Society, Providence, RI and Société Mathématique de France, Paris, 2001, Translated from the 1998 French original by John R.
Swallow, Cours Spécialisés, 3.
[MPS21] Cara Monical, Oliver Pechenik, and Dominic Searles, Polynomials from combinatorial K-theory, Canadian Journal of Mathematics 73 (2021), no. 1, 29–62.
[NS17] Gleb Nenashev and Boris Shapiro, "K-theoretic" analog of Postnikov–Shapiro algebra distinguishes graphs, Journal of Combinatorial Theory. Series A 148 (2017), 316–332.
[PS04] Alexander Postnikov and Boris Shapiro, Trees, parking functions, syzygies, and deformations of monomial ideals, Transactions of the American Mathematical Society 356 (2004), no. 8, 3109–3142.
[PY17] Oliver Pechenik and Alexander Yong, Genomic tableaux, Journal of Algebraic Combinatorics 45 (2017), no. 3, 649–685.
[SF99] Richard P. Stanley and S. Fomin, Enumerative Combinatorics, Vol. 2, Cambridge Studies in Advanced Mathematics, vol. 62, Cambridge University Press (1999).
[SS93] Richard P. Stanley and John R. Stembridge, On immanants of Jacobi–Trudi matrices and permutations with restricted position, Journal of Combinatorial Theory. Series A 62 (1993), no. 2, 261–279.
[Sta95] Richard P. Stanley, A symmetric function generalization of the chromatic polynomial of a graph, Advances in Mathematics 111 (1995), no. 1, 166–194.
[Sta98] ______, Graph colorings and related symmetric functions: ideas and applications: a description of results, interesting applications, & notable open problems, Discrete Mathematics 193 (1998), no. 1-3, 267–286.
[SW16] John Shareshian and Michelle L. Wachs, Chromatic quasisymmetric functions, Advances in Mathematics 295 (2016), 497–551.
[Tom21] Foster Tom, Private communication to L. Crew and S. Spirkl, 2021.
[TWZ22] Vasu Tewari, Andrew Timothy Wilson, and Philip B. Zhang, Chromatic nonsymmetric polynomials of Dyck graphs are slide-positive, Proceedings of the American Mathematical Society 150 (2022), no. 5, 1873–1888.
[TY09] Hugh Thomas and Alexander Yong, A jeu de taquin theory for increasing tableaux, with applications to K-theoretic Schubert calculus, Algebra & Number Theory 3 (2009), no. 2, 121–148.
[Wes21] Douglas B. West, Combinatorial mathematics, Cambridge University Press, Cambridge, 2021.
Department of Combinatorics & Optimization, University of Waterloo, Waterloo, ON, N2L 3G1, Canada.
Email address: {lcrew, opecheni, sspirkl}@uwaterloo.ca

diff --git a/_tA0T4oBgHgl3EQfPf_J/content/tmp_files/load_file.txt b/_tA0T4oBgHgl3EQfPf_J/content/tmp_files/load_file.txt
new file mode 100644
index 0000000000000000000000000000000000000000..0fc0fa75c2a8f2ced9ce0fef90538948a7097ce5
--- /dev/null
+++ b/_tA0T4oBgHgl3EQfPf_J/content/tmp_files/load_file.txt
@@ -0,0 +1,622 @@
+filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf,len=621
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'} +page_content=' We define ℓ(λ) to be the length of the sequence λ (so above, ℓ(λ) = k).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'} +page_content=' We define ri(λ) to be the number of THE KROMATIC SYMMETRIC FUNCTION: A K-THEORETIC ANALOGUE OF XG 3 occurrences of i as a part of λ (so, for example, r1(2, 1, 1, 1) = 3).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'} +page_content=' If ℓ(λ) � i=1 λi = n, we say that λ is a partition of n, and we write λ ⊢ n.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'} +page_content=' The Young diagram of shape λ is a set of squares called cells, left- and top-justified (that is, in “English notation”), such that the ith row from the top contains λi cells.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'} +page_content=' For example, the Young diagram of shape (2, 2, 1) is .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'} +page_content=' Let C(λ) denote the set of cells of the Young diagram of shape λ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'} +page_content=' If c ∈ C(λ) is a cell of the Young diagram of shape λ, we write c↑ for the cell immediately above c (assuming it exists), c→ for the cell immediately right of c, and so on.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'} +page_content=' We write λT for the transpose of λ, the integer partition whose Young diagram is obtained from that of λ by exchanging rows and columns.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'} +page_content=' Let SN denote the set of all permutations of the set N fixing all but finitely-many elements.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'} +page_content=' A symmetric function f ∈ C�x1, x2, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'} +page_content=' , � is a power series of bounded degree such that for each permutation σ ∈ SN, we have f(x1, x2, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'} +page_content=' ) = f(xσ(1), xσ(2), .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'} +page_content=' ).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'} +page_content=' The set Sym ⊂ C�x1, x2, .' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'} +page_content='� of symmetric functions forms a C-vector space.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'} +page_content=' Furthermore, if Λd denotes the set of symmetric functions that are homogeneous of degree d, then each Symd is a vector space, and Sym = ∞ � d=0 Symd as graded vector spaces.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'} +page_content=' The dimension of Symd as a C-vector space is equal to the number of integer partitions of d, and many bases of symmetric functions are conveniently indexed by integer partitions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'} +page_content=' Below we provide some commonly used bases that will be used in this paper.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'} +page_content=' Definition 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'} +page_content=' The following are bases of Sym: the monomial symmetric functions {mλ}, defined as mλ = � xλ1 i1 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf'} +page_content=' .' 
Table 1: Summary of the types of NAS search spaces.

• Layer is often used in chain-structured or macro search spaces to denote the same thing as an operation or primitive. However, it sometimes refers to well-known combinations of operations, such as the inverted bottleneck residual (Cai et al., 2019; Sandler et al., 2018; Tan and Le, 2019; Tan et al., 2019).
• Block/Module is sometimes used to denote a sequential stack of layers, following the notation used in most chain-structured and macro search spaces (Cai et al., 2020; Tan and Le, 2019; Tan et al., 2019).
• Cell is used to denote a directed acyclic graph of operations in cell-based search spaces. The maximum number of operations in a cell is often fixed.
• Motif is used to denote a sub-pattern formed from multiple operations in an architecture. Some literature refers to a cell as a higher-level motif and a smaller set of operations as a base-level motif.

2.2 Macro Search Spaces

In the NAS literature, "macro search space" may refer to one of two types. First, it may refer to search spaces that encode the entire architecture in one level (as opposed to cell-based or hierarchical search spaces); these were popular in 2017 and 2018. Second, it may refer to search spaces that focus only on macro-level hyperparameters.

For the former, an entire architecture is represented as a single directed acyclic graph (Baker et al., 2017; Kandasamy et al., 2018; Real et al., 2017; Zoph and Le, 2017). These search spaces typically offer a choice of operation at each node in the graph, as well as a choice of DAG topology. For example, the NASBOT CNN search space (Kandasamy et al., 2018) consists of choices of different convolution, pooling, and fully connected layers, combined with any DAG topology of depth at most 25.

Neural Architecture Search: Insights from 1000 Papers

The second type of macro search space (Dong et al., 2021b; Duan et al., 2021; Tan and Le, 2019) focuses on varying macro-level hyperparameters, such as where and how much to downsample the spatial resolution throughout the architecture, while keeping the architecture topology and operations fixed.2 For example, Tan and Le (2019) propose a CNN search space that varies the network depth, width, and input feature resolution.

Compared to other search spaces, macro search spaces have high representation power: their flexible structure makes it possible to discover novel architectures. However, their main downside is that they are very slow to search.
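As a concrete illustration, a macro search space of the first type can be sampled by drawing a DAG topology together with one operation per node. The sketch below is illustrative only; the operation names and size limit are assumptions, not the space of any particular paper:

```python
import random

OPS = ["conv3x3", "conv5x5", "max_pool", "fully_connected"]  # illustrative ops
MAX_DEPTH = 25  # e.g., the NASBOT CNN space caps depth at 25

def sample_macro_architecture(num_nodes, seed=None):
    """Sample one architecture: an operation per node plus a random DAG.

    Nodes are kept in topological order, so every edge (src, dst) with
    src < dst is acyclic by construction.
    """
    assert 2 <= num_nodes <= MAX_DEPTH
    rng = random.Random(seed)
    ops = [rng.choice(OPS) for _ in range(num_nodes)]
    edges = []
    for dst in range(1, num_nodes):
        # every non-input node gets at least one incoming edge
        preds = rng.sample(range(dst), k=rng.randint(1, dst))
        edges.extend((src, dst) for src in sorted(preds))
    return ops, edges

ops, edges = sample_macro_architecture(num_nodes=6, seed=0)
assert all(src < dst for src, dst in edges)  # topological order implies acyclic
```

Because both the topology and the per-node operations vary, even this toy sampler covers a combinatorially large space, which is exactly what makes such spaces expressive but slow to search.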
In the next two sections, we discuss types of search spaces that have more rigidity, making them faster to search.

2.3 Chain-Structured Search Spaces

Chain-structured search spaces, as the name suggests, have a simple architecture topology: a sequential chain of operation layers. They often take state-of-the-art manual designs, such as ResNet (He et al., 2016b) or MobileNets (Howard et al., 2017), as the backbone.

There are several chain-structured search spaces based on convolutional networks. ProxylessNAS (Cai et al., 2019) starts with the MobileNetV2 (Sandler et al., 2018) architecture and searches over the kernel sizes and expansion ratios in the inverted bottleneck residual layers. XD (Roberts et al., 2021) and DASH (Shen et al., 2022) start with a LeNet (LeCun et al., 1999), ResNet (He et al., 2016a), or WideResNet (Zagoruyko and Komodakis, 2016), and search over an expressive generalization of convolutions based on Kaleidoscope matrices (Dao et al., 2020), or over kernel sizes and dilations, respectively.

Chain-structured designs are also popular in transformer-based search spaces. For example, the search space from Lightweight Transformer Search (LTS) (Javaheripi et al., 2022) consists of a chain-structured configuration of the popular GPT family of architectures (Brown et al., 2020; Radford et al., 2019) for autoregressive language modeling, with searchable choices for the number of layers, model dimension, adaptive embedding dimension, dimension of the feedforward network in each transformer layer, and number of heads in each transformer layer. The search spaces from NAS-BERT (Xu et al., 2021a) and MAGIC (Xu et al., 2022) both consist of a chain-structured search space over the BERT architecture (Devlin et al., 2019), with up to 26 operation choices consisting of variants of multi-head attention, feedforward layers, and convolutions with different kernel sizes.
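Because the topology is a fixed chain, an architecture is fully described by its per-layer hyperparameters. The sketch below enumerates a toy ProxylessNAS-style chain of four layers, each choosing a kernel size and an expansion ratio; the particular values are illustrative assumptions, not the actual ProxylessNAS space:

```python
from itertools import product

# Illustrative per-layer choices; the real ProxylessNAS space differs.
KERNEL_SIZES = [3, 5, 7]
EXPANSION_RATIOS = [3, 6]
NUM_LAYERS = 4

LAYER_CHOICES = list(product(KERNEL_SIZES, EXPANSION_RATIOS))  # 6 options per layer

def enumerate_chain_space():
    """Yield each architecture as a tuple of (kernel, expansion) per layer."""
    yield from product(LAYER_CHOICES, repeat=NUM_LAYERS)

size = len(LAYER_CHOICES) ** NUM_LAYERS
assert size == 1296  # 6^4 architectures in this toy space
```

The flat, per-layer structure is what makes these spaces easy to implement: there is no topology to validate, only a grid of hyperparameter choices.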
Chain-structured search spaces are conceptually simple, making them easy to design and implement. They also often contain strong architectures that can be found relatively quickly. Their main downside is that, due to the simple architecture topology, there is a comparatively lower chance of discovering a truly novel architecture.

2.4 Cell-Based Search Spaces

The cell-based search space is perhaps the most popular type of search space in NAS. It is inspired by the fact that state-of-the-art human-designed CNNs often consist of repeated patterns, for example, residual blocks in ResNets (Zoph et al., 2018). Thus, instead of searching for the entire network architecture from scratch, Zoph et al. (2018) proposed to search only over relatively small cells, and to stack the cells several times in sequence to form the overall architecture. Formally, the searchable cells make up the micro structure of the search space, while the outer skeleton (the macro structure) is fixed.

2. Strictly speaking, since these search spaces have a fixed architecture topology, they may also be called hyperparameter tuning search spaces instead of NAS search spaces.

White, Safari, Sukthanker, Ru, Elsken, Zela, Dey and Hutter

Figure 3: Illustration of cell-based search spaces. The outer skeleton across cells (left) is fixed, while the cells are searchable. NASNet assigns operations to nodes (middle) while DARTS assigns operations to edges (right).

The first modern cell-based search space, NASNet, was proposed by Zoph et al. (2018). It comprises two types of cells: the normal cell and the reduction cell.
Both types have the same structure, but the initial operations in the reduction cell have a stride of two to halve the input spatial resolution. Each NASNet cell can be represented as a DAG with seventeen non-input nodes (see Figure 3 (middle)). The nodes are arranged in triples of two operation nodes (such as convolution and pooling operations) and a combination node (such as addition or concatenation). The final NASNet architecture is formed by stacking multiple normal and reduction cells in sequence (see Figure 3 (left)). Overall, there are 10^35 unique architectures in the NASNet search space.

Since the NASNet search space, many other cell-based search spaces have been proposed, all of which share a high-level similarity to NASNet; the main differences are the fixed macro structure, the layout and constraints within the cells, and the choices of operations within the cells. Two of the most popular cell-based search spaces are NAS-Bench-101 (Ying et al., 2019) and the DARTS search space (Liu et al., 2019c). NAS-Bench-101 is the first tabular benchmark for NAS (discussed in Section 8), and its cells consist of seven nodes, each with three choices of operations; it contains 423,624 unique architectures. The DARTS search space differs more fundamentally: while it also has two searchable cells, the DARTS cells have operation choices on the edges of the graph rather than on the nodes. In the DARTS cell, the nodes represent latent representations and the edges are operations, whereas in the NASNet cell, the latent representations are on the edges and the nodes are operations. The DARTS cells (see Figure 3 (right)) contain eight edges, each of which has eight choices of operations. Overall, the DARTS space contains a total of 10^18 unique architectures.

Besides image classification, similar cell designs have also been adopted for language models.
For example, NAS-Bench-ASR (Mehrotra et al., 2021) provides a search space of convolutional speech model cells for automatic speech recognition, and there are several LSTM-based search spaces (Klyuchnikov et al., 2022; Liu et al., 2019c; Pham et al., 2018).

The cell-based design significantly reduces the complexity of search spaces, while often resulting in a high-performing final architecture. This has made cell-based search spaces the most popular type of search space in recent years. Furthermore, by detaching the depth of an architecture from the search, the cell-based structure is transferable: the optimal cells learned on a small dataset (e.g., CIFAR-10) typically transfer well to a large dataset (e.g., ImageNet) by increasing the number of cells and filters in the overall architecture (Liu et al., 2019c; Zoph et al., 2018).

Despite their popularity, cell-based search spaces face some criticisms. First, while the DARTS search space contains a seemingly large number of 10^18 architectures, the variance in the performance of DARTS architectures is rather small (Wan et al., 2022b; Yang et al., 2020). This small variance may explain why sophisticated search strategies give only marginal gains over the average performance of randomly sampled architectures (Yang et al., 2020). Moreover, cell-based search spaces come with many ad-hoc design choices and fixed hyperparameters whose impact is unclear (Wan et al., 2022b), such as the separation of normal and reduction cells, the number of nodes, and the set of operations. Finally, although limiting the search to a cell significantly reduces the search complexity, this practice reduces the expressiveness of the NAS search space, making it difficult to find highly novel architectures with cell-based search spaces. In light of this, some recent work advocates searching for macro connections among cells in addition to the micro cell structure.
We discuss this in more detail in the next section.

2.5 Hierarchical Search Spaces

Up to this point, all of the search spaces described have had a flat representation, in which an architecture is built by defining its hyperparameters, topology, and operation primitives in a single design level. Specifically, only one level of topology is searched, whether at the cell level or the architecture level. Hierarchical search spaces, by contrast, involve designing motifs at different levels, where each higher-level motif is often represented as a DAG of lower-level motifs (Chrostoforidis et al., 2021; Liu et al., 2018b; Ru et al., 2020b).

A simple class of hierarchical search spaces has two searchable levels, obtained by adding macro-level architecture hyperparameters to cell-based or chain-structured search spaces. For example, the MnasNet search space (Tan et al., 2019) uses MobileNetV2 as the backbone. Liu et al. (2019b) designed a two-level search space for semantic image segmentation, and follow-up work extended it to image denoising (Zhang et al., 2020a) and stereo matching (Kumari and Kaur, 2016). Finally, Chen et al. (2021a) propose a two-level transformer-based search space for vision tasks inspired by ViT (Dosovitskiy et al., 2021) and DeiT (Touvron et al., 2021). The search space consists of a number of sequential blocks, each of which can be a combination of local (convolution) or global (self-attention) layers.

Beyond two levels, Liu et al. (2018b) and Wu et al. (2021) propose hierarchies of three levels. Liu et al. (2018b) propose a three-level hierarchy, where each level is a graph made up of components from the previous level (see Figure 4). Wu et al. (2021) propose a different three-level hierarchy, consisting of kernel hyperparameters, cell-based hyperparameters, and macro hyperparameters. The former design is extended beyond three levels in two follow-up works: Ru et al.
(2020b) proposed a hierarchical design of four levels, controlled by a set of hyperparameters corresponding to a random graph generator, and Chrostoforidis et al. (2021) introduced a recursive building process that permits a varying number of hierarchical levels as well as a flexible topology among top-level motifs.

Figure 4: Illustration of the hierarchical representation proposed in Liu et al. (2018b). Level 1 of the hierarchy consists of choices of operation primitives. Level 2 consists of selecting the topology across small sets of operation primitives. Level 3 consists of selecting the topology across the constructions from level 2.

There are multiple benefits to using hierarchical search spaces. First, hierarchical search spaces tend to be more expressive. Most chain-structured, cell-based, and macro search spaces can be seen as a hierarchical search space with a single searchable level, but having two or more levels allows us to search over more diverse and complex architecture designs. Furthermore, a hierarchical representation of a large architecture is an effective way to reduce the search complexity, which can lead to better search efficiency (Chrostoforidis et al., 2021; Liu et al., 2018b; Ru et al., 2020b). On the other hand, hierarchical search spaces can be more challenging to implement and search through.

2.6 Architecture Encodings

Throughout this section, we have discussed a wide variety of NAS search spaces.
As a segue into the next two sections, which focus on search strategies, we note that many NAS algorithms and subroutines need a succinct representation of each architecture, or encoding, in order to perform operations such as mutating an architecture, quantifying the similarity between two architectures, or predicting the test performance of an architecture. This makes architecture encodings important for several areas of NAS, including discrete NAS algorithms (Section 3) and performance prediction (Section 5.1).

In most search spaces, an architecture can be represented compactly as a directed acyclic graph (DAG), where each node or edge represents an operation. For example, architectures in cell-based search spaces and chain-structured search spaces can be represented in this way. However, hierarchical search spaces cannot be represented fully using a DAG, and often need a conditionally-structured encoding, where the number of levels of conditional hyperparameters corresponds to the number of levels of the hierarchy.

For cell-based search spaces, one of the most commonly-used encodings is the adjacency matrix of the searchable cell(s), along with a list of operations (Ying et al., 2019; Zoph and Le, 2017). In order to achieve better generalizability, Ning et al. (2020) proposed a graph-based encoding scheme and White et al. (2021a) proposed a path-based encoding scheme, both of which model the flow of information propagating through the network. Finally, another type of encoding, applicable to all search spaces, is a learned encoding using unsupervised pre-training. In this technique, before running NAS, we use a set of untrained architectures to learn an architecture encoding, for example, with an autoencoder (Li et al., 2020b; Lukasik et al., 2021, 2022; Yan et al., 2020; Zhang et al., 2019) or a transformer (Yan et al., 2021a).
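For instance, the adjacency-matrix-plus-operations encoding of a small cell can be sketched as follows. The three-operation vocabulary mirrors the style of NAS-Bench-101, but the exact labels and layout here are illustrative assumptions:

```python
OPS = ["conv1x1", "conv3x3", "maxpool3x3"]  # illustrative operation vocabulary

def encode_cell(adjacency, ops):
    """Encode a cell as a flat vector: the upper triangle of the adjacency
    matrix (read row by row) followed by integer-coded node operations."""
    n = len(adjacency)
    triu = [adjacency[i][j] for i in range(n) for j in range(i + 1, n)]
    return triu + [OPS.index(op) for op in ops]

# A 3-node cell with edges 0->1, 0->2, and 1->2:
adjacency = [[0, 1, 1],
             [0, 0, 1],
             [0, 0, 0]]
ops = ["conv3x3", "maxpool3x3", "conv1x1"]

encoding = encode_cell(adjacency, ops)
assert encoding == [1, 1, 1, 1, 2, 0]
```

Such flat encodings make subroutines like mutation trivial: flipping one entry of the vector perturbs exactly one edge or one operation.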
When choosing an architecture encoding, scalability and generalizability are important traits. Recent work has shown that different NAS subroutines, such as sampling a random architecture, perturbing an architecture, or training a surrogate model, may each perform best with different encodings (White et al., 2020). Furthermore, even small changes to the architecture encoding scheme can have significant effects on the performance of NAS (White et al., 2020; Ying et al., 2019).

3. Black-Box Optimization Techniques

Now that we have covered search spaces, we move to perhaps the most widely-studied component of NAS: the search strategy. This is what we run to find an optimal architecture in the search space. Search strategies generally fall into two categories: black-box optimization techniques and one-shot techniques. However, some methods that we discuss include characteristics of both, or neither, of these categories. We first discuss black-box optimization techniques in this section, followed by one-shot techniques in Section 4.

For black-box optimization, we discuss baselines (Section 3.1), reinforcement learning (Section 3.2), evolution (Section 3.3), Bayesian optimization (Section 3.4), and Monte-Carlo tree search (Section 3.5). Black-box optimization techniques are widely used and studied today due to their strong performance and ease of use. In general, black-box optimization techniques tend to use more computational resources than one-shot techniques, because they train many architectures independently (without sharing weights across architectures as one-shot techniques do). However, they also have many advantages over one-shot techniques, such as robustness (and the lack of catastrophic failure modes), simpler optimization of non-differentiable objectives, simpler parallelism, joint optimization with other hyperparameters, and easier adaptation to, e.g., new problems, datasets, or search spaces.
They are also often conceptually simpler, making them easier to implement and use.

3.1 Baselines

One of the simplest possible baselines for NAS is random search: architectures are selected randomly from the search space and then fully trained. At the end, the architecture with the best validation accuracy is output. Despite its naïveté, multiple papers have shown that random search performs surprisingly well (Chen et al., 2018; Li and Talwalkar, 2019; Sciuto et al., 2020; Yang et al., 2020). This is especially true for highly engineered search spaces with a high fraction of strong architectures, since random search with a budget of k evaluations will, in expectation, find architectures in the top 100/k% of the search space. However, other works show that random search does not perform well on large, diverse search spaces (Bender et al., 2020; Real et al., 2020). Still, random search is highly recommended as a baseline comparison for new NAS algorithms (Lindauer and Hutter, 2020; Yang et al., 2020), and it can be made highly competitive by incorporating weight sharing (Li and Talwalkar, 2019), zero-cost proxies (Abdelfattah et al., 2021), or learning curve extrapolation (Yan et al., 2021b). Multiple papers (Sciuto et al., 2020; Yang et al., 2020) have also proposed a related, simpler baseline: random sampling, which reports the average performance of architectures across the entire search space.

Algorithm 1 General Reinforcement Learning NAS Algorithm
Input: Search space A, number of iterations T.
Randomly initialize weights θ of the controller architecture.
for t = 1, . . . , T do
    Train architecture a ∼ π(a; θ), sampled from the controller policy π(a; θ).
    Update controller parameters θ with a gradient step on ∇θ E_{a∼π(a;θ)}[L_val(a)].
end for
Output: Architecture selected from the trained policy π(a; θ*)
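Stripped of the training cost, the random search baseline is only a few lines. In the sketch below, the sampler and the evaluation function are synthetic stand-ins for a real search space and for trained-model validation accuracy:

```python
import random

rng = random.Random(0)

def sample_architecture():
    """Stand-in for drawing a random architecture from the search space."""
    return tuple(rng.randrange(4) for _ in range(8))

def evaluate(arch):
    """Synthetic stand-in for fully training and validating an architecture."""
    return sum(1 for op in arch if op == 0) / len(arch)

def random_search(k):
    """Evaluate k random architectures and return the best one found."""
    candidates = [sample_architecture() for _ in range(k)]
    return max(candidates, key=evaluate)

# The best of k uniform draws sits, in expectation, in roughly the top
# 100/k percent of the space (the expected best quantile is 1/(k+1)).
best = random_search(k=50)
```

With k = 50 evaluations, the returned architecture is expected to rank in roughly the top 2% of this toy space, which illustrates why random search is so competitive on heavily engineered spaces.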
In addition to random search, recent papers have shown that local search is a strong baseline for NAS on both small (Ottelander et al., 2021; White et al., 2021b) and large (Siems et al., 2020) search spaces. This is true even for the simplest form of local search: iteratively train and evaluate all of the neighbors of the best architecture found so far, where the neighborhood is typically defined as all architectures that differ by one operation or edge. Local search can be sped up substantially by using network morphisms to warm-start the optimization of neighboring architectures (Elsken et al., 2017).

3.2 Reinforcement Learning

Reinforcement learning (RL) was very prominent in the early days of modern NAS. Notably, the seminal work by Zoph and Le (2017) used RL on 800 GPUs for two weeks to obtain competitive performance on CIFAR-10 and Penn Treebank; this finding received substantial media attention and started the modern resurgence of NAS. It was followed by several more reinforcement learning approaches (Pham et al., 2018; Zoph et al., 2018).

Most reinforcement learning approaches model an architecture as a sequence of actions generated by a controller (Baker et al., 2017; Zoph and Le, 2017). The validation accuracy of each sampled architecture after training is used as a reward signal to update the controller so as to maximize its expected value; see Algorithm 1. The controller is usually a recurrent neural network (RNN) (Zoph and Le, 2017; Zoph et al., 2018) that outputs a sequence of components corresponding to an architecture. After each output architecture is trained and evaluated, the RNN parameters are updated to maximize the expected validation accuracy of output architectures, using REINFORCE (Williams, 1992; Zoph and Le, 2017) or proximal policy optimization (Schulman et al., 2017; Zoph et al., 2018).
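A minimal numerical sketch of the REINFORCE update in Algorithm 1 is given below. To keep it self-contained, the controller is a per-position softmax policy rather than an RNN, and the validation accuracy is a synthetic stand-in; both are simplifying assumptions, not the actual setup of Zoph and Le (2017):

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_POSITIONS, NUM_OPS = 4, 3
theta = np.zeros((NUM_POSITIONS, NUM_OPS))  # controller parameters

def policy_probs():
    z = np.exp(theta - theta.max(axis=1, keepdims=True))  # stable softmax
    return z / z.sum(axis=1, keepdims=True)

def fake_val_accuracy(arch):
    # synthetic reward: pretend operation 0 is best at every position
    return float((arch == 0).mean())

baseline = 0.0
for _ in range(200):
    probs = policy_probs()
    arch = np.array([rng.choice(NUM_OPS, p=p) for p in probs])
    reward = fake_val_accuracy(arch)
    baseline = 0.9 * baseline + 0.1 * reward  # moving-average baseline
    for pos, op in enumerate(arch):
        grad_log_pi = -probs[pos]          # gradient of log softmax ...
        grad_log_pi[op] += 1.0             # ... for the sampled action
        theta[pos] += 0.5 * (reward - baseline) * grad_log_pi

# after training, the policy should prefer operation 0
assert policy_probs()[:, 0].mean() > 1 / NUM_OPS
```

The moving-average baseline reduces the variance of the policy gradient, which is the same reason real controllers subtract an exponential moving average of past rewards.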
ENAS (Pham et al., 2018) follows a similar strategy but speeds up the reward estimation using weight sharing; we will discuss this in detail in Section 4.

More recently, RL has not been used prominently for NAS, since it has been shown to be outperformed in head-to-head comparisons by evolutionary methods (Real et al., 2019) and Bayesian optimization (Ying et al., 2019), which we will discuss next.

Neural Architecture Search: Insights from 1000 Papers

Algorithm 2 General Evolutionary NAS Algorithm
Input: Search space A, number of iterations T.
Randomly sample and train a population of architectures from the search space A.
for t = 1, . . . , T do
    Sample (based on accuracy) a set of parent architectures from the population.
    Mutate the parent architectures to generate children architectures, and train them.
    Add the children to the population, and kill off the architectures that are the oldest (or have the lowest accuracy) among the current population.
end for
Output: Architecture from the population with the highest validation accuracy.

3.3 Evolutionary and Genetic Algorithms

Decades before the recent NAS resurgence, one of the first works in NAS used an evolutionary algorithm (Miller et al., 1989). In other early works, it was common to use evolutionary algorithms to simultaneously optimize the neural architecture and its weights (Angeline et al., 1994; Floreano et al., 2008; Stanley and Miikkulainen, 2002; Stanley et al., 2009). Today, evolutionary algorithms are still popular for the optimization of architectures due to their flexibility, conceptual simplicity, and competitive results (Real et al., 2019), but the weight optimization is typically left to standard SGD-based approaches.

Evolutionary NAS algorithms work by iteratively updating a population of architectures.
In each step, one or more “parent” architectures in the population are sampled (typically based on the validation accuracy of the architectures), combined and mutated to create new “children” architectures. These architectures are then trained and added to the population, replacing individuals in the population with worse performance. See Algorithm 2.

There are many other ways in which evolutionary algorithms differ, including sampling the initial population, selecting the parents, and generating the children. For selecting the initial population, approaches include using trivial architectures (Real et al., 2017), randomly sampling architectures from the search space (Real et al., 2019; Sun et al., 2019), or using hand-picked high-performing architectures (Fujino et al., 2017).

Selecting parents from the population makes up one of the core components of the evolutionary algorithm. Perhaps the most popular method to sample parents is tournament selection (Almalaq and Zhang, 2018; Goldberg and Deb, 1991; Real et al., 2017, 2019; Sun et al., 2019, 2020), which selects the best architecture(s) out of a randomly sampled subset of the population. Other common approaches include random sampling weighted by fitness (Gibb et al., 2018; Loni et al., 2020; Song et al., 2020; Xie and Yuille, 2017), or choosing the current best architecture(s) as parents (Elsken et al., 2017; Suganuma et al., 2017, 2018). These methods trade off exploration vs. exploiting the best region found so far. One particularly successful evolutionary algorithm is regularized evolution by Real et al. (2019). This is a fairly standard evolutionary method, with the novelty of dropping the architecture in each step that has been in the population for longest, even if it has the highest performance. This method outperformed random search and RL in a head-to-head comparison and achieved state-of-the-art performance on ImageNet at the time of its release (Real et al., 2019).
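A compact sketch of Algorithm 2 with tournament selection and the aging-based removal used by regularized evolution (the operation vocabulary, mutation operator, and `val_acc` objective below are hypothetical stand-ins for real architecture training):

```python
import random

def evolve(sample_arch, mutate, evaluate, pop_size=10, iters=30, seed=0):
    """Evolutionary NAS sketch: tournament selection picks a parent,
    mutation creates a child, and aging removes the oldest individual
    each step (the regularized-evolution variant of Algorithm 2)."""
    rng = random.Random(seed)
    population = [sample_arch(rng) for _ in range(pop_size)]
    history = list(population)
    for _ in range(iters):
        tournament = rng.sample(population, 3)   # tournament selection
        parent = max(tournament, key=evaluate)
        child = mutate(parent, rng)
        population.append(child)
        history.append(child)
        population.pop(0)                        # kill off the oldest
    return max(history, key=evaluate)

ops = ["conv3x3", "conv5x5", "skip"]

def sample_arch(rng):
    return tuple(rng.choice(ops) for _ in range(4))

def mutate(arch, rng):
    i = rng.randrange(len(arch))
    return arch[:i] + (rng.choice(ops),) + arch[i + 1:]

def val_acc(arch):   # stand-in for training + validation
    return sum(op == "conv3x3" for op in arch)

best = evolve(sample_arch, mutate, val_acc)
```

Note that removing the oldest individual, rather than the worst, is exactly the "regularization" that distinguishes regularized evolution from vanilla tournament-selection evolution.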
Algorithm 3 General Bayesian Optimization NAS Algorithm
Input: Search space A, number of iterations T, acquisition function φ.
Randomly sample and train a population of architectures from the search space A.
for t = 1, . . . , T do
    Train a surrogate model based on the current population.
    Select architecture a_t by maximizing φ(a), based on the surrogate model.
    Train architecture a_t and add it to the current population.
end for
Output: Architecture from the population with the highest validation accuracy.

3.4 Bayesian Optimization

Bayesian optimization (BO; see, e.g., Frazier (2018) or Garnett (2023)) is a powerful method for optimizing expensive functions, and it has seen significant success within NAS. There are two key components to BO: (1) building a probabilistic surrogate to model the unknown objective based on past observations, and (2) defining an acquisition function to balance exploration and exploitation during the search. BO is an iterative algorithm which works by selecting the architecture that maximizes the acquisition function (computed using the surrogate), training this architecture, and retraining the surrogate using this new architecture to start the next iteration. See Algorithm 3.

Initial BO-based NAS techniques developed custom distance metrics among architectures, for example, with a specialized architecture kernel (Swersky et al., 2014), an optimal transport-inspired distance function (Kandasamy et al., 2018), or a tree-Wasserstein distance function (Nguyen et al., 2021), allowing a typical Gaussian process (GP) based surrogate with BO. However, using a standard GP surrogate often does not perform well for NAS, as search spaces are typically high-dimensional, non-continuous, and graph-like.
To overcome this, one line of work first encodes the architectures, using encodings discussed in Section 2.6, and then trains a model, such as a tree-Parzen estimator (Bergstra et al., 2011; Falkner et al., 2018), random forest (Hutter et al., 2011; Ying et al., 2019), or neural network (Springenberg et al., 2016; White et al., 2021a). Another line of work projects architecture information into a low-dimensional continuous latent space on which conventional BO can be applied effectively (Ru et al., 2020b; Wan et al., 2022a). Another class of surrogate models uses graph neural networks (Ma et al., 2019; Ru et al., 2021; Shi et al., 2020) or a graph-based kernel (Ru et al., 2021) to naturally handle the graph representation of architectures without the need for an explicit encoding.

The acquisition function, which trades off exploration and exploitation during the search, is another important design component for BO. There are various types of acquisition functions used in NAS, such as expected improvement (Jones et al., 1998; Močkus, 1975), upper confidence bound (Cox and John, 1992; Srinivas et al., 2010), and information-theoretic ones (Hennig and Schuler, 2012; Hernández-Lobato et al., 2014; Hvarfner et al., 2022; Wang and Jegelka, 2017). In NAS, optimizing the acquisition function in each round of BO is challenging due to the non-continuous search spaces, and furthermore, exhaustively evaluating acquisition function values on all possible architectures is computationally non-viable. The most common method for optimizing the acquisition function in NAS is by randomly mutating a small pool of the best architectures queried so far, and of the mutated architectures, selecting the one(s) with the highest acquisition function value (Kandasamy et al., 2018; Ma et al., 2019; Ru et al., 2021; Schneider et al., 2021; Shi et al., 2020; White et al., 2021a).
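This mutation-based acquisition optimization step can be sketched as follows (the pool size, toy architectures, and stand-in acquisition function are illustrative assumptions; in practice the acquisition value would be computed from the trained surrogate):

```python
import random

def propose_next(observed, mutate, acquisition, pool_size=3, n_mutants=10, seed=0):
    """Acquisition optimization as commonly done in NAS BO: mutate a
    small pool of the best architectures queried so far and return the
    mutant with the highest acquisition value."""
    rng = random.Random(seed)
    # Pool of the best architectures by observed validation accuracy.
    pool = sorted(observed, key=observed.get, reverse=True)[:pool_size]
    mutants = [mutate(rng.choice(pool), rng) for _ in range(n_mutants)]
    return max(mutants, key=acquisition)

ops = ["conv3x3", "conv5x5", "skip"]

def mutate(arch, rng):
    i = rng.randrange(len(arch))
    return arch[:i] + (rng.choice(ops),) + arch[i + 1:]

# Hypothetical observations: architecture -> measured validation accuracy.
observed = {("conv3x3", "skip"): 0.9,
            ("skip", "skip"): 0.7,
            ("conv5x5", "conv5x5"): 0.5}

# Stand-in acquisition; a real one balances surrogate mean and uncertainty.
def acquisition(arch):
    return sum(op == "conv3x3" for op in arch)

nxt = propose_next(observed, mutate, acquisition)
```

The proposed architecture `nxt` would then be trained, added to the observations, and the surrogate retrained, closing the BO loop of Algorithm 3.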
Other methods for optimizing the acquisition function include local search, evolutionary search, and random search (Ru et al., 2021; Shi et al., 2020; Ying et al., 2019).

3.5 Monte Carlo Tree Search

Another class of NAS methods is based on Monte Carlo Tree Search (MCTS). MCTS is the key backbone search algorithm used in AlphaGo (Silver et al., 2016) and AlphaZero (Silver et al., 2017), which achieve super-human performance in Go and chess, respectively. MCTS finds optimal decisions by recursively sampling new decisions (e.g., making a move in chess, or selecting an operation for an architecture in NAS), running stochastic rollouts to obtain the reward (such as winning a chess game, or discovering a high-performing architecture), and then backpropagating to update the weight of the initial decision. Across iterations, the algorithm builds a decision tree to bias the search towards more promising regions by balancing exploration and exploitation in decision making (Browne et al., 2012).

MCTS was first applied to NAS by Negrinho and Gordon (2017), who represented the search space and its hyperparameters using a modular language. This results in a tree-structured, extensible search space, contrary to the fixed search spaces of prior work. Wistuba (2018) introduced a similar method but with two different UCT (Upper Confidence bounds applied to Trees) algorithms. MCTS was first adapted to cell-based search spaces by using a state-action representation (Wang et al., 2018). The authors also improved sample efficiency by using a neural network to estimate the accuracy of sampled architectures, thus enabling a higher number of rollouts. This was followed up by adding further efficiency in pruning the tree by learning partitionings (Wang et al., 2020b), and by application to multi-objective NAS (Zhao et al., 2021a).

4.
One-Shot Techniques

Throughout Section 3, we have seen that the predominant methodology in the early stages of NAS research was to iteratively sample architectures from the search space, train them, and use their performance to guide the search. The main drawback of these methods, when applied without speedup techniques, is their immense computational cost, sometimes on the order of thousands of GPU days (Real et al., 2019; Zoph and Le, 2017), due to the need to train thousands of architectures independently and from scratch.3

As an alternative, one-shot techniques were introduced to avoid training each architecture from scratch, thus circumventing the associated computational burden. As of 2022, they are currently one of the most popular techniques in NAS research. Rather than training each architecture from scratch, one-shot approaches implicitly train all architectures in the search space via a single (“one-shot”) training of a hypernetwork or supernetwork.

3. On the other hand, recent developments in performance estimation and speed-up techniques (Section 5) have significantly improved the computational overhead of methods that use black-box optimization as a base, making these methods affordable for many applications and users.

Figure 5: A supernet comprises all possible architectures in the search space. Each architecture is a subnetwork (subgraph) in the supernet.

A hypernetwork is a neural network which generates the weights of other neural networks (Schmidhuber, 1992), while a supernetwork (often used synonymously with “one-shot model” in the literature) is an over-parameterized architecture that contains all possible architectures in the search space as subnetworks (see Figure 5). The idea of a supernetwork was introduced by Saxena and Verbeek (2016) and was popularized in 2018 by works such as Bender et al.
(2018), Pham et al. (2018), and Liu et al. (2019c).

Once a supernet is trained, each architecture from the search space can be evaluated by inheriting its weights from the corresponding subnet within the supernet. The reason for the scalability and efficiency of supernets is that a linear increase in the number of candidate operations only causes a linear increase in computational costs for training, but the number of subnets in the supernet increases exponentially. Therefore, supernets allow us to train an exponential number of architectures for a linear compute cost.

A key assumption made in one-shot approaches is that when using the one-shot model to evaluate architectures, the ranking of architectures is relatively consistent with the ranking one would obtain from training them independently. The extent to which this assumption holds true has been substantially debated, with work showing evidence for (Li et al., 2021c; Pham et al., 2018; Yu et al., 2020) and against (Pourchot et al., 2020; Sciuto et al., 2020; Zela et al., 2020b; Zhang et al., 2020b) the claim across various settings. The validity of the assumption is dependent on the search space design, the techniques used to train the one-shot model, and the dataset itself, and it is hard to predict to what degree the assumption will hold in a particular case (Sciuto et al., 2020; Zhang et al., 2020b).

While the supernet allows quick evaluation of all architectures, we must still decide on a search strategy, which can be as simple as running a black-box optimization algorithm while the supernet is training (such as in Pham et al. (2018)) or after the supernet is trained (such as in Bender et al. (2018)). We discuss these families of techniques in Section 4.1. A popular line of work uses gradient descent to optimize the architecture hyperparameters in tandem with training the supernet (such as DARTS (Liu et al., 2019c) and numerous subsequent methods).
We discuss this family of techniques in Section 4.2. Finally, in Section 4.3, we discuss hypernetworks. Figure 6 provides a taxonomy of one-shot families.

Figure 6: A taxonomy of the predominant one-shot families. A hypernetwork is a neural net which generates the weights of other neural nets. A supernetwork is an over-parameterized neural net that contains the set of neural nets from the search space as subnetworks, and it can be used with differentiable optimization (including DARTS and follow-ups), or non-differentiable optimization. [The taxonomy splits one-shot methods into hypernetwork methods (e.g., SMASH, GHN), non-differentiable supernetwork optimization (e.g., OFA), and differentiable supernetwork optimization (e.g., DARTS), with DARTS “fixes” for operation biases (e.g., DARTS-PT), rank disorder (e.g., SGAS), high memory (e.g., PC-DARTS), and poor generalization (e.g., Robust-DARTS).]

4.1 Non-Differentiable Supernet-Based Methods

We start by describing supernet-based methods which do not make use of differentiable optimization. Some methods in this family decouple the supernet training and architecture search: first train a supernet, and then run a black-box optimization algorithm to search for the best architecture. Other methods train a supernet while simultaneously running a non-differentiable search algorithm, such as reinforcement learning, to select subnetworks.

Bender et al. (2018), Li and Talwalkar (2019), and Guo et al. (2020b) propose simple methods to train the supernet and then use a black-box optimization algorithm to extract the best architecture from it. Bender et al.
(2018) construct the supernet by creating a separate node corresponding to an operation, in every place where there is a choice of operation; they then train the supernet as if it were a standard neural net, with one exception: nodes are randomly dropped during training, with the level of dropout increasing linearly throughout training. In follow-up work, Li and Talwalkar (2019) and Guo et al. (2020b) take this idea a step further: in each training step, they randomly sample one architecture and only update the weights of the supernet corresponding to that architecture. These techniques better mimic what is happening at evaluation time: only a subnetwork is evaluated rather than the entire supernet. Furthermore, these procedures use significantly less memory than training all the weights of a supernet. Each method concludes by using the trained supernet to quickly evaluate architectures when conducting random search (Bender et al., 2018; Li and Talwalkar, 2019) or evolutionary search (Guo et al., 2020b). The architecture identified in the end is then trained from scratch.

As will be discussed in Section 6.2, deploying neural nets in practice often comes with constraints on latency or memory. While the supernets considered thus far tend to only contain architectures of approximately the same size, Cai et al. (2020) propose a supernet containing subnetworks of various sizes. This Once-for-all (OFA) approach uses a progressive shrinking strategy which starts by sampling the largest subnetworks, and then moving

Algorithm 4 DARTS - Differentiable Architecture Search
Input: Search space A, number of iterations T, hyperparameter ξ.
Randomly initialize a one-shot model based on A with weights w and architecture hyperparameters α.
for t = 1, . . . , T do
    Perform a gradient update on the architecture weights α according to Equation 1.
    Perform a gradient update on w according to ∇wLtrain(w, α).
end for
Output: Derive the final architecture by taking the argmax of α, across all operation choices, and then retrain this architecture from scratch.

to smaller subnetworks, in order to minimize the co-adaptation among subnetworks and effectively train networks of different sizes “once for all”. In a subsequent search phase, architectures are selected based on different constraints on latency and memory. While Cai et al. (2020) use random search for this search phase, Guo et al. (2020b) proposed to improve this approach further by using evolutionary search in the search phase.

One of the earliest supernet-based approaches is ENAS (Efficient Neural Architecture Search) (Pham et al., 2018), which trains the supernet while running a search algorithm in tandem. Specifically, the search strategy is similar to the RL controller-based approach from Zoph and Le (2017) (described in Section 3.2) but estimates the performance of each architecture using a supernet. The training procedure alternates between selecting an architecture, evaluating it, updating the weights of the supernet, and updating the weights of the controller by sampling several architectures to estimate the reward for REINFORCE. While this approach searches for an architecture in tandem with training the supernet, it uses a separate controller network to guide the search. In the next section, we discuss methods which conduct the search via gradient descent using only the supernet.

4.2 Differentiable Supernet-Based Methods

In this section, we review supernet-based NAS methods that employ differentiable optimization techniques. We first describe the seminal DARTS (Differentiable Architecture Search) approach by Liu et al. (2019c), and then we move to various follow-up works and other differentiable approaches.
The DARTS approach uses a continuous relaxation of the discrete architecture search space, which enables the use of gradient descent in order to find a high-performing local optimum significantly faster than black-box optimization methods. It can be applied to any DAG-based search space which has different choices of operations on each edge, by using a “zero” operation to simulate the absence of an edge.

At the start, each edge (i, j) in the DARTS search space consists of multiple possible candidate operations o, each of which is associated with a continuous hyperparameter α_o^(i,j) ∈ [0, 1]. While the supernet is training, edge (i, j) consists of a mix of all candidate operations, weighted by each α_o^(i,j). The architecture hyperparameters α are optimized jointly with the supernet model weights w via alternating gradient descent. In particular, in order to update the architecture weights α via gradient descent, DARTS makes use of the following approximation:

∇αLval(w∗(α), α) ≈ ∇αLval(w − ξ∇wLtrain(w, α), α),    (1)

where Ltrain denotes the training loss, Lval denotes the validation loss, ξ is the learning rate, and w∗(α) denotes the weights that minimize the training loss of the architecture corresponding to α.

Figure 7: Differentiable one-shot NAS algorithms have four main steps: randomly initializing the architecture hyperparameters, optimizing the architecture hyperparameters and weights via alternating gradient descent, discretizing the optimized architecture hyperparameters, and re-training the resulting subnetwork from scratch.
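On a single edge, the continuous relaxation and the final discretization step can be illustrated with scalar stand-in operations (the three toy operations below are assumptions for illustration; real candidate operations are convolutions, pooling, skip connections, and so on):

```python
import math

def softmax(alphas):
    m = max(alphas)
    exps = [math.exp(a - m) for a in alphas]
    s = sum(exps)
    return [e / s for e in exps]

def mixed_op(x, candidate_ops, alphas):
    """DARTS continuous relaxation of one edge: the output is the
    softmax-weighted sum of all candidate operations applied to x."""
    weights = softmax(alphas)
    return sum(w * op(x) for w, op in zip(weights, candidate_ops))

def discretize(candidate_ops, alphas):
    """Discretization step: keep only the operation with the largest
    architecture weight on this edge."""
    return candidate_ops[max(range(len(alphas)), key=lambda i: alphas[i])]

ops = [lambda x: 3.0 * x,   # stand-in for a parametric op, e.g. "conv"
       lambda x: x,         # skip connection
       lambda x: 0.0]       # "zero" op, simulating an absent edge
alphas = [2.0, 0.5, -1.0]   # architecture hyperparameters for this edge

y = mixed_op(1.0, ops, alphas)      # supernet forward pass on this edge
chosen = discretize(ops, alphas)    # final architecture keeps ops[0]
```

During search, `alphas` would be updated by gradient descent through `mixed_op` (per Equation 1), while the operation weights are updated on the training loss; the snippet only shows the relaxation itself.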
In other words, in order to avoid the expensive inner optimization, w∗(α) is approximated by a single step of gradient descent (w − ξ∇wLtrain(w, α)). This is similar to MAML (Finn et al., 2017) and other works (Luketina et al., 2016; Metz et al., 2017). Although this strategy is not guaranteed to converge, Liu et al. (2019c) showed that it works well in practice with a suitable choice of ξ. After the training phase, DARTS obtains a discrete architecture by selecting the operation with the maximum value of α on each edge (the discretization step) and then re-trains it from scratch. Figure 7 provides an illustration of DARTS.

DARTS gained significant attention in the AutoML community due to its simplicity, its novelty, and the release of easy-to-use code. Furthermore, the original technique left room for improvement across various axes. Consequently, there has been a large body of follow-up work seeking to improve various parts of the DARTS approach. In the rest of the section, we cover the main categories of improvements (see Figure 6).

4.2.1 Rank Disorder

As mentioned at the start of Section 4, nearly all one-shot methods make a key assumption: the ranking of architectures evaluated with the supernet is relatively consistent with the ranking one would obtain from training them independently; when this assumption is not met, it is known as rank disorder (Li et al., 2021c; Sciuto et al., 2020). While there is considerable debate both for (Li et al., 2021c; Pham et al., 2018; Yu et al., 2020) and against (Pourchot et al., 2020; Sciuto et al., 2020; Zela et al., 2020b; Zhang et al., 2020b) the assumption, many works have attempted to reduce the problem of rank disorder.
Several methods propose to gradually increase the network depth, or to gradually prune the set of operation candidates during training, showing that this causes the weights to better adapt to the most-promising operation choices. Progressive-DARTS (Chen et al., 2019a) gradually increases the network depth while simultaneously pruning the operations with the smallest weights. SGAS (Li et al., 2020a) chooses operations throughout the training procedure, based on two criteria: selection certainty (calculated via the entropy of the operation distribution) and selection stability (calculated via the movement of the operation distribution). Finally, XNAS (Nayman et al., 2019) makes use of the exponentiated gradient algorithm (Kivinen and Warmuth, 1997), which dynamically prunes inferior operation choices during the search while also allowing the recovery of “late bloomers”, i.e., operation choices which only become accurate later in the training procedure.

4.2.2 Operation Biases

Several works show that differentiable NAS techniques tend to favor skip connections over other operation choices (Liang et al., 2019; Wang et al., 2021; Zela et al., 2020a), which might be caused by the supernet using skip connections to over-compensate for vanishing gradients (Chu et al., 2021). Various methods have been proposed to fix this bias.

DARTS+ (Liang et al., 2019) proposes an early stopping method based on the stability of the ranking of the architecture weights, while DARTS− (Chu et al., 2021) separates the skip connection weights from other operation weights via auxiliary edges. FairDARTS (Chu et al., 2020) sets all operation weights independent of all others, and then pushes these architecture weights toward zero or one in the loss function.

Taking a different approach, Wang et al. (2021) show that it is okay for skip connections to have higher weights, as long as we do not select the final architecture based on these weights.
Instead, after training the supernet, their algorithm, DARTS-PT, selects each operation whose removal causes the largest drop in the supernet's accuracy.

Rather than fixing the biases among a small hand-picked set of operations, Shen et al. (2022) instead use a search space that significantly reduces human bias: they fix a standard convolutional network and search for the kernel sizes and dilations of its operations. This simple approach is broadly applicable across computer vision, PDE solving, protein folding, and other tasks. In order to make one-shot training more efficient, their algorithm, DASH, computes the mixture-of-operations using the Fourier diagonalization of convolution.

4.2.3 Poor Test Generalization

Several works seek to improve the generalization performance of DARTS through various means. Zela et al. (2020a) and Chen and Hsieh (2020) show that DARTS often converges to sharp local minima in the loss landscape (high validation loss curvature in the architecture hyperparameter space), which, after running the discretization step, can cause the algorithm to return an architecture with poor test generalization. Robust-DARTS (Zela et al., 2020a) fixes this issue by making the training more robust through data augmentation, L2 regularization of the inner objective Ltrain, and early stopping. Similarly, rather than optimizing the training loss, Smooth-DARTS (Chen and Hsieh, 2020) optimizes the expected or worst-case training loss over a local neighborhood of the architecture hyperparameters.

Taking a different approach, GAEA (Li et al., 2021c), XD (Roberts et al., 2021), and StacNAS (Guilin et al., 2019) all use a single-level optimization rather than the typical bi-level optimization, by treating the architecture hyperparameters as normal architecture weights, showing this leads to better generalization.
Furthermore, GAEA re-parameterizes the architecture parameters over the simplex and updates them using the exponentiated gradient algorithm (similar to XNAS from Section 4.2.1), showing this is better suited to the underlying geometry of the architecture search space.

Finally, Amended-DARTS (Bi et al., 2019) and iDARTS (Zhang et al., 2021a) both take the approach of deriving more accurate approximations of the gradients of α (Equation 1), showing that this leads to a more stable optimization and better generalization.

4.2.4 High Memory Consumption

The memory required to train a supernet is much higher than for a normal neural net: it scales linearly with the size of the set of candidate operations. Recall from Section 4.1 that multiple works reduced this memory by, in each training step, masking out all operations except for the ones corresponding to one or a few subnetworks. Various works have proposed techniques to mask out operations for differentiable NAS as well, i.e., while simultaneously optimizing the architecture hyperparameters.

Cai et al. (2019) proposed ProxylessNAS, which solves this problem by modifying the BinaryConnect (Courbariaux et al., 2015) discretization method: in each training step, for each operation choice, all are masked out except one operation that is randomly chosen with probability proportional to its current value of α. Cai et al. (2019) show that this procedure converges to a single high-performing subnetwork. GDAS (Dong and Yang, 2019) and DSNAS (Hu et al., 2020; Xie et al., 2018) use a Gumbel-softmax distribution over a one-hot encoding of the operation choices, which is a different way to allow sampling single operations in each training step while maintaining differentiability.
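A sketch of this Gumbel-softmax sampling idea over a toy vector of architecture weights (stdlib-only and illustrative; real implementations run this inside an autodiff framework so the relaxed sample stays differentiable with respect to α):

```python
import math
import random

def gumbel_softmax_sample(alphas, tau=1.0, rng=random):
    """Draw a relaxed (near one-hot) sample over operation choices.
    Adding Gumbel noise to the logits and taking a softmax with a low
    temperature tau concentrates the mass on a single operation, so
    only that operation needs to be active in a training step."""
    gumbels = [-math.log(-math.log(rng.random())) for _ in alphas]
    logits = [(a + g) / tau for a, g in zip(alphas, gumbels)]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

rng = random.Random(0)
probs = gumbel_softmax_sample([2.0, 0.5, -1.0], tau=0.1, rng=rng)
active = max(range(len(probs)), key=lambda i: probs[i])  # forward only this op
```

Forwarding only the sampled operation in each step is what reduces the memory footprint relative to the full softmax mixture used by vanilla DARTS.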
PC-DARTS (Xu et al., 2019b) proposes a relatively simpler approach: at each training step, and for each edge in the DAG, a subset of channels is sampled and sent through the possible operations, while the remaining channels are directly passed on to the output. While reducing memory due to training fewer channels, this also acts as a regularizer. DrNAS (Chen et al., 2021f) also reduces memory consumption by progressively increasing the number of channels that are forwarded to the mixed operations, and progressively pruning operation choices, modeled by a Dirichlet distribution.

4.3 Hypernetworks

A hypernetwork is a neural network which generates the weights of other neural networks. Hypernetworks were first considered by Schmidhuber (1992, 1993), and the first modern application was by Ha et al. (2017), who used them to obtain better weights for a fixed LSTM architecture. Hypernetworks have since been used for a variety of tasks, including HPO (Mackay et al., 2019; Navon et al., 2021), calibrating model uncertainty (Krueger et al., 2017), and NAS (Brock et al., 2018; Zhang et al., 2018).

The first work to use hypernetworks for NAS (and among the first to use a one-shot model for NAS) was SMASH (one-Shot Model Architecture Search through Hypernetworks) (Brock et al., 2018). SMASH consists of two phases: first, train a hypernetwork to output weights for any architecture in the search space. Next, randomly sample a large set of architectures, generate their weights using the hypernetwork, and output the one with the best validation accuracy. The hypernetwork, a convolutional neural net, takes as input an architecture encoding and outputs a set of weights for that architecture. It is trained by randomly sampling an architecture, generating its weights, computing its training error, and then backpropagating through the entire system (including the hypernetwork weights).
Another hypernet-based NAS algorithm is GHN (Graph Hypernetworks) (Zhang et al., 2018). The main difference between SMASH and GHN is the architecture encoding and the architecture of the hypernetwork. Specifically, the GHN hypernetwork is a mix between a graph neural network and a standard hypernetwork. It takes as input the computational graph of an architecture a and uses message-passing operations, which are typical in GNNs, to output the weights of a. The training of the hypernetwork, and the final NAS algorithm, are both the same as in SMASH.

5. Speedup Techniques

In this section, we cover general speedup techniques for NAS algorithms, including performance prediction (Section 5.1), multi-fidelity methods (Section 5.2), meta-learning approaches (Section 5.3), and weight inheritance (Section 5.4).

5.1 Performance Prediction

A large body of work has been devoted to predicting the performance of neural networks before they are fully trained. Such techniques have the potential to greatly speed up the runtime of NAS algorithms, since they remove the need to fully train each architecture under consideration. These speedup techniques can improve nearly all types of NAS algorithms, from black-box optimization (Ru et al., 2020a; White et al., 2021c) to one-shot NAS (Xiang et al., 2021). In this section, we discuss the performance prediction techniques themselves, while in Section 5.2, we discuss methods of incorporating them into NAS algorithms.

Formally, given a search space A and architecture a ∈ A, denote the final validation accuracy obtained with a fixed training pipeline as f(a). A performance predictor f′ is defined as any function which predicts the accuracy or relative accuracy of architectures, without fully training them. In other words, evaluating f′(a) takes less time than evaluating f(a), and {f′(a) | a ∈ A} ideally has high correlation or rank correlation with {f(a) | a ∈ A}.
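For instance, the rank agreement between a cheap predictor f′ and the true accuracies f is commonly measured with the Spearman rank correlation, which a short sketch can compute directly (the accuracy values below are made up for illustration; this simple formula assumes no ties):

```python
def rank(values):
    """Rank of each value within the list (0 = smallest), assuming no ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for pos, i in enumerate(order):
        r[i] = pos
    return r

def spearman(xs, ys):
    """Spearman rank correlation for tie-free lists: 1.0 means the
    predictor ranks architectures exactly as full training would."""
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rank(xs), rank(ys)))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

true_acc  = [0.90, 0.85, 0.70, 0.60]   # f(a): full training (hypothetical)
predicted = [0.50, 0.40, 0.30, 0.20]   # f'(a): cheap predictor (hypothetical)
rho = spearman(true_acc, predicted)    # -> 1.0: perfect rank agreement
```

Note the predicted values are far from the true accuracies in absolute terms, yet the rank correlation is perfect; for guiding a search, preserving the ranking is what matters.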
In the rest of this section, we give an overview of different types of performance predictors, including learning curve extrapolation (Section 5.1.1), zero-cost proxies (Section 5.1.2), and other methods (Section 5.1.3). Note that surrogate models (Section 3.4) and one-shot models (Section 4) can also be seen as types of performance predictors.

Neural Architecture Search: Insights from 1000 Papers

[Figure 8 shows three panels: learning curve extrapolation (accuracy vs. epochs, with parametric fits such as Weibull, log log linear, log power, and Janoschek), zero-cost proxies, and data subset selection.]

Figure 8: Illustration of the main types of performance predictors: extrapolating the validation accuracy learning curve via a parametric model (left), assessing the generalizability of an architecture with a single forward pass of a single minibatch of data (middle), and training the architecture on a subset of the data (right).

5.1.1 Learning Curve Extrapolation

Learning curve extrapolation methods seek to predict the final performance of a given architecture after partially training it, by extrapolating from its so-called partial learning curve (the series of validation accuracies at all epochs so far). This can, e.g., be accomplished by fitting the partial learning curve to a parametric model (Domhan et al., 2015) (see Figure 8 (left)). Learning curve extrapolation methods can also be used together with a surrogate model: in that case, the model takes as input both an encoding of a and a partial learning curve of a, and outputs a prediction f′(a) (Baker et al., 2018; Klein et al., 2017). Learning curve extrapolation methods can be used to speed up black-box NAS algorithms (Domhan et al., 2015; Ru et al., 2020a; Yan et al., 2021b) or in conjunction with multi-fidelity algorithms such as Hyperband or BOHB (described in Section 5.2).

5.1.2 Zero-Cost Proxies

Zero-cost proxies are a recently developed family of performance prediction techniques.
The idea is to run a very fast computation (such as a single forward and backward pass of a single minibatch of data) over a set of architectures that assigns a score to each architecture, with the hope that the scores are correlated with the final accuracies (Mellor et al., 2021). These techniques get their “zero-cost” name since the overall time to score each architecture is negligible (often less than 5 seconds) compared to most other performance prediction techniques (Abdelfattah et al., 2021). While most zero-cost proxies compute architecture scores from a (single) minibatch of data, some are data-independent, computing the score solely from the initialized weights or number of parameters of the neural network.

Zero-cost proxies were first introduced by Mellor et al. (2021), who estimated the relative performance of neural networks based on how well different linear regions of the network map are separated (see Figure 8 (middle)). Since the initial technique, several new zero-cost proxies have been introduced. Abdelfattah et al. (2021) made a connection to the pruning-at-initialization literature (Lee et al., 2019b; Tanaka et al., 2020; Theis et al., 2018; Wang et al., 2020a) and used this connection to introduce five zero-cost proxies. Their best-performing method, synflow (Tanaka et al., 2020), is a data-independent method which computes the L1 path-norm of the network: it computes the sum of the product of all initialized weights in each path connecting the input to the output.

Since then, two other data-independent methods have been introduced, based on a series of synthetic proxy tasks to test scale invariances and spatial information (Li et al., 2021d), and based on approximating the neural network as a piecewise linear function (Lin et al., 2021).
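For a toy fully-connected network, the L1 path-norm that synflow computes can be obtained with a single forward pass: replace every weight by its absolute value and push an all-ones input through the network. The network below is an illustrative assumption, not taken from any paper discussed here.

```python
# Sketch of the synflow idea: with all weights replaced by |weight| and an
# all-ones "input", one forward pass computes the sum over all input-output
# paths of the product of absolute weights (the L1 path-norm), using no data.

def synflow_score(layers):
    """layers: list of weight matrices (rows = outputs, cols = inputs)."""
    x = [1.0] * len(layers[0][0])            # all-ones pseudo-input
    for W in layers:
        x = [sum(abs(w) * xi for w, xi in zip(row, x)) for row in W]
    return sum(x)

# Two-layer toy net, 2 -> 2 -> 1. The four input-output paths contribute
# |3*1| + |3*(-2)| + |(-1)*0.5| + |(-1)*1| = 3 + 6 + 0.5 + 1 = 10.5.
net = [
    [[1.0, -2.0],
     [0.5,  1.0]],
    [[3.0, -1.0]],
]
print(synflow_score(net))  # 10.5
```

Note that this is data-independent: the score depends only on the initialized weights, matching the description of synflow above.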
Other data-dependent methods make use of the neural tangent kernel (NTK) (Jacot et al., 2018), based on approximating its trace norm (Shu et al., 2021) or approximating its spectrum (Chen et al., 2021e).

Although zero-cost proxies have received significant attention since they were first introduced, recent work has shown that simple baselines such as “number of parameters” and “FLOPs” are surprisingly competitive with all leading techniques. The main downside of using zero-cost proxies is that they may be unreliable, especially on larger search spaces (Chen et al., 2022; Ning et al., 2021; White et al., 2022). They also may have biases, such as preferring larger models (Ning et al., 2021) or wide channels (Chen et al., 2022), although the biases can be removed (Krishnakumar et al., 2022).

On the other hand, recent work encourages the viewpoint that zero-cost proxies are “weak learners” which can be combined with other techniques, including other zero-cost proxies, to improve performance (Krishnakumar et al., 2022; White et al., 2022). Initial work shows that zero-cost proxies can be successfully added to both Bayesian optimization-based NAS (Shen et al., 2021; White et al., 2021c) and one-shot NAS (Xiang et al., 2021).

5.1.3 Other Low-Fidelity Predictions

Besides training for fewer epochs, other works give a low-fidelity estimate of the final accuracy by training on a subset of the training data (or a smaller, synthetically generated dataset). This is visualized in Figure 8 (right).

Multiple works have studied different subset selection algorithms, such as random sampling, entropy-based sampling (Na et al., 2021), clustering via core-sets (Shim et al., 2021), facility location (Prasad et al., 2022), and k-center (Na et al., 2021). Prasad et al. (2022) introduce adaptive subset selection to NAS, in which the subset is updated throughout training in order to maximize validation accuracy.

Such et al.
(2020) introduce generative teaching networks, which use a small set of synthetic data to train neural networks much faster than using the original real training data. The synthetic data is created using a data-generating network to match the accuracy of a network trained on real data. A related method is synthetic petri dish (Rawal et al., 2020), which evaluates architecture motifs by placing them into a small neural network and then training them using a small synthetic dataset. This latter method also explicitly optimizes the correlation between the architecture rankings obtained with the approximation and those obtained with full training.

5.2 Multi-Fidelity Algorithms

While the previous section was devoted to methods of predicting the performance of neural networks, now we cover algorithms that use these methods to run NAS efficiently.

Formally, the objective function f : X → R, which is typically expensive to fully evaluate, can be cheaply approximated by a lower-fidelity version f̂(·, b) of f(·), parameterized by the fidelity parameter b. When b = bmax, we retrieve the true function f(·) = f̂(·, bmax).

This is a generalization of the definition from Section 5.1. The fidelity parameter can denote the number of training epochs or the training data subset size, and it can make use of performance prediction techniques from the previous section. One can even use multiple fidelity parameters at a time (Kandasamy et al., 2017; Zhou et al., 2020). Next, we describe the optimization algorithms that exploit access to multi-fidelity function estimates f̂(·, b).

SuccessiveHalving (SH) (Jamieson and Talwalkar, 2016) is one of the simplest multi-fidelity algorithms. It starts by training a large number of architectures and progressively discards those that are not promising based on lower-fidelity evaluations, until only the most promising architectures are evaluated at the highest fidelity.
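The SuccessiveHalving loop just described can be sketched in a few lines. The candidate set and the cheap multi-fidelity objective below are synthetic assumptions; real SH would train architectures for `budget` epochs instead.

```python
# Minimal SuccessiveHalving sketch: evaluate all candidates at a small budget,
# keep the top 1/eta fraction, multiply the budget by eta, and repeat until
# one candidate remains.

def successive_halving(candidates, evaluate, min_budget=1, eta=3):
    budget = min_budget
    pool = list(candidates)
    while len(pool) > 1:
        scored = sorted(pool, key=lambda a: evaluate(a, budget), reverse=True)
        pool = scored[:max(1, len(pool) // eta)]   # promote top 1/eta...
        budget *= eta                              # ...to a higher fidelity
    return pool[0]

# Synthetic low-fidelity objective f_hat(a, b): "accuracy" approaches a
# candidate-specific asymptote a/10 as the budget b grows.
def evaluate(a, budget):
    return a / 10 - 1.0 / (budget + a)

best = successive_halving(range(1, 28), evaluate, min_budget=1, eta=3)
print(best)  # 27: the candidate with the highest asymptotic "accuracy"
```

With 27 candidates and eta = 3, the pool shrinks 27 → 9 → 3 → 1 while the budget grows 1 → 3 → 9, so most evaluations happen at the cheapest fidelity.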
The fidelity thresholds and the number of architectures to promote to higher fidelities are controlled by a hyperparameter. A popular improvement to SH is Hyperband (HB) (Li et al., 2018), a multi-armed bandit strategy that repeatedly calls SH as a subroutine, using different values of the minimum budget for each call. Therefore, HB hedges its bets against any single choice of the minimum budget.

While SH and HB are purely based on (smart) random search, recent works have combined HB with both Bayesian optimization and evolution. Bayesian optimization hyperband (BOHB) (Falkner et al., 2018; Lindauer et al., 2022) works similarly to HB in its first iteration, and on later iterations it fits a probabilistic surrogate model for each fidelity in order to make informed sampling decisions. Similarly, DEHB (Mallik and Awad, 2021) combines differential evolution (Storn and Price, 1997) with HB, significantly improving the later iterations of HB. ASHA (Li et al., 2020c) and ABOHB (Klein et al., 2020) improve SH and BOHB further, respectively, by making use of massively parallel asynchronous computation and early stopping strategies. Finally, EcoNAS (Zhou et al., 2020) proposes a hierarchical evolutionary search method that partitions the search space into subsets and allocates increasing fidelities to the most promising architectures in each subset.

5.3 Meta-Learning

A majority of NAS approaches consider solving a single task from scratch, ignoring previously explored solutions. However, this is in contrast to what both researchers and practitioners typically do. Often, architectures are transferred across datasets and even across tasks, and on a new task, researchers typically start with a state-of-the-art solution. So, one might ask: why run NAS from scratch rather than re-using information from, e.g., previous experiments?
This question naturally leads to the idea of meta-learning, or learning to learn (Hochreiter et al., 2001; Schmidhuber, 1987; Thrun and Pratt, 1998), which aims at improving a learning algorithm by leveraging information from past, related experiments (Hospedales et al., 2021; Vanschoren, 2019).

Wong et al. (2018) and Zimmer et al. (2021) employ meta-learning strategies in a more general automated machine learning setting. Since the focus is not on NAS, they both solely consider a small set of candidate architectures. In Wong et al. (2018), tasks are encoded in a similar fashion as word embeddings in NLP (Mikolov et al., 2013). In contrast, Zimmer et al. (2021) simply warm-start their search based on previously well-performing configurations.

Lian et al. (2020) and Elsken et al. (2020) focus on few-shot learning: the problem of learning a new task with just a few data points for training. The authors extend gradient-based, model-agnostic meta-learning approaches such as MAML (Finn et al., 2017) and REPTILE (Nichol et al., 2018) to not only meta-learn an initial set of weights for a fixed neural network architecture, but also the architecture itself, by incorporating a differentiable method such as DARTS (Liu et al., 2019c) into the meta-learning algorithm.

The work by Lee et al. (2021) is neither restricted to few-shot learning nor to choosing architectures from a small set of candidates. Rather, they employ typical NAS search spaces such as the ones discussed in Section 2. The authors propose a novel set encoder to improve upon deep sets (Zaheer et al., 2017) and set transformers (Lee et al., 2019a). A graph neural network-based decoder is employed to generate neural architectures given a set encoding. Additionally, a graph neural network is employed to encode generated architectures.
The architecture encoding in combination with the set encoding is then used to meta-learn a surrogate model to predict the performance of an (architecture, dataset) tuple. Shala et al. (2022) extend the work by Lee et al. (2021) by employing the dataset and architecture encodings within a Bayesian optimization framework, resulting in a probabilistic surrogate predictor. This further enables adapting the surrogate to datapoints seen at test time.

5.4 Weight Inheritance and Network Morphisms

While black-box optimization-based NAS algorithms train each architecture from scratch, and one-shot methods train all architectures with the same set of weights, a line of work proposes an in-between solution: reuse the weights of trained architectures on similar untrained architectures. This idea is especially helpful for black-box optimization approaches that apply only small, sequential changes to architectures when generating a new candidate architecture. For example, Real et al. (2017) propose to copy the weights of all layers that have not been affected by applied mutations from the parent architecture to its offspring.

This idea has also been extended by the concept of network morphisms (Chen et al., 2016; Wei et al., 2016). Network morphisms are operators acting on the space of neural network architectures. They change the architecture of a neural network without changing the function it represents, i.e., given an arbitrary input, the output remains identical for the original architecture and the architecture modified by a network morphism. This is typically achieved by properly initializing the modified architecture. Network morphisms have been employed in evolutionary algorithms (Elsken et al., 2017, 2019a; Schorn et al., 2020; Wistuba, 2019), reinforcement learning (Cai et al., 2018a,b), Bayesian optimization (Jin et al., 2019b), and even one-shot methods (Fang et al., 2020).

6.
Extensions

The previous sections studied the main techniques from the classic instantiation of NAS. In this section, we survey a few common extensions: joint NAS + HPO, constrained/multi-objective NAS, and neural ensemble search.

6.1 Joint NAS + HPO

While a large body of the NAS literature assumes fixed hyperparameters in their experimental setup, it has been shown – perhaps not very surprisingly – that hyperparameters also play a significant role. For example, on the DARTS search space, tuning hyperparameters can lead to a huge improvement, exceeding the performance gains obtained by NAS (Yang et al., 2020). However, the best hyperparameters may vary significantly across architectures even in the same search space (Yang et al., 2020). Therefore, a recent body of work seeks to overcome these challenges and give efficient algorithms for NAS + HPO (Dai et al., 2021; Dong et al., 2020; Izquierdo et al., 2021; Zela et al., 2018; Zhou et al., 2021).

Running joint NAS + HPO is significantly more challenging than running NAS or HPO in isolation. First, the complexity of the search space is substantially increased, due to the increased number of hyperparameters and the heterogeneity of the hyperparameters. Second, the interaction between architectures and training hyperparameters in terms of network performance is difficult to model. Furthermore, some hyperparameters can have different effects on the performance under different evaluation budgets, reducing the effectiveness of many multi-fidelity and performance prediction techniques.

In light of these challenges, several solutions have been proposed.
Various methods have been introduced to homogenize the search space, such as reformulating NAS as an HPO problem with categorical hyperparameters (Zela et al., 2018), or standardizing the representation of the NAS and HPO hyperparameters by assigning continuous-valued coefficients in [0, 1] (Dong et al., 2020). The search strategies resemble standard NAS algorithms such as BO (Dai et al., 2021; Izquierdo et al., 2021; Zela et al., 2018), evolution (Dai et al., 2021; Izquierdo et al., 2021), or REINFORCE with weight sharing (Dong et al., 2020).

6.2 Constrained and Multi-Objective NAS

Although NAS has been very popular in recent years, most work focuses on solely optimizing for a single objective, typically the accuracy or error rate. However, there are many settings for which this is not sufficient, such as when the neural network must be deployed on an edge device or must satisfy a legal definition of fairness. In such applications, we may need to constrain the latency, memory usage, or rate of errors across classes (Sukthanker et al., 2022). There has been particular interest in constraints related to edge devices and other hardware, termed hardware-aware NAS (Benmeziane et al., 2021). To achieve one or more objectives in addition to accuracy, the standard NAS objective is typically modified to either a constrained optimization problem (e.g., Bender et al. (2020); Cai et al. (2019); Tan et al. (2019)) or a multi-objective optimization problem (e.g., Elsken et al. (2019a); Hu et al. (2019); Izquierdo et al. (2021); Lu et al. (2019, 2020)).

In constrained optimization, one tries to solve the following equation:

min_{a∈A} f(a)  subject to  h_i(a) ≤ c_i  for i ∈ {1, . . . , k}    (2)

where f(a) denotes, as before, the original objective function (e.g., validation error), and the h_i represent hardware constraints as a function of the architecture.
This problem is often solved by transforming it into an additive or multiplicative unconstrained problem, such as min_{a∈A} f(a) + Σ_i λ_i g_i(a), with penalty functions g_i penalizing architectures not satisfying the constraints, e.g., g_i(a) = max(0, h_i(a) − c_i), and hyperparameters λ_i trading off the objectives and constraints. This single-objective optimization problem is then solved using black-box optimization methods or one-shot methods. In the latter case, the penalty functions g_i need to be differentiable, which is often not the case. Therefore, discrete metrics such as latency are relaxed to continuous variables through various techniques, such as with a Gumbel softmax function (Wu et al., 2019b).

In multi-objective optimization, the requirements in Equation 2 are treated as separate objectives that are optimized along with the original objective:

min_{a∈A} ( f(a), h_1(a), . . . , h_k(a) ).

While this can again be reduced to a single-objective problem via scalarization methods, another common approach is to search for a set of non-dominated solutions that are optimal in the sense that one cannot reduce any objective without increasing at least one other objective. The set of non-dominated solutions is called the Pareto front. The most common approach in this case is to employ multi-objective evolutionary algorithms, which maintain a population of architectures and aim to improve the Pareto front obtained from the current population by evolving the current population (Elsken et al., 2019a; Hu et al., 2019; Izquierdo et al., 2021; Lu et al., 2019). Multi-objective evolutionary algorithms have also been used in combination with weight sharing within one-shot models (Lu et al., 2020; Muñoz et al., 2022).
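Both formulations above can be sketched concretely. The objective and constraint values below are made up for illustration: the first function applies the penalty transform of Equation 2 for a single architecture, and the second extracts the Pareto front of non-dominated points when each dimension (e.g., validation error and latency) should be minimized.

```python
# (i) Penalty transform: f(a) + sum_i lam_i * max(0, h_i(a) - c_i).
def penalized(f, hs, cs, lams):
    """Penalized objective for one architecture, given constraint values hs,
    thresholds cs, and trade-off hyperparameters lams."""
    return f + sum(lam * max(0.0, h - c) for lam, h, c in zip(lams, hs, cs))

# (ii) Pareto front: keep points not weakly dominated by any other point.
def pareto_front(points):
    """points: list of tuples (error, latency, ...); smaller is better."""
    front = []
    for p in points:
        dominated = any(all(q[i] <= p[i] for i in range(len(p))) and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front

# Fabricated (validation error, latency in ms) pairs for five architectures.
archs = [(0.10, 50), (0.08, 80), (0.12, 40), (0.08, 90), (0.15, 35)]
print(pareto_front(archs))  # (0.08, 90) is dominated by (0.08, 80)

# Penalty version with latency threshold c = 60 ms and lambda = 0.01.
scores = [penalized(err, [lat], [60], [0.01]) for err, lat in archs]
print(min(range(len(archs)), key=scores.__getitem__))  # 0: (0.10, 50) wins
```

Under the penalty, the fastest-but-inaccurate and accurate-but-slow extremes both lose to the balanced architecture, which is exactly the trade-off the λ_i hyperparameters control.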
One of the most widely-studied constrained NAS problems concerns hardware efficiency, such as memory or latency, and many works have been devoted to efficiently approximating hardware metrics of interest. While simple metrics such as the number of parameters are easily computed, these are often not correlated enough with other metrics of interest such as memory or latency. Other solutions include computing hardware costs modularly as the sum of the hardware cost of each operation (Cai et al., 2019) or by using a surrogate model that predicts hardware costs (Dudziak et al., 2020; Laube et al., 2022).

6.3 Neural Ensemble Search

While the goal of neural architecture search is to return the best standalone architecture, ensembling methods are popular within the deep learning community for their robust predictions and their easy uncertainty quantification. A newly emerging extension of NAS is concerned with finding the best ensemble of neural networks with diverse architectures, which can outperform standard NAS in terms of accuracy, uncertainty calibration, and robustness to dataset shift (Zaidi et al., 2021). Neural ensemble search is defined as follows:

min_{a_1,...,a_M ∈ A} L_val( Ensemble((w*(a_1), a_1), . . . , (w*(a_M), a_M)) )    (3)
s.t.  w*(a) = argmin_w L_train(w, a)  ∀a ∈ A,

where Ensemble is the function which aggregates the outputs of f_1, . . . , f_M. Note that the search space cardinality is |A|^M rather than |A| as in standard NAS.

Zaidi et al. (2021) propose two simple yet effective procedures, based on random search and regularized evolution (Real et al., 2019), that search for architectures that optimize Equation 3. Despite their effectiveness, these algorithms take considerable computation due to the black-box nature of the optimization algorithms. Multi-headed NES (Narayanan et al., 2021) circumvents this issue by applying differentiable NAS methods on the heads of a multi-headed network.
The heads are explicitly tuned to optimize the ensemble loss together with a diversity component that encourages uncorrelated predictions coming from the individual heads. Other works have set up neural ensemble search with a one-shot model for the entire architecture. NESBS (Neural Ensemble Search via Bayesian Sampling) (Shu et al., 2022) proposes to use a supernet to estimate the ensemble performance of independently trained base learners and then use Bayesian sampling to find a high-performing ensemble. NADS (Neural Architecture Distribution Search) (Ardywibowo et al., 2020) follows a similar line by training a supernet to optimize an objective that is tailored to provide better uncertainty estimates and out-of-distribution detection. Chen et al. (2021b) run evolutionary search on the supernet to find a high-performing ensemble.

7. Applications

Along with discovering improved architectures for well-known datasets, one of the primary goals of the field of NAS is to quickly and automatically find high-performing architectures for brand new datasets and tasks. Although the majority of the NAS literature focuses on image classification, there are numerous success stories for NAS applied to less well-known settings. In this section, we discuss a few of these successes, including graph neural networks, generative adversarial networks, dense prediction, and transformers.

7.1 Graph Neural Networks

Graph neural networks (GNNs) are designed to process data represented by graphs. Using NAS to design GNNs poses unique problems: the search space for GNNs is more complex than typical convolutional search spaces, and both NAS and GNNs are independently known for their large computational overhead.

Zhou et al. (2019) initiated a line of work applying NAS to GNNs by defining a new search space with GNN-specific operations and then using a reinforcement learning strategy.
Follow-up work designed similar search spaces (Gao et al., 2020b; Zhang et al., 2021b), with specialized features such as meta-paths (Ding et al., 2021b), edge features (Jiang and Balaprakash, 2020), or fast sampling operations (Gao et al., 2020b).

Overall, the main difference between NAS for GNNs and more standard NAS settings lies in the construction of the search space. The main search strategies used by GNN NAS algorithms are typical NAS approaches: reinforcement learning (Gao et al., 2020b; Zhao et al., 2020a; Zhou et al., 2019), one-shot methods (Ding et al., 2021b; Zhao et al., 2020b), and evolutionary algorithms (Jiang and Balaprakash, 2020; Nunes and Pappa, 2020). For a detailed survey on NAS for GNNs, see Zhang et al. (2021b).

7.2 Generative Adversarial Networks

Generative adversarial networks (GANs) (Goodfellow et al., 2014) are a popular choice for generative modeling in tasks such as computer vision. GANs make use of two separate networks training in tandem: a generator and a discriminator. Due to having two separate networks, and their notoriously brittle training dynamics (Gulrajani et al., 2017), GANs require special techniques for effective NAS.

Different works have achieved improved performance via NAS by searching for only the generator architecture with a fixed discriminator (Doveh and Giryes, 2021), with a predefined progressively growing discriminator (Fu et al., 2020), or by searching both the generator and discriminator architectures simultaneously (Gong et al., 2019). The most popular choice of search space is the cell-based search space. The cell for the generator consists of a standard convolutional cell, with the addition of various upsampling operations (Ganepola and Wirasingha, 2021; Gong et al., 2019; Tian et al., 2020).
The search techniques resemble the techniques used for standard NAS: reinforcement learning (Fu et al., 2020; Tian et al., 2020; Wang and Huan, 2019), one-shot NAS (Doveh and Giryes, 2021; Gao et al., 2020a; Lutz et al., 2018), and evolutionary algorithms (Kobayashi and Nagao, 2020), with scoring based on either the Inception Score (IS) (Salimans et al., 2016) or the Fréchet Inception Distance (FID) (Heusel et al., 2017). For a comprehensive survey on NAS for GANs, see Ganepola and Wirasingha (2021).

7.3 Dense Prediction Tasks

Dense prediction for computer vision encompasses a variety of popular tasks such as semantic segmentation, object detection, optical flow, and disparity estimation, and it requires more complex architectures compared to standard image classification problems. For example, the architectures often include a decoder (Ronneberger et al., 2015), modules for generating multi-scale features (He et al., 2015), or task-specific heads (Girshick et al., 2014) in addition to the main network. Thus, NAS algorithms have been applied to search for these components, either in isolation (Chen et al., 2018; Ghiasi et al., 2019; Xu et al., 2019a) or jointly (Guo et al., 2020a; Yao et al., 2020), or by discovering novel design patterns (Du et al., 2020). For a survey on NAS for dense prediction, see Elsken et al. (2022).

Once again, standard NAS techniques are used: Guo et al. (2020a); Liu et al. (2019a); Saikia et al. (2019); Xu et al. (2019a) employ gradient-based search via DARTS (Liu et al., 2019c); Du et al. (2020); Ghiasi et al. (2019) use RL; Bender et al. (2020) is inspired by ProxylessNAS (Cai et al., 2019) and ENAS (Pham et al., 2018).

Methods for dense prediction tasks (e.g., Bender et al. (2020); Chen et al. (2019b); Guo et al. (2020a); Shaw et al. (2019); Wu et al.
(2019a)) typically build search spaces based on state-of-the-art image classification networks, augmented with task-specific components from well-performing dense prediction architectures. As many approaches fix the backbone and only search for other task-specific components of the architecture, they often employ pre-trained backbone architectures (Chen et al., 2020; Guo et al., 2020a) or even cache the features generated by a backbone (Chen et al., 2018; Nekrasov et al., 2019; Wang et al., 2020c) to speed up architecture search. Chen et al. (2018); Ghiasi et al. (2019) also use a down-scaled or different backbone architecture during the search process. Methods also sometimes employ multiple search stages, with the goal of first eliminating poorly performing architectures (or parts of the search space) and successively improving the remaining architectures (Du et al., 2020; Guo et al., 2020a).

Overall, while it is much harder to run NAS on dense prediction tasks than on image classification tasks because of the computational demands of dense prediction, there has been a rapid increase in developments with the rise of computationally efficient one-shot NAS methods. While efforts thus far have focused on semantic segmentation and object detection, avenues for future work include disparity estimation, panoptic segmentation, 3D detection and segmentation, and optical flow estimation.

7.4 Transformers

Transformers were proposed by Vaswani et al. (2017) to help with the issue of longer sequences, which RNNs had difficulty modeling, by using self-attention and cross-attention mechanisms such that each token's representation in an input sequence is computed from a weighted average of the representations of all other tokens.
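The self-attention computation just described can be sketched in a few lines (single head, toy dimensions, and no learned query/key/value projection matrices; an illustrative simplification of the full transformer layer):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def self_attention(tokens):
    """Each token's output is a softmax-weighted average of all token
    representations, weighted by dot-product similarity to the query token."""
    out = []
    for q in tokens:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) for k in tokens]
        weights = softmax(scores)
        out.append([sum(w * v[d] for w, v in zip(weights, tokens))
                    for d in range(len(q))])
    return out

# Three toy 2-dimensional token representations.
tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
for row in self_attention(tokens):
    print([round(v, 3) for v in row])
```

Each output row is a convex combination of the input rows, which is the "weighted average of the representations of all other tokens" described above.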
The core transformer design was introduced for machine translation, but it has found widespread usage in causal language modeling (Brown et al., 2020; Radford et al., 2019), masked language modeling (Clark et al., 2020; Devlin et al., 2019; Liu et al., 2019d), and more recently, computer vision (Dosovitskiy et al., 2021; Liu et al., 2021b). Since its release, there have been many efforts to improve transformers via NAS. The most common search strategies for transformers are evolutionary (Chen et al., 2021c; So et al., 2019, 2021) or one-shot (Ding et al., 2021a; Gong et al., 2021; Li et al., 2021a; Su et al., 2021). On the other hand, there is a huge variety of different search spaces that have been tried recently, relative to other areas (e.g., in NAS for convolutional architectures, the majority of works use cell-based search spaces). Overall, the field of NAS for transformers has not converged to one “best” type of search space. Below, we survey NAS methods for four types of transformers: decoder-only, encoder-only, encoder-decoder, and vision transformers. See Chitty-Venkata et al. (2022) for an in-depth survey.

Decoder-only architectures, such as the GPT line of architectures (Brown et al., 2020; Radford et al., 2019), directly consume the input text prompt and output the sequence of text tokens that are most likely to follow. Primer (So et al., 2021) is a NAS algorithm that makes use of evolutionary search on a large macro decoder-only search space. The approach found two consistent improvements to the transformer block: squaring the ReLU in the feedforward block of the transformer layer, and adding depthwise convolutions after the self-attention heads.

Encoder-only architectures, such as BERT (Devlin et al., 2019), encode the input text into a representation which can be used for many kinds of downstream tasks.
Multiple works (Xu et al., 2021a, 2022; Yin et al., 2021) seek to discover compressed versions of BERT, in which the desired latency and task are specified by the user. The typical approach is to train a supernet on a standard self-supervised task (masked language modeling), which can then be used to discover compressed models for a given language task.

Encoder-decoder architectures such as T5 (Raffel et al., 2020) are used in sequence-to-sequence tasks such as machine translation, in which the source language is encoded into a representation, which is then decoded into the target language. So et al. (2019) use evolutionary search together with a new technique to dynamically allocate more resources to more promising candidate models, while Zhao et al. (2021b) propose a DARTS-based algorithm with a new technique for memory efficiency in backpropagation. Finally, KNAS (Xu et al., 2021b) and SemiNAS (Luo et al., 2020) speed up the search using zero-cost proxies and a surrogate transformer model, respectively.

A large variety of NAS algorithms have been studied for vision transformer search spaces, with the majority using one-shot methods. AutoFormer (Chen et al., 2021c) searches over vision transformer architectures and hyperparameters using a single-path-one-shot strategy (Guo et al., 2020b) and then runs evolutionary search on the trained supernet. A follow-up work, AutoFormerv2 (Chen et al., 2021d), automated the design of the search space itself by gradually evolving different search dimensions. Other works have improved supernet training via gradient-conflict-aware training (Gong et al., 2021) or channel-aware training (Su et al., 2021). Finally, Li et al. (2021a) and Ding et al. (2021a) run one-shot methods on hybrid CNN and transformer search spaces for computer vision.

8.
Benchmarks

In the early days of NAS research, the most popular metrics were the final test accuracies on CIFAR-10 and ImageNet. This caused inconsistent search spaces and training pipelines across papers, and also drove up computational costs. For example, it became standard to train the final architecture for 600 epochs, even though the test accuracy only increases by a fraction of a percent past 200 epochs. Recently, queryable NAS benchmarks have helped the field reduce computation when developing NAS techniques and achieve fair, statistically significant comparisons between methods.

A NAS benchmark (Lindauer and Hutter, 2020) is defined as a dataset with a fixed train-test split, a search space, and a fixed evaluation pipeline for training the architectures. A tabular NAS benchmark is one that additionally gives precomputed evaluations for all possible architectures in the search space. A surrogate NAS benchmark is a NAS benchmark along with a surrogate model that can be used to predict the performance of any architecture in the search space. A NAS benchmark is queryable if it is either a tabular or a surrogate benchmark. Queryable NAS benchmarks can be used to efficiently simulate many NAS experiments using only a CPU, by querying the performance of neural networks from the benchmark rather than training them from scratch. In the rest of the section, we give an overview of popular NAS benchmarks. See Appendix Table 2 for a summary.

The first tabular NAS benchmark was NAS-Bench-101 (Ying et al., 2019). It consists of a cell-based search space of 423 624 architectures, each with precomputed validation and test accuracies on CIFAR-10 for three different seeds. A follow-up work, NAS-Bench-1Shot1 (Zela et al., 2020b), is able to simulate one-shot algorithms by defining subsets of the NAS-Bench-101 search space which have a fixed number of nodes.
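The simulation workflow that tabular benchmarks enable can be sketched in a few lines. The lookup table and accuracy values below are hypothetical stand-ins, not the real NAS-Bench-101 data or API; the point is that "evaluating" an architecture becomes a dictionary lookup, so an entire search runs in milliseconds on a CPU.

```python
import random

# Hypothetical stand-in for a tabular NAS benchmark: a lookup table from
# architecture identifiers to precomputed validation accuracies.
BENCHMARK = {arch: 0.90 + 0.001 * (arch % 50) for arch in range(1000)}

def query(arch):
    """Return the precomputed accuracy of an architecture (no training)."""
    return BENCHMARK[arch]

def simulated_random_search(n_queries, seed=0):
    """Simulate random search by querying the benchmark instead of training."""
    rng = random.Random(seed)
    best_arch, best_acc = None, float("-inf")
    for _ in range(n_queries):
        arch = rng.randrange(1000)
        acc = query(arch)
        if acc > best_acc:
            best_arch, best_acc = arch, acc
    return best_arch, best_acc

best_arch, best_acc = simulated_random_search(n_queries=100)
```

Because the seed is explicit, a simulated experiment like this can be repeated exactly, which is what makes statistically significant multi-trial comparisons cheap.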
NAS-Bench-201 (Dong and Yang, 2020) is another popular tabular NAS benchmark, consisting of 6466 unique architectures, each with precomputed validation and test accuracies on CIFAR-10, CIFAR-100, and ImageNet-16-120 for three seeds each. NATS-Bench (Dong et al., 2021b) is an extension of NAS-Bench-201 which also includes a macro search space. Another extension, HW-NAS-Bench-201 (Li et al., 2021b), gives the measured or estimated hardware cost of all architectures across six hardware devices.

Surr-NAS-Bench-DARTS (formerly called NAS-Bench-301) (Siems et al., 2020) was the first surrogate NAS benchmark, created by training 60 000 architectures from the DARTS (Liu et al., 2019c) search space on CIFAR-10 and then training a surrogate model. The authors also released Surr-NAS-Bench-FBNet for the FBNet search space (Wu et al., 2019b). A follow-up work, NAS-Bench-x11 (Yan et al., 2021b), devised a technique to predict the full learning curve, allowing the validation accuracies to be queried at arbitrary epochs, which is necessary for simulating multi-fidelity NAS algorithms.

Neural Architecture Search: Insights from 1000 Papers

TransNAS-Bench-101 (Duan et al., 2021) is a tabular benchmark that covers seven different computer vision tasks from the Taskonomy dataset (Zamir et al., 2018). Beyond computer vision, NAS-Bench-NLP (Klyuchnikov et al., 2022) consists of an LSTM-inspired search space for NLP, and NAS-Bench-ASR (Mehrotra et al., 2021) is a tabular NAS benchmark for automatic speech recognition (Garofolo, 1993). NAS-Bench-360 (Tu et al., 2022a) is a benchmark suite which provides NAS benchmarks on ten diverse problems such as prosthetics control, PDE solving, protein folding, and astronomy imaging, and is search-space agnostic, although three of the tasks have pretrained architectures on the NAS-Bench-201 search space.
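The surrogate-benchmark idea described above replaces the lookup table with a learned model, so that even search spaces too large to enumerate become queryable. The sketch below illustrates this with a deliberately simple 1-nearest-neighbour predictor over hypothetical binary architecture encodings; real surrogate benchmarks such as Surr-NAS-Bench-DARTS use far stronger regression models trained on tens of thousands of evaluated architectures, and all names here are illustrative only.

```python
# Illustrative sketch of a surrogate benchmark: fit a cheap predictor on
# (architecture encoding, accuracy) pairs, then query it for *unseen*
# architectures. The 1-NN predictor and the encodings are hypothetical.

def hamming(a, b):
    """Distance between two binary operation-choice encodings."""
    return sum(x != y for x, y in zip(a, b))

class SurrogateBenchmark:
    def __init__(self, trained_archs):
        # trained_archs: {encoding (tuple of 0/1): measured accuracy}
        self.table = dict(trained_archs)

    def query(self, encoding):
        # Exact hit: behaves like a tabular benchmark.
        if encoding in self.table:
            return self.table[encoding]
        # Unseen architecture: predict via the closest trained one.
        nearest = min(self.table, key=lambda e: hamming(e, encoding))
        return self.table[nearest]

surrogate = SurrogateBenchmark({
    (0, 0, 1, 1): 0.91,
    (1, 0, 1, 0): 0.93,
    (1, 1, 1, 1): 0.94,
})
```

The key property is that `query` returns an answer for every encoding in the search space, not only the ones that were actually trained.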
Finally, NAS-Bench-Suite (Mehta et al., 2022) is a benchmark suite which combines the majority of existing queryable NAS benchmarks (28 tasks in total) into a single unified interface. An extension, NAS-Bench-Suite-Zero, offers precomputed zero-cost proxy values across all tasks (Krishnakumar et al., 2022).

Using queryable benchmarks allows researchers to easily simulate hundreds of trials of NAS algorithms with different initial random seeds, making it easy to report statistically significant comparisons. However, over-reliance on a few benchmarks can lead to the field over-fitting (Koch et al., 2021; Raji et al., 2021) and is not conducive to the discovery of truly novel methods. Therefore, researchers should use a large set of diverse NAS benchmarks whenever possible.

9. Best Practices

The field of NAS has at times seen problems with reproducibility and fair, statistically significant comparisons among methods. These issues impede the overall research progress in the field of NAS. Recently, a few papers have laid out best practices and guidelines for conducting sound NAS research that is reproducible and makes fair comparisons (Li and Talwalkar, 2019; Lindauer and Hutter, 2020; Yang et al., 2020). These best practices are also available as a checklist (Lindauer and Hutter, 2020). We encourage NAS researchers to follow the checklist and to attach it to the appendix of their papers. Now, we summarize these best practices for NAS research.

9.1 Releasing Code and Important Details

It is nearly impossible to reproduce NAS methods without the full code. Even then, random seeds should be specified and reported. Furthermore, releasing easy-to-use code can lead to more follow-up methods and impact. For example, Liu et al. (2019c) released easy-to-use code for DARTS, which facilitated numerous follow-up works.
When releasing code, it is important to release all components, including the training pipeline(s), search space, hyperparameters, random seeds, and the NAS method. Many papers use different architecture training pipelines during the search and during the final evaluation, so it is important to include both. Note that using popular NAS benchmarks such as NAS-Bench-101 or NAS-Bench-201 (see Section 8) makes this substantially easier: the training pipeline is already fixed.

NAS methods often have several moving parts. As a result, they typically have many hyperparameters of their own that could be tuned. In fact, many NAS methods themselves make use of neural networks; one could even run a NAS algorithm on the NAS algorithm! Due to this complexity, it is important to report if, or how, these hyperparameters were tuned. When reporting results on a large set of search spaces and datasets, the best practice is to tune the hyperparameters of the NAS method on one dataset, and then fix these hyperparameters for the remaining evaluations on other datasets. We also note that, in general, devising NAS methods with fewer hyperparameters is more desirable, especially because it has recently been shown that hyperparameters often do not transfer well across datasets and search spaces (Mehta et al., 2022).

9.2 Comparing NAS Methods

When comparing NAS methods, it is not enough to use the same datasets. The exact same NAS benchmarks must be used: a dataset with a fixed train-test split, search space, and evaluation pipeline. Otherwise, it is unclear whether a difference in performance is due to the NAS algorithm or the training pipeline.

Several papers have shown that simple baselines are competitive with state-of-the-art NAS algorithms (Li and Talwalkar, 2019; Ottelander et al., 2021; Sciuto et al., 2020; White et al., 2021b).
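A minimal random-search baseline of the kind these papers advocate can be sketched as an anytime algorithm that records the best score found so far after every evaluation, so that methods can be compared on a plot of performance over time. The search space and objective below are toy stand-ins for a real benchmark, and all names are hypothetical.

```python
import random
import statistics

def random_search_trajectory(evaluate, sample, n_evals, seed):
    """Anytime random search: return the best-so-far score after each evaluation."""
    rng = random.Random(seed)
    best, curve = float("-inf"), []
    for _ in range(n_evals):
        best = max(best, evaluate(sample(rng)))
        curve.append(best)
    return curve

# Toy stand-ins; a real study would plug in a search space and a benchmark.
sample = lambda rng: rng.random()
evaluate = lambda arch: 1.0 - (arch - 0.7) ** 2   # peaks at arch = 0.7

# Run several independent trials with reported seeds, as recommended.
curves = [random_search_trajectory(evaluate, sample, n_evals=50, seed=s)
          for s in range(10)]
final_scores = [c[-1] for c in curves]
mean, std = statistics.mean(final_scores), statistics.stdev(final_scores)
```

Reporting the mean and standard deviation over many seeded trials, rather than a single run, is what makes the comparison statistically meaningful.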
When designing a new method for NAS, it is important to compare the method with baselines such as random sampling and random search. Furthermore, many NAS methods are anytime algorithms: a time budget does not necessarily need to be specified upfront, and the method can be stopped at any time, returning the best architecture found so far; the longer the NAS method runs, the better the final result. These NAS methods should be compared on a plot of performance over time. Even one-shot algorithms can be compared in this way, since the supernet can be discretized and trained at any point.

We recommend that NAS researchers run thorough ablation studies to show which part(s) of the NAS method lead to the most improved performance. As mentioned in the previous section, NAS methods often have several moving parts, so a clean understanding of the importance of each part, and of how the parts work together, is important to report. Finally, we recommend that researchers run multiple trials of their experiments and report the random seeds for each experiment. NAS methods can have high variance due to the randomness of the algorithm, so running many trials is important to verify statistically significant comparisons.

10. Resources

In this section, we discuss NAS resources including libraries (Section 10.1), other survey papers (Section 10.2), and additional resources (Section 10.3).

10.1 Libraries

A long line of engineering effort has focused on automating machine learning pipelines: Auto-WEKA (Thornton et al., 2013), Auto-Sklearn (Feurer et al., 2015), TPOT (Olson et al., 2016), and AutoGluon-Tabular (Erickson et al., 2020). More recently, a special focus has been given to developing tools that can facilitate the deployment of various NAS algorithms for practitioners, such as Auto-Keras (Jin et al., 2019a), Auto-PyTorch Tabular (Zimmer et al., 2021), AutoGluon (Erickson et al., 2020), and NNI (Microsoft, 2021).
To provide a toolbox for facilitating NAS research, both in developing new NAS methods and in applying NAS to new problem domains, various libraries have been proposed. The DeepArchitect library (Negrinho and Gordon, 2017), which separates the search space from the optimizer, was an important first step in this direction in the NAS community. NASLib (Ruchte et al., 2020) unifies and simplifies NAS research by having a single abstraction for one-shot and BBO algorithms, and a single abstraction for the search spaces of nearly all queryable NAS benchmarks. Archai (Hu et al., 2019) also provides unified abstractions for one-shot and discrete NAS algorithms. The aim of Archai is both to support reproducible rapid prototyping for NAS research and to be a turnkey solution for data scientists looking to try NAS on their tasks. PyGlove (Peng et al., 2020) introduced a novel approach to constructing NAS methods via symbolic programming, in which the ML programs are mutable and can be manipulated and processed by other programs.

10.2 Other NAS Survey Papers

There are several older NAS survey papers. Elsken et al. (2019b) provide a compact introduction to NAS and introduce the "three pillars" of NAS: search space, search strategy, and performance evaluation strategy. The survey by Wistuba et al. (2019) provides a more comprehensive view of the landscape of NAS research, unifying and categorizing existing methods. Ren et al. (2020) give an overview focused on the historical challenges in the field of NAS, as well as the solutions found to remedy these challenges.

Other surveys focus on a specific sub-area of NAS. Liu et al. (2021a) focus on evolutionary NAS, Benmeziane et al. (2021) focus on hardware-aware NAS (HW-NAS), Zhang et al. (2021b) survey AutoML (with a NAS focus) on graphs, Elsken et al.
(2022) survey NAS for dense prediction in computer vision, and Xie et al. (2021), Santra et al. (2021), and Cha et al. (2022) all survey one-shot NAS methods.

Finally, there are further survey papers with a broader focus, such as automated machine learning (AutoML) or automated deep learning (AutoDL), which devote a section to NAS (Dong et al., 2021a; He et al., 2021; Kedziora et al., 2020; Yao et al., 2018; Yu and Zhu, 2020). Notably, the first book on automated machine learning (which is open-access) was released in May 2019 by Hutter et al. (2019).

10.3 Additional Resources

There are multiple long-running workshops which focus on NAS and related topics. The AutoML workshop at ICML (2014-2021) and the Meta-Learning workshop at NeurIPS (2017-2022) have had a healthy overlap in attendance with the NAS community, especially over the last few years, while ICLR (2020, 2021) and CVPR (2021) have had workshops devoted solely to NAS. Finally, after many years of AutoML and NAS workshops, the community has grown large enough to start the first AutoML conference: https://automl.cc/.

For a continuously updated, searchable list of NAS papers, see https://www.automl.org/automl/literature-on-neural-architecture-search/. For a continuously updated list of NAS papers published at ML venues, as well as other resources, see https://github.com/D-X-Y/Awesome-AutoDL.

11. Future Directions

Neural architecture search has come a long way in the last few years. The efficiency of NAS algorithms has improved by orders of magnitude, tools exist to compare NAS algorithms without GPUs, and researchers have created many novel techniques and diverse search spaces. Architectures discovered by NAS constitute the state of the art on many tasks. However, there are still many unsolved problems and promising future directions. In this section, we discuss a few of the most important directions for future work in NAS.
11.1 Robustness of Efficient Methods

One-shot methods are one of the most popular techniques for NAS due to their orders-of-magnitude speedups over black-box optimization techniques. While one-shot techniques have already seen major progress, they still face performance issues.

Even though many improvements of one-shot algorithms such as DARTS have been proposed (see Section 4.2), these works generally focus on a single improvement; the field lacks a large-scale, fair comparison among one-shot methods. Furthermore, as it currently stands, applying one-shot methods to a new task requires a significant amount of expertise. Devising one-shot approaches that work robustly and reliably across new datasets and tasks is an important area for future study.

Another, more recent set of techniques that promises orders-of-magnitude speedups are zero-cost proxies (see Section 5.1.2). Although recent work has shown that many zero-cost proxies do not consistently outperform simple baselines (Ning et al., 2021), other work argues that there is untapped potential for zero-cost proxies (White et al., 2022), especially when combined with existing NAS techniques (White et al., 2021c; Xiang et al., 2021). Developing a better understanding of when and why zero-cost proxies work in certain settings is an important area for future research.

11.2 Going Beyond Hand-Crafted, Rigid Search Spaces

The search spaces for NAS methods are typically carefully hand-designed by human experts. While carefully designing search spaces decreases search times, it also contradicts the idea of having an automated system that can be employed by non-experts, and it limits the scope of NAS to domains where strong search spaces are available.
Furthermore, in the last few years, the most-studied type of search space by far has been the cell-based search space, which is significantly more rigid than other types of search spaces.

Hierarchical search spaces offer a better trade-off between flexibility and ease of search, yet they are relatively under-explored compared to cell-based search spaces (see Section 2.5). Furthermore, hierarchical search spaces by nature have a higher diversity than cell-based search spaces, reducing the overall human bias of the search space. Optimizing search spaces in an automated manner (Ru et al., 2020b), such as starting with large, diverse search spaces and then iteratively pruning low-performing parts of the space (Guo et al., 2020a; Radosavovic et al., 2020), could allow researchers to consider a significantly larger variety of architectures.

11.3 Fully Automated Deep Learning

Although NAS has seen a huge amount of interest, recent work has shown that on popular search spaces such as the DARTS search space, optimizing the training hyperparameters leads to a greater increase in performance than optimizing the architecture (Yang et al., 2020; Zela et al., 2020b). While these results show that for some search spaces, optimizing hyperparameters may be more important than optimizing the architecture, the best-case scenario is to optimize both the hyperparameters and the architecture simultaneously.

A new thread of research seeks to simultaneously optimize the hyperparameters and architecture: NAS + HPO (see Section 6.1). Varying hyperparameters along with the architecture also significantly reduces human bias, making it possible to discover previously unknown combinations of architectures and hyperparameters that substantially outperform existing methods.
Therefore, while this problem is significantly more challenging than NAS or HPO alone, the potential improvements are much higher.

Furthermore, we do not need to stop at just NAS + HPO: we can optimize the full deep learning pipeline, including problem formulation, data processing, data augmentation, model deployment, and continuous monitoring. In other words, the goal is to run fully automated deep learning (AutoDL) (Dong et al., 2021a). As the field of NAS matures, AutoDL has the potential to play a big role in realizing substantial improvements in performance for real-world problems.

Acknowledgments and Disclosure of Funding

This research was partially supported by TAILOR, a project funded by the EU Horizon 2020 research and innovation programme under GA No 952215. We acknowledge funding by the European Research Council (ERC) Consolidator Grant "Deep Learning 2.0" (grant no. 101045765). Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the ERC. Neither the European Union nor the ERC can be held responsible for them.

Figure 9: NAS search space terminology. Operation layers/units/primitives consist of sets of 1-3 operations. A block/module denotes a sequential stack of layers in chain-structured or macro search spaces. A cell denotes a directed acyclic graph of operations (and a motif denotes a small subset of the cell).
Figure 10: Illustration of a macro search space based on Borsos et al. (2019) (left) and a chain-structured search space based on Cai et al. (2020) (right).

A. Additional Figures and Tables

For a visualization of the search space terminologies, see Figure 9. In Figure 10, we show chain-structured and macro search spaces. Architecture encodings are illustrated in Figure 11. Finally, for an overview of NAS benchmarks, see Table 2.

Figure 11: A neural architecture (a) can be encoded using an adjacency matrix (b) or path-based representation (c), with a one-hot or categorical encoding.

Benchmark | Size | Type | Task | #Tasks
NAS-Bench-101 | 423k | cell | Image class. | 1
NATS-Bench-TSS (NAS-Bench-201) | 6k | cell | Image class. | 3
NATS-Bench-SSS | 32k | macro | Image class. | 3
NAS-Bench-NLP | > 10^53 | cell | NLP | 1
NAS-Bench-1Shot1 | 364k | cell | Image class. | 1
Surr-NAS-Bench-DARTS (NAS-Bench-301) | 10^18 | cell | Image class. | 1
Surr-NAS-Bench-FBNet | 10^21 | chain | Image class. | 1
NAS-Bench-ASR | 8k | cell | ASR | 1
TransNAS-Bench-101-Micro | 4k | cell | Var. CV | 7
TransNAS-Bench-101-Macro | 3k | macro | Var. CV | 7
NAS-Bench-111 | 423k | cell | Image class. | 1
NAS-Bench-311 | 10^18 | cell | Image class. | 1
NAS-Bench-NLP11 | > 10^53 | cell | NLP | 1
NAS-Bench-MR | 10^23 | cell | Var. CV | 9
NAS-Bench-Macro | 6k | macro | Image class. | 1
HW-NAS-Bench-201 | 6k | cell | Image class. | 3
HW-NAS-Bench-FBNet | 10^21 | chain | Image class. | 1
NAS-Bench-360 | Var. | suite | Var. | 3
NAS-Bench-Suite | Var. | suite | Var. | 25
NAS-Bench-Suite-Zero | Var. | suite | Var. | 28

Table 2: An overview of NAS benchmarks.

References

Mohamed S Abdelfattah, Abhinav Mehrotra, Łukasz Dudziak, and Nicholas Donald Lane. Zero-cost proxies for lightweight NAS. In Proceedings of the International Conference on Learning Representations (ICLR), 2021.

Abdulaziz Almalaq and Jun Jason Zhang. Evolutionary deep learning-based energy consumption prediction for buildings. IEEE Access, 7:1520–1531, 2018.

Peter J Angeline, Gregory M Saunders, and Jordan B Pollack. An evolutionary algorithm that constructs recurrent neural networks. IEEE Transactions on Neural Networks, 5(1):54–65, 1994.

Randy Ardywibowo, Shahin Boluki, Xinyu Gong, Zhangyang Wang, and Xiaoning Qian. NADS: Neural architecture distribution search for uncertainty awareness. In Proceedings of the International Conference on Machine Learning (ICML), pages 356–366. PMLR, 2020.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. Proceedings of the International Conference on Learning Representations (ICLR), 2015. arXiv preprint arXiv:1409.0473.

Bowen Baker, Otkrist Gupta, Nikhil Naik, and Ramesh Raskar. Designing neural network architectures using reinforcement learning. In Proceedings of the International Conference on Learning Representations (ICLR), 2017.

Bowen Baker, Otkrist Gupta, Ramesh Raskar, and Nikhil Naik. Accelerating neural architecture search using performance prediction. In Meta-Learning Workshop at NeurIPS, 2018.
Gabriel Bender, Pieter-Jan Kindermans, Barret Zoph, Vijay Vasudevan, and Quoc Le. Understanding and simplifying one-shot architecture search. In Proceedings of the International Conference on Machine Learning (ICML), 2018.

Gabriel Bender, Hanxiao Liu, Bo Chen, Grace Chu, Shuyang Cheng, Pieter-Jan Kindermans, and Quoc V. Le. Can weight sharing outperform random architecture search? An investigation with TuNAS. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.

Hadjer Benmeziane, Kaoutar El Maghraoui, Hamza Ouarnoughi, Smail Niar, Martin Wistuba, and Naigang Wang. A Comprehensive Survey on Hardware-Aware Neural Architecture Search. PhD thesis, LAMIH, Université Polytechnique des Hauts-de-France, 2021.

James S Bergstra, Rémi Bardenet, Yoshua Bengio, and Balázs Kégl. Algorithms for hyper-parameter optimization. In Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 2011.

Kaifeng Bi, Changping Hu, Lingxi Xie, Xin Chen, Longhui Wei, and Qi Tian. Stabilizing DARTS with amended gradient estimation on architectural parameters. arXiv preprint arXiv:1910.11831, 2019.

Zalán Borsos, Andrey Khorlin, and Andrea Gesmundo. Transfer NAS: Knowledge transfer between search spaces with transformer agents. 6th ICML Workshop on Automated Machine Learning, arXiv preprint arXiv:1906.08102, 2019.

Andrew Brock, Theo Lim, JM Ritchie, and Nick Weston. SMASH: One-shot model architecture search through hypernetworks. In Proceedings of the International Conference on Learning Representations (ICLR), 2018.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners.
Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 33:1877–1901, 2020.

Cameron B Browne, Edward Powley, Daniel Whitehouse, Simon M Lucas, Peter I Cowling, Philipp Rohlfshagen, Stephen Tavener, Diego Perez, Spyridon Samothrakis, and Simon Colton. A survey of Monte Carlo tree search methods. IEEE Transactions on Computational Intelligence and AI in Games, 4(1):1–43, 2012.

Han Cai, Tianyao Chen, Weinan Zhang, Yong Yu, and Jun Wang. Efficient architecture search by network transformation. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2018a.

Han Cai, Jiacheng Yang, Weinan Zhang, Song Han, and Yong Yu. Path-level network transformation for efficient architecture search. In Proceedings of the International Conference on Machine Learning (ICML), 2018b.

Han Cai, Ligeng Zhu, and Song Han. ProxylessNAS: Direct neural architecture search on target task and hardware. Proceedings of the International Conference on Learning Representations (ICLR), 2019.

Han Cai, Chuang Gan, Tianzhe Wang, Zhekai Zhang, and Song Han. Once-for-all: Train one network and specialize it for efficient deployment. In Proceedings of the International Conference on Learning Representations (ICLR), 2020.

Stephen Cha, Taehyeon Kim, Hayeon Lee, and Se-Young Yun. Supernet in neural architecture search: A taxonomic survey. arXiv preprint arXiv:2204.03916, 2022.

William Chan, Navdeep Jaitly, Quoc Le, and Oriol Vinyals. Listen, attend and spell: A neural network for large vocabulary conversational speech recognition. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4960–4964. IEEE, 2016.

Bo Chen, Golnaz Ghiasi, Hanxiao Liu, Tsung-Yi Lin, Dmitry Kalenichenko, Hartwig Adam, and Quoc V. Le. MnasFPN: Learning latency-aware pyramid architecture for object detection on mobile devices.
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.

Boyu Chen, Peixia Li, Chuming Li, Baopu Li, Lei Bai, Chen Lin, Ming Sun, Junjie Yan, and Wanli Ouyang. GLiT: Neural architecture search for global and local image transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 12–21, 2021a.

Hanlin Chen, Ming Lin, Xiuyu Sun, and Hao Li. NAS-Bench-Zero: A large scale dataset for understanding zero-shot neural architecture search, 2022. URL https://openreview.net/forum?id=hP-SILoczR.

Liang-Chieh Chen, Maxwell Collins, Yukun Zhu, George Papandreou, Barret Zoph, Florian Schroff, Hartwig Adam, and Jon Shlens. Searching for efficient multi-scale architectures for dense image prediction. In Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 2018.

Minghao Chen, Houwen Peng, Jianlong Fu, and Haibin Ling. One-shot neural ensemble architecture search by diversity-guided search space shrinking. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 16525–16534, 2021b.

Minghao Chen, Houwen Peng, Jianlong Fu, and Haibin Ling. AutoFormer: Searching transformers for visual recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 12270–12280, 2021c.

Minghao Chen, Kan Wu, Bolin Ni, Houwen Peng, Bei Liu, Jianlong Fu, Hongyang Chao, and Haibin Ling. Searching the search space of vision transformer. Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 34, 2021d.

Tianqi Chen, Ian J. Goodfellow, and Jonathon Shlens. Net2Net: Accelerating learning via knowledge transfer. In Proceedings of the International Conference on Learning Representations (ICLR), 2016.

Wuyang Chen, Xinyu Gong, and Zhangyang Wang.
Neural architecture search on ImageNet in four GPU hours: A theoretically inspired perspective. Proceedings of the International Conference on Learning Representations (ICLR), 2021e. arXiv preprint arXiv:2102.11535.

Xiangning Chen and Cho-Jui Hsieh. Stabilizing differentiable architecture search via perturbation-based regularization. In Proceedings of the International Conference on Machine Learning (ICML), pages 1554–1565. PMLR, 2020.

Xiangning Chen, Ruochen Wang, Minhao Cheng, Xiaocheng Tang, and Cho-Jui Hsieh. DrNAS: Dirichlet neural architecture search. In Proceedings of the International Conference on Learning Representations (ICLR), 2021f.

Xin Chen, Lingxi Xie, Jun Wu, and Qi Tian. Progressive differentiable architecture search: Bridging the depth gap between search and evaluation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1294–1303, 2019a.

Yukang Chen, Tong Yang, Xiangyu Zhang, Gaofeng Meng, Xinyu Xiao, and Jian Sun. DetNAS: Backbone search for object detection. In Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 2019b.

Krishna Teja Chitty-Venkata, Murali Emani, Venkatram Vishwanath, and Arun K Somani. Neural architecture search for transformers: A survey. IEEE Access, 2022.

Jan K Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio. Attention-based models for speech recognition. Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 28, 2015.

Aristeidis Chrostoforidis, George Kyriakides, and Konstantinos Margaritis. A novel evolutionary algorithm for hierarchical neural architecture search. arXiv preprint arXiv:2107.08484, 2021.

Xiangxiang Chu, Tianbao Zhou, Bo Zhang, and Jixiang Li. Fair DARTS: Eliminating unfair advantages in differentiable architecture search. In European Conference on Computer Vision, pages 465–480.
Springer, 2020.

Xiangxiang Chu, Xiaoxing Wang, Bo Zhang, Shun Lu, Xiaolin Wei, and Junchi Yan. DARTS-: Robustly stepping out of performance collapse without indicators. Proceedings of the International Conference on Learning Representations (ICLR), 2021. arXiv preprint arXiv:2009.01027.

Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. ELECTRA: Pre-training text encoders as discriminators rather than generators. Proceedings of the International Conference on Learning Representations (ICLR), 2020. arXiv preprint arXiv:2003.10555.

Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. BinaryConnect: Training deep neural networks with binary weights during propagations. Advances in Neural Information Processing Systems, 28, 2015.

Dennis D Cox and Susan John. A statistical method for global optimization. In [Proceedings] 1992 IEEE International Conference on Systems, Man, and Cybernetics, pages 1241–1246. IEEE, 1992.

Xiaoliang Dai, Alvin Wan, Peizhao Zhang, Bichen Wu, Zijian He, Zhen Wei, Kan Chen, Yuandong Tian, Matthew Yu, Peter Vajda, et al. FBNetV3: Joint architecture-recipe search using predictor pretraining. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 16276–16285, 2021.

Tri Dao, Nimit Sohoni, Albert Gu, Matthew Eichhorn, Amit Blonder, Megan Leszczynski, Atri Rudra, and Christopher Ré. Kaleidoscope: An efficient, learnable representation for all structured linear maps. In Proceedings of the International Conference on Learning Representations (ICLR), 2020.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT, 2019.

Mingyu Ding, Xiaochen Lian, Linjie Yang, Peng Wang, Xiaojie Jin, Zhiwu Lu, and Ping Luo.
+Hr-nas: Searching efficient high-resolution neural architectures with lightweight +43 + +White, Safari, Sukthanker, Ru, Elsken, Zela, Dey and Hutter +transformers. +In Proceedings of the IEEE/CVF Conference on Computer Vision and +Pattern Recognition, pages 2982–2992, 2021a. +Yuhui Ding, Quanming Yao, Huan Zhao, and Tong Zhang. Diffmg: Differentiable meta +graph search for heterogeneous graph neural networks. In Proceedings of the 27th ACM +SIGKDD Conference on Knowledge Discovery & Data Mining, pages 279–288, 2021b. +Tobias Domhan, Jost Tobias Springenberg, and Frank Hutter. +Speeding up automatic +hyperparameter optimization of deep neural networks by extrapolation of learning curves. +In The International Joint Conference on Artificial Intelligence (IJCAI), 2015. +Xuanyi Dong and Yi Yang. Searching for a robust neural architecture in four gpu hours. In +Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition +(CVPR), 2019. +Xuanyi Dong and Yi Yang. Nas-bench-201: Extending the scope of reproducible neural +architecture search. In Proceedings of the International Conference on Learning Repre- +sentations (ICLR), 2020. +Xuanyi Dong, Mingxing Tan, Adams Wei Yu, Daiyi Peng, Bogdan Gabrys, and Quoc V +Le. +Autohas: +Efficient hyperparameter and architecture search. +arXiv preprint +arXiv:2006.03656, 2020. +Xuanyi Dong, David Jacob Kedziora, Katarzyna Musial, and Bogdan Gabrys. Automated +deep learning: Neural architecture search is not the end. arXiv preprint arXiv:2112.09245, +2021a. +Xuanyi Dong, Lu Liu, Katarzyna Musial, and Bogdan Gabrys. Nats-bench: Benchmarking +nas algorithms for architecture topology and size. IEEE Transactions on Pattern Analysis +and Machine Intelligence, 2021b. +Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, +Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain +Gelly, et al. 
+An image is worth 16x16 words: Transformers for image recognition at +scale. Proceedings of the International Conference on Learning Representations (ICLR), +2021. arXiv preprint arXiv:2010.11929. +Sivan Doveh and Raja Giryes. +Degas: differentiable efficient generator search. +Neural +Computing and Applications, 33(24):17173–17184, 2021. +Xianzhi Du, Tsung-Yi Lin, Pengchong Jin, Golnaz Ghiasi, Mingxing Tan, Yin Cui, Quoc V. +Le, and Xiaodan Song. +Spinenet: Learning scale-permuted backbone for recognition +and localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and +Pattern Recognition (CVPR), June 2020. +Yawen Duan, Xin Chen, Hang Xu, Zewei Chen, Xiaodan Liang, Tong Zhang, and Zhen- +guo Li. Transnas-bench-101: Improving transferability and generalizability of cross-task +neural architecture search. In Proceedings of the IEEE/CVF Conference on Computer +Vision and Pattern Recognition (CVPR), pages 5251–5260, 2021. +44 + +Neural Architecture Search: Insights from 1000 Papers +Lukasz Dudziak, Thomas Chau, Mohamed Abdelfattah, Royson Lee, Hyeji Kim, and +Nicholas Lane. Brp-nas: Prediction-based nas using gcns. In Proceedings of the An- +nual Conference on Neural Information Processing Systems (NeurIPS), 2020. +Thomas Elsken, Jan-Hendrik Metzen, and Frank Hutter. Simple and efficient architecture +search for convolutional neural networks. arXiv preprint arXiv:1711.04528, 2017. +Thomas Elsken, Jan Hendrik Metzen, and Frank Hutter. Efficient multi-objective neural ar- +chitecture search via lamarckian evolution. In Proceedings of the International Conference +on Learning Representations (ICLR), 2019a. +Thomas Elsken, Jan Hendrik Metzen, and Frank Hutter. Neural architecture search: A +survey. In JMLR, 2019b. +Thomas Elsken, Benedikt Staffler, Jan Hendrik Metzen, and Frank Hutter. Meta-learning +of neural architectures for few-shot learning. In CVPR, 2020. 
Thomas Elsken, Arber Zela, Jan Hendrik Metzen, Benedikt Staffler, Thomas Brox, Abhinav Valada, and Frank Hutter. Neural architecture search for dense prediction tasks in computer vision, 2022.
Nick Erickson, Jonas Mueller, Alexander Shirkov, Hang Zhang, Pedro Larroy, Mu Li, and Alexander Smola. Autogluon-tabular: Robust and accurate automl for structured data. arXiv preprint arXiv:2003.06505, 2020.
Stefan Falkner, Aaron Klein, and Frank Hutter. Bohb: Robust and efficient hyperparameter optimization at scale. In Proceedings of the International Conference on Machine Learning (ICML), 2018.
Jiemin Fang, Yuzhu Sun, Kangjian Peng, Qian Zhang, Yuan Li, Wenyu Liu, and Xinggang Wang. Fast neural network adaptation via parameter remapping and architecture search. In Proceedings of the International Conference on Learning Representations (ICLR), 2020.
M. Feurer, A. Klein, K. Eggensperger, J. T. Springenberg, M. Blum, and F. Hutter. Efficient and robust automated machine learning. In Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), pages 2962–2970, 2015.
Matthias Feurer and Frank Hutter. Hyperparameter optimization. In Hutter et al. (2019), pages 3–38.
Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the International Conference on Machine Learning (ICML), 2017.
Dario Floreano, Peter Dürr, and Claudio Mattiussi. Neuroevolution: from architectures to learning. Evolutionary Intelligence, 1(1):47–62, 2008.
Peter I Frazier. A tutorial on bayesian optimization. stat, 1050:8, 2018.
Yonggan Fu, Wuyang Chen, Haotao Wang, Haoran Li, Yingyan Lin, and Zhangyang Wang. Autogan-distiller: searching to compress generative adversarial networks. In Proceedings of the International Conference on Machine Learning (ICML), pages 3292–3303, 2020.
Saya Fujino, Naoki Mori, and Keinosuke Matsumoto. Deep convolutional networks for human sketches by means of the evolutionary deep learning. In 2017 Joint 17th World Congress of International Fuzzy Systems Association and 9th International Conference on Soft Computing and Intelligent Systems (IFSA-SCIS), pages 1–5. IEEE, 2017.
Vayangi Vishmi Vishara Ganepola and Torin Wirasingha. Automating generative adversarial networks using neural architecture search: A review. In 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pages 577–582. IEEE, 2021.
Chen Gao, Yunpeng Chen, Si Liu, Zhenxiong Tan, and Shuicheng Yan. Adversarialnas: Adversarial neural architecture search for gans. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 5680–5689, 2020a.
Yang Gao, Hong Yang, Peng Zhang, Chuan Zhou, and Yue Hu. Graph neural architecture search. In The International Joint Conference on Artificial Intelligence (IJCAI), volume 20, pages 1403–1409, 2020b.
Roman Garnett. Bayesian Optimization. Cambridge University Press, 2023. To appear.
John S Garofolo. Timit acoustic phonetic continuous speech corpus. Linguistic Data Consortium, 1993, 1993.
Golnaz Ghiasi, Tsung-Yi Lin, and Quoc V. Le. Nas-fpn: Learning scalable feature pyramid architecture for object detection. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
Spencer Gibb, Hung Manh La, and Sushil Louis. A genetic algorithm for convolutional network structure optimization for concrete crack detection. In 2018 IEEE Congress on Evolutionary Computation (CEC), pages 1–8. IEEE, 2018.
R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In 2014 IEEE Conference on Computer Vision and Pattern Recognition, pages 580–587, 2014.
David E Goldberg and Kalyanmoy Deb. A comparative analysis of selection schemes used in genetic algorithms. In Foundations of Genetic Algorithms, volume 1, pages 69–93. Elsevier, 1991.
Chengyue Gong, Dilin Wang, Meng Li, Xinlei Chen, Zhicheng Yan, Yuandong Tian, Vikas Chandra, et al. Nasvit: Neural architecture search for efficient vision transformers with gradient conflict aware supernet training. In International Conference on Learning Representations, 2021.
Xinyu Gong, Shiyu Chang, Yifan Jiang, and Zhangyang Wang. Autogan: Neural architecture search for generative adversarial networks. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3224–3234, 2019.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 27, 2014.
Li Guilin, Zhang Xing, Wang Zitong, Li Zhenguo, and Zhang Tong. Stacnas: Towards stable and consistent optimization for differentiable neural architecture search. Openreview submission https://openreview.net/forum?id=rygpAnEKDH, 2019.
Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C Courville. Improved training of wasserstein gans. Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 30, 2017.
Jianyuan Guo, Kai Han, Yunhe Wang, Chao Zhang, Zhaohui Yang, Han Wu, Xinghao Chen, and Chang Xu. Hit-detector: Hierarchical trinity architecture search for object detection. In The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020a.
Zichao Guo, Xiangyu Zhang, Haoyuan Mu, Wen Heng, Zechun Liu, Yichen Wei, and Jian Sun. Single path one-shot neural architecture search with uniform sampling. In European Conference on Computer Vision, pages 544–560. Springer, 2020b.
David Ha, Andrew Dai, and Quoc V. Le. Hypernetworks. In Proceedings of the International Conference on Learning Representations (ICLR), 2017.
Awni Hannun, Carl Case, Jared Casper, Bryan Catanzaro, Greg Diamos, Erich Elsen, Ryan Prenger, Sanjeev Satheesh, Shubho Sengupta, Adam Coates, et al. Deep speech: Scaling up end-to-end speech recognition. arXiv preprint arXiv:1412.5567, 2014.
K. He, X. Zhang, S. Ren, and J. Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(9):1904–1916, 2015.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016a.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016b.
Xin He, Kaiyong Zhao, and Xiaowen Chu. Automl: A survey of the state-of-the-art. Knowledge-Based Systems, 212:106622, 2021.
Philipp Hennig and Christian J Schuler. Entropy search for information-efficient global optimization. Journal of Machine Learning Research, 13(Jun):1809–1837, 2012.
José Miguel Hernández-Lobato, Matthew W Hoffman, and Zoubin Ghahramani. Predictive entropy search for efficient global optimization of black-box functions. In Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), pages 918–926, 2014.
Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 30, 2017.
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
Sepp Hochreiter, A. Steven Younger, and Peter R. Conwell. Learning to learn using gradient descent. In Georg Dorffner, Horst Bischof, and Kurt Hornik, editors, Artificial Neural Networks – ICANN 2001, pages 87–94, Berlin, Heidelberg, 2001. Springer Berlin Heidelberg.
Noah Hollmann, Samuel Müller, Katharina Eggensperger, and Frank Hutter. Tabpfn: A transformer that solves small tabular classification problems in a second. arXiv preprint arXiv:2207.01848, 2022.
T. M. Hospedales, A. Antoniou, P. Micaelli, and A. J. Storkey. Meta-learning in neural networks: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021.
Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017.
Hanzhang Hu, John Langford, Rich Caruana, Saurajit Mukherjee, Eric Horvitz, and Debadeepta Dey. Efficient forward architecture search. In Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 2019.
Shou-Yong Hu, Sirui Xie, Hehui Zheng, Chunxiao Liu, Jianping Shi, Xunying Liu, and Dahua Lin. Dsnas: Direct neural architecture search without parameter retraining. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 12081–12089, 2020.
Gao Huang, Zhuang Liu, Laurens van der Maaten, and Kilian Q. Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
Frank Hutter, Holger H. Hoos, and Kevin Leyton-Brown. Sequential model-based optimization for general algorithm configuration. In Proceedings of the 5th International Conference on Learning and Intelligent Optimization, LION'05, pages 507–523, Berlin, Heidelberg, 2011. Springer-Verlag. ISBN 9783642255656. doi: 10.1007/978-3-642-25566-3_40. URL https://doi.org/10.1007/978-3-642-25566-3_40.
Frank Hutter, Lars Kotthoff, and Joaquin Vanschoren, editors. Automated Machine Learning: Methods, Systems, Challenges. Springer, 2019.
Carl Hvarfner, Frank Hutter, and Luigi Nardi. Joint entropy search for maximally-informed bayesian optimization. In Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 2022.
Sergio Izquierdo, Julia Guerrero-Viu, Sven Hauns, Guilherme Miotto, Simon Schrodi, André Biedenkapp, Thomas Elsken, Difan Deng, Marius Lindauer, and Frank Hutter. Bag of baselines for multi-objective joint neural architecture search and hyperparameter optimization. In 8th ICML Workshop on Automated Machine Learning (AutoML), 2021.
Arthur Jacot, Franck Gabriel, and Clément Hongler. Neural tangent kernel: Convergence and generalization in neural networks. Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 31, 2018.
Kevin Jamieson and Ameet Talwalkar. Non-stochastic best arm identification and hyperparameter optimization. In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS), 2016.
Mojan Javaheripi, Shital Shah, Subhabrata Mukherjee, Tomasz Lukasz Religa, Caio Cesar Teodoro Mendes, Gustavo Henrique de Rosa, Sebastien Bubeck, Farinaz Koushanfar, and Debadeepta Dey. Litetransformersearch: Training-free on-device search for efficient autoregressive language models. In Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 2022.
Shengli Jiang and Prasanna Balaprakash. Graph neural network architecture search for molecular property prediction. In 2020 IEEE International Conference on Big Data (Big Data), pages 1346–1353. IEEE, 2020.
Haifeng Jin, Qingquan Song, and Xia Hu. Auto-keras: An efficient neural architecture search system. Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2019a.
Haifeng Jin, Qingquan Song, and Xia Hu. Auto-keras: An efficient neural architecture search system. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 1946–1956. ACM, 2019b.
Donald R Jones, Matthias Schonlau, and William J Welch. Efficient global optimization of expensive black-box functions. Journal of Global Optimization, 13(4):455–492, 1998.
Arlind Kadra, Marius Lindauer, Frank Hutter, and Josif Grabocka. Regularization is all you need: Simple neural nets can excel on tabular data. arXiv preprint arXiv:2106.11189, 2021.
Kirthevasan Kandasamy, Gautam Dasarathy, Jeff Schneider, and Barnabás Póczos. Multi-fidelity Bayesian optimisation with continuous approximations. In Proceedings of the International Conference on Machine Learning (ICML), 2017.
Kirthevasan Kandasamy, Willie Neiswanger, Jeff Schneider, Barnabas Poczos, and Eric P Xing. Neural architecture search with bayesian optimisation and optimal transport. In Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 2018.
David Jacob Kedziora, Katarzyna Musial, and Bogdan Gabrys. Autonoml: Towards an integrated framework for autonomous machine learning. arXiv preprint arXiv:2012.12600, 2020.
Hiroaki Kitano. Designing neural networks using genetic algorithms with graph generation system. Complex Systems, 4(4):461–476, 1990.
Jyrki Kivinen and Manfred K Warmuth. Exponentiated gradient versus gradient descent for linear predictors. Information and Computation, 132, 1997.
Aaron Klein, Stefan Falkner, Jost Tobias Springenberg, and Frank Hutter. Learning curve prediction with bayesian neural networks. In Proceedings of the International Conference on Learning Representations (ICLR), 2017.
Aaron Klein, Louis Tiao, Thibaut Lienart, Cedric Archambeau, and Matthias Seeger. Model-based asynchronous hyperparameter and neural architecture search. arXiv preprint arXiv:2003.10865, 2020.
Nikita Klyuchnikov, Ilya Trofimov, Ekaterina Artemova, Mikhail Salnikov, Maxim Fedorov, Alexander Filippov, and Evgeny Burnaev. Nas-bench-nlp: neural architecture search benchmark for natural language processing. IEEE Access, 10:45736–45747, 2022.
Masayuki Kobayashi and Tomoharu Nagao. A multi-objective architecture search for generative adversarial networks. In Proceedings of the 2020 Genetic and Evolutionary Computation Conference Companion, pages 133–134, 2020.
Bernard Koch, Emily Denton, Alex Hanna, and Jacob G Foster. Reduced, reused and recycled: The life of a dataset in machine learning research. Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 2021. arXiv preprint arXiv:2112.01716.
Arjun Krishnakumar, Colin White, Arber Zela, Renbo Tu, Mahmoud Safari, and Frank Hutter. Nas-bench-suite-zero: Accelerating research on zero cost proxies. In Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), Datasets and Benchmarks Track, 2022.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 2012.
David Krueger, Chin-Wei Huang, Riashat Islam, Ryan Turner, Alexandre Lacoste, and Aaron Courville. Bayesian hypernetworks. arXiv preprint arXiv:1710.04759, 2017.
Deepika Kumari and Kamaljit Kaur. A survey on stereo matching techniques for 3d vision in image processing. Int. J. Eng. Manuf, 4:40–49, 2016.
Kevin Alexander Laube, Maximus Mutschler, and Andreas Zell. What to expect of hardware metric predictors in NAS, 2022. URL https://openreview.net/forum?id=2DJn3E7lXu.
Yann LeCun, Patrick Haffner, Léon Bottou, and Yoshua Bengio. Object recognition with gradient-based learning. In Shape, Contour and Grouping in Computer Vision, 1999.
Hayeon Lee, Eunyoung Hyung, and Sung Ju Hwang. Rapid neural architecture search by learning to generate graphs from datasets. In Proceedings of the International Conference on Learning Representations (ICLR), 2021.
Juho Lee, Yoonho Lee, Jungtaek Kim, Adam Kosiorek, Seungjin Choi, and Yee Whye Teh. Set transformer: A framework for attention-based permutation-invariant neural networks. In Proceedings of the International Conference on Machine Learning (ICML), 2019a.
Namhoon Lee, Thalaiyasingam Ajanthan, and Philip Torr. Snip: Single-shot network pruning based on connection sensitivity. In Proceedings of the International Conference on Learning Representations (ICLR), 2019b.
Changlin Li, Tao Tang, Guangrun Wang, Jiefeng Peng, Bing Wang, Xiaodan Liang, and Xiaojun Chang. Bossnas: Exploring hybrid cnn-transformers with block-wisely self-supervised neural architecture search. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 12281–12291, 2021a.
Chaojian Li, Zhongzhi Yu, Yonggan Fu, Yongan Zhang, Yang Zhao, Haoran You, Qixuan Yu, Yue Wang, Cong Hao, and Yingyan Lin. HW-NAS-Bench: Hardware-aware neural architecture search benchmark. In Proceedings of the International Conference on Learning Representations (ICLR), 2021b.
Guohao Li, Guocheng Qian, Itzel C Delgadillo, Matthias Muller, Ali Thabet, and Bernard Ghanem. Sgas: Sequential greedy architecture search. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 1620–1630, 2020a.
Jian Li, Yong Liu, Jiankun Liu, and Weiping Wang. Neural architecture optimization with graph vae. arXiv preprint arXiv:2006.10310, 2020b.
Liam Li and Ameet Talwalkar. Random search and reproducibility for neural architecture search. In Uncertainty in Artificial Intelligence (UAI), 2019.
Liam Li, Kevin Jamieson, Afshin Rostamizadeh, Ekaterina Gonina, Moritz Hardt, Benjamin Recht, and Ameet Talwalkar. A system for massively parallel hyperparameter tuning. In Proceedings of the Conference on Machine Learning Systems (MLSys), 2020c.
Liam Li, Mikhail Khodak, Maria-Florina Balcan, and Ameet Talwalkar. Geometry-aware gradient algorithms for neural architecture search. In Proceedings of the International Conference on Learning Representations (ICLR), 2021c.
Lisha Li, Kevin Jamieson, Giulia DeSalvo, Afshin Rostamizadeh, and Ameet Talwalkar. Hyperband: A novel bandit-based approach to hyperparameter optimization. In JMLR, 2018.
Yuhong Li, Cong Hao, Pan Li, Jinjun Xiong, and Deming Chen. Generic neural architecture search via regression. Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 34:20476–20490, 2021d.
Dongze Lian, Yin Zheng, Yintao Xu, Yanxiong Lu, Leyu Lin, Peilin Zhao, Junzhou Huang, and Shenghua Gao. Towards fast adaptation of neural architectures with meta learning. In Proceedings of the International Conference on Learning Representations (ICLR), 2020.
Hanwen Liang, Shifeng Zhang, Jiacheng Sun, Xingqiu He, Weiran Huang, Kechen Zhuang, and Zhenguo Li. Darts+: Improved differentiable architecture search with early stopping. arXiv preprint arXiv:1909.06035, 2019.
Ming Lin, Pichao Wang, Zhenhong Sun, Hesen Chen, Xiuyu Sun, Qi Qian, Hao Li, and Rong Jin. Zen-nas: A zero-shot nas for high-performance image recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 347–356, 2021.
Marius Lindauer and Frank Hutter. Best practices for scientific research on neural architecture search. In JMLR, 2020.
Marius Lindauer, Katharina Eggensperger, Matthias Feurer, André Biedenkapp, Difan Deng, Carolin Benjamins, Tim Ruhkopf, René Sass, and Frank Hutter. Smac3: A versatile bayesian optimization package for hyperparameter optimization. Journal of Machine Learning Research, 2022.
Chenxi Liu, Barret Zoph, Maxim Neumann, Jonathon Shlens, Wei Hua, Li-Jia Li, Li Fei-Fei, Alan Yuille, Jonathan Huang, and Kevin Murphy. Progressive neural architecture search. In Proceedings of the European Conference on Computer Vision (ECCV), pages 19–34, 2018a.
Chenxi Liu, Liang-Chieh Chen, Florian Schroff, Hartwig Adam, Wei Hua, Alan L. Yuille, and Li Fei-Fei. Auto-deeplab: Hierarchical neural architecture search for semantic image segmentation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019a.
Chenxi Liu, Liang-Chieh Chen, Florian Schroff, Hartwig Adam, Wei Hua, Alan L Yuille, and Li Fei-Fei. Auto-deeplab: Hierarchical neural architecture search for semantic image segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019b.
Hanxiao Liu, Karen Simonyan, Oriol Vinyals, Chrisantha Fernando, and Koray Kavukcuoglu. Hierarchical representations for efficient architecture search. In Proceedings of the International Conference on Learning Representations (ICLR), 2018b.
Hanxiao Liu, Karen Simonyan, and Yiming Yang. Darts: Differentiable architecture search. In Proceedings of the International Conference on Learning Representations (ICLR), 2019c.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach, 2019d.
Yuqiao Liu, Yanan Sun, Bing Xue, Mengjie Zhang, Gary G Yen, and Kay Chen Tan. A survey on evolutionary neural architecture search. IEEE Transactions on Neural Networks and Learning Systems, 2021a.
Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10012–10022, 2021b.
Mohammad Loni, Sima Sinaei, Ali Zoljodi, Masoud Daneshtalab, and Mikael Sjödin. Deepmaker: A multi-objective optimization framework for deep neural networks in embedded systems. Microprocessors and Microsystems, 73:102989, 2020.
Zhichao Lu, Ian Whalen, Vishnu Boddeti, Yashesh Dhebar, Kalyanmoy Deb, Erik Goodman, and Wolfgang Banzhaf. Nsga-net: Neural architecture search using multi-objective genetic algorithm. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), 2019.
Zhichao Lu, Kalyanmoy Deb, Erik Goodman, Wolfgang Banzhaf, and Vishnu Naresh Boddeti. Nsganetv2: Evolutionary multi-objective surrogate-assisted neural architecture search. In Computer Vision – ECCV 2020, pages 35–51, Cham, 2020. Springer International Publishing.
Jovita Lukasik, David Friede, Arber Zela, Frank Hutter, and Margret Keuper. Smooth variational graph embeddings for efficient neural architecture search. In International Joint Conference on Neural Networks (IJCNN), 2021.
Jovita Lukasik, Steffen Jung, and Margret Keuper. Learning where to look – generative nas is surprisingly efficient. In The European Conference on Computer Vision (ECCV), 2022.
Jelena Luketina, Mathias Berglund, Klaus Greff, and Tapani Raiko. Scalable gradient-based tuning of continuous regularization hyperparameters. In Proceedings of the International Conference on Machine Learning (ICML), pages 2952–2960, 2016.
Renqian Luo, Xu Tan, Rui Wang, Tao Qin, Enhong Chen, and Tie-Yan Liu. Semi-supervised neural architecture search. In Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 2020.
Sebastian Lutz, Konstantinos Amplianitis, and Aljoscha Smolic. Alphagan: Generative adversarial networks for natural image matting. In The British Machine Vision Conference (BMVC), 2018.
Lizheng Ma, Jiaxu Cui, and Bo Yang. Deep neural architecture search with deep graph bayesian optimization. In 2019 IEEE/WIC/ACM International Conference on Web Intelligence (WI), pages 500–507. IEEE, 2019.
Matthew Mackay, Paul Vicol, Jonathan Lorraine, David Duvenaud, and Roger Grosse. Self-tuning networks: Bilevel optimization of hyperparameters using structured best-response functions. In Proceedings of the International Conference on Learning Representations (ICLR), 2019.
Neeratyoy Mallik and Noor Awad. Dehb: Evolutionary hyperband for scalable, robust and efficient hyperparameter optimization. In The International Joint Conference on Artificial Intelligence (IJCAI), 2021.
Abhinav Mehrotra, Alberto Gil C. P. Ramos, Sourav Bhattacharya, Łukasz Dudziak, Ravichander Vipperla, Thomas Chau, Mohamed S Abdelfattah, Samin Ishtiaq, and Nicholas Donald Lane. Nas-bench-asr: Reproducible neural architecture search for speech recognition. In Proceedings of the International Conference on Learning Representations (ICLR), 2021.
Yash Mehta, Colin White, Arber Zela, Arjun Krishnakumar, Guri Zabergja, Shakiba Moradian, Mahmoud Safari, Kaicheng Yu, and Frank Hutter. Nas-bench-suite: Nas evaluation is (now) surprisingly easy. In Proceedings of the International Conference on Learning Representations (ICLR), 2022.
Joe Mellor, Jack Turner, Amos Storkey, and Elliot J Crowley. Neural architecture search without training. In Proceedings of the International Conference on Machine Learning (ICML), pages 7588–7598. PMLR, 2021.
H Mendoza, A Klein, M Feurer, J Springenberg, and F Hutter. Towards automatically-tuned neural networks. In ICML 2016 AutoML Workshop, 2016.
Luke Metz, Ben Poole, David Pfau, and Jascha Sohl-Dickstein. Unrolled generative adversarial networks. In Proceedings of the International Conference on Learning Representations (ICLR), 2017.
Microsoft. Neural Network Intelligence, 2021. URL https://github.com/microsoft/nni.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 2013.
Geoffrey F Miller, Peter M Todd, and Shailesh U Hegde. Designing neural networks using genetic algorithms. In ICGA, volume 89, pages 379–384, 1989.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, Feb 2015.
Jonas Močkus. On bayesian methods for seeking the extremum. In Optimization Techniques IFIP Technical Conference, pages 400–404. Springer, 1975.
J. Pablo Muñoz, Nikolay Lyalyushkin, Yash Akhauri, Anastasia Senina, Alexander Kozlov, and Nilesh Jain. Enabling NAS with automated super-network generation. AAAI 1st International Workshop on Practical Deep Learning in the Wild, 2022.
Byunggook Na, Jisoo Mok, Hyeokjun Choe, and Sungroh Yoon. Accelerating neural architecture search via proxy data. The International Joint Conference on Artificial Intelligence (IJCAI), 2021.
Ashwin Raaghav Narayanan, Arber Zela, Tonmoy Saikia, Thomas Brox, and Frank Hutter. Multi-headed neural ensemble search. In Workshop on Uncertainty and Robustness in Deep Learning (UDL@ICML'21), 2021.
Aviv Navon, Aviv Shamsian, Gal Chechik, and Ethan Fetaya. Learning the pareto front with hypernetworks. In Proceedings of the International Conference on Learning Representations (ICLR), 2021.
Niv Nayman, Asaf Noy, Tal Ridnik, Itamar Friedman, Rong Jin, and Lihi Zelnik. Xnas: Neural architecture search with expert advice. Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 32, 2019.
Renato Negrinho and Geoff Gordon. Deeparchitect: Automatically designing and training deep architectures. stat, 1050:28, 2017.
Vladimir Nekrasov, Hao Chen, Chunhua Shen, and Ian Reid. Fast neural architecture search of compact semantic segmentation models via auxiliary cells. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
Vu Nguyen, Tam Le, Makoto Yamada, and Michael A Osborne. Optimal transport kernels for sequential and parallel neural architecture search. In Proceedings of the International Conference on Machine Learning (ICML), pages 8084–8095. PMLR, 2021.
Alex Nichol, Joshua Achiam, and John Schulman. On first-order meta-learning algorithms. arXiv preprint, 2018.
Xuefei Ning, Yin Zheng, Tianchen Zhao, Yu Wang, and Huazhong Yang. A generic graph-based neural architecture encoding scheme for predictor-based nas. In European Conference on Computer Vision, pages 189–204. Springer, 2020.
Xuefei Ning, Changcheng Tang, Wenshuo Li, Zixuan Zhou, Shuang Liang, Huazhong Yang, and Yu Wang. Evaluating efficient performance estimators of neural architectures. Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 34, 2021.
Matheus Nunes and Gisele L Pappa. Neural architecture search in graph neural networks. In Brazilian Conference on Intelligent Systems, pages 302–317. Springer, 2020.
R. Olson, N. Bartley, R. Urbanowicz, and J. Moore. Evaluation of a Tree-based Pipeline Optimization Tool for Automating Data Science. In T. Friedrich, editor, Proceedings of the Genetic and Evolutionary Computation Conference (GECCO'16), pages 485–492. ACM, 2016.
T Den Ottelander, Arkadiy Dushatskiy, Marco Virgolin, and Peter AN Bosman. Local search is a remarkably strong baseline for neural architecture search. In International Conference on Evolutionary Multi-Criterion Optimization, 2021.
Daiyi Peng, Xuanyi Dong, Esteban Real, Mingxing Tan, Yifeng Lu, Gabriel Bender, Hanxiao Liu, Adam Kraft, Chen Liang, and Quoc Le. Pyglove: Symbolic programming for automated machine learning. In Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 2020.
Hieu Pham, Melody Guan, Barret Zoph, Quoc Le, and Jeff Dean. Efficient neural architecture search via parameters sharing. In Proceedings of the International Conference on Machine Learning (ICML), 2018.
Aloïs Pourchot, Alexis Ducarouge, and Olivier Sigaud. To share or not to share: A comprehensive appraisal of weight-sharing. arXiv preprint arXiv:2002.04289, 2020.
Vishak Prasad, Colin White, Paarth Jain, Sibasis Nayak, Rishabh Iyer, and Ganesh Ramakrishnan. Speeding up NAS with adaptive subset selection. arXiv preprint arXiv:2211.01454, 2022.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, and Piotr Dollar. Designing network design spaces. In The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140), 2020.
+Inioluwa Deborah Raji, Emily M Bender, Amandalynne Paullada, Emily Denton, and Alex +Hanna. Ai and the everything in the whole wide world benchmark. Proceedings of the +Annual Conference on Neural Information Processing Systems (NeurIPS), Datasets and +Benchmarks Track, 2021. +Aditya Rawal, Joel Lehman, Felipe Petroski Such, Jeff Clune, and Kenneth O. Stanley. +Synthetic petri dish: A novel surrogate model for rapid architecture search, 2020. +Esteban Real, Sherry Moore, Andrew Selle, Saurabh Saxena, Yutaka Leon Suematsu, Jie +Tan, Quoc V. Le, and Alexey Kurakin. Large-scale evolution of image classifiers. In +Proceedings of the International Conference on Machine Learning (ICML), 2017. +Esteban Real, Alok Aggarwal, Yanping Huang, and Quoc V Le. Regularized evolution for +image classifier architecture search. In Proceedings of the AAAI Conference on Artificial +Intelligence (AAAI), 2019. +Esteban Real, Chen Liang, David So, and Quoc Le. Automl-zero: Evolving machine learning +algorithms from scratch. +In Proceedings of the International Conference on Machine +Learning (ICML), pages 8007–8019. PMLR, 2020. +Pengzhen Ren, Yun Xiao, Xiaojun Chang, Po-Yao Huang, Zhihui Li, Xiaojiang Chen, +and Xin Wang. A comprehensive survey of neural architecture search: Challenges and +solutions. arXiv preprint arXiv:2006.02903, 2020. +56 + +Neural Architecture Search: Insights from 1000 Papers +Nicholas Roberts, Mikhail Khodak, Tri Dao, Liam Li, Christopher R´e, and Ameet Tal- +walkar. Rethinking neural operations for diverse tasks. In Proceedings of the Annual +Conference on Neural Information Processing Systems (NeurIPS), 2021. +Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for +biomedical image segmentation. In Nassir Navab, Joachim Hornegger, William M. Wells, +and Alejandro F. Frangi, editors, Medical Image Computing and Computer-Assisted In- +tervention – MICCAI 2015, 2015. 
+Binxin Ru, Clare Lyle, Lisa Schut, Mark van der Wilk, and Yarin Gal. Revisiting the train +loss: an efficient performance estimator for neural architecture search. stat, 1050:8, 2020a. +Binxin Ru, Xingchen Wan, Xiaowen Dong, and Michael Osborne. +Neural architecture +search using bayesian optimisation with weisfeiler-lehman kernel. In Proceedings of the +International Conference on Learning Representations (ICLR), 2021. +Robin Ru, Pedro Esperan¸ca, and Fabio Maria Carlucci. +Neural architecture generator +optimization. Proceedings of the Annual Conference on Neural Information Processing +Systems (NeurIPS), 33, 2020b. +Michael Ruchte, Arber Zela, Julien Siems, Josif Grabocka, and Frank Hutter. Naslib: a +modular and flexible neural architecture search library, 2020. +Tonmoy Saikia, Yassine Marrakchi, Arber Zela, Frank Hutter, and Thomas Brox. Autodisp- +net: Improving disparity estimation with automl. In The IEEE International Conference +on Computer Vision (ICCV), October 2019. +Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and +Xi Chen. Improved techniques for training gans. Proceedings of the Annual Conference +on Neural Information Processing Systems (NeurIPS), 29, 2016. +Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. +Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE con- +ference on computer vision and pattern recognition, pages 4510–4520, 2018. +Santanu Santra, Jun-Wei Hsieh, and Chi-Fang Lin. Gradient descent effects on differential +neural architecture search: A survey. IEEE Access, 9:89602–89618, 2021. +Shreyas Saxena and Jakob Verbeek. Convolutional neural fabrics. In Proceedings of the +Annual Conference on Neural Information Processing Systems (NeurIPS), 2016. +Jurgen Schmidhuber. Evolutionary principles in self-referential learning. on learning how to +learn: The meta-meta-meta...-hook. Master’s thesis, Technische Universitaet Muenchen, +Germany, 1987. 
+J¨urgen Schmidhuber. Learning to control fast-weight memories: An alternative to dynamic +recurrent networks. Neural Computation, 4(1):131–139, 1992. +J¨urgen Schmidhuber. A ‘self-referential’weight matrix. In International conference on arti- +ficial neural networks, pages 446–450. Springer, 1993. +57 + +White, Safari, Sukthanker, Ru, Elsken, Zela, Dey and Hutter +Lennart Schneider, Florian Pfisterer, Martin Binder, and Bernd Bischl. Mutation is all you +need. In 8th ICML Workshop on Automated Machine Learning (AutoML), 2021. +Christoph Schorn, Thomas Elsken, Sebastian Vogel, Armin Runge, Andre Guntoro, and +Gerd Ascheid. Automated design of error-resilient and hardware-efficient deep neural +networks. In Springer Neural Computing and Applications, 2020. +John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal +policy optimization algorithms. ArXiv, abs/1707.06347, 2017. +Christian Sciuto, Kaicheng Yu, Martin Jaggi, Claudiu Musat, and Mathieu Salzmann. Eval- +uating the search phase of neural architecture search. In Proceedings of the International +Conference on Learning Representations (ICLR), 2020. +Gresa Shala, Thomas Elsken, Frank Hutter, and Josif Grabocka. Transfer NAS with meta- +learned bayesian surrogates. In Sixth Workshop on Meta-Learning at the Conference on +Neural Information Processing Systems, 2022. +Albert Shaw, Daniel Hunter, Forrest Landola, and Sammy Sidhu. Squeezenas: Fast neural +architecture search for faster semantic segmentation. In The IEEE International Confer- +ence on Computer Vision (ICCV) Workshops, Oct 2019. +Junhong Shen, Mikhail Khodak, and Ameet Talwalkar. Efficient architecture search for +diverse tasks. In Proceedings of the Annual Conference on Neural Information Processing +Systems (NeurIPS), 2022. +Yu Shen, Yang Li, Jian Zheng, Wentao Zhang, Peng Yao, Jixiang Li, Sen Yang, Ji Liu, +and Cui Bin. 
Proxybo: Accelerating neural architecture search via bayesian optimization +with zero-cost proxies. arXiv preprint arXiv:2110.10423, 2021. +Han Shi, Renjie Pi, Hang Xu, Zhenguo Li, James Kwok, and Tong Zhang. Bridging the gap +between sample-based and one-shot neural architecture search with bonas. In Proceedings +of the Annual Conference on Neural Information Processing Systems (NeurIPS), 2020. +Jae-hun Shim, Kyeongbo Kong, and Suk-Ju Kang. Core-set sampling for efficient neural +architecture search. arXiv preprint arXiv:2107.06869, 2021. +Yao Shu, Shaofeng Cai, Zhongxiang Dai, Beng Chin Ooi, and Bryan Kian Hsiang Low. +Nasi: Label-and data-agnostic neural architecture search at initialization. In Proceedings +of the International Conference on Learning Representations (ICLR), 2021. +Yao Shu, Yizhou Chen, Zhongxiang Dai, and Bryan Low. +Neural ensemble search via +bayesian sampling. In Uncertainty in Artificial Intelligence (UAI), 2022. +Julien Siems, Lucas Zimmer, Arber Zela, Jovita Lukasik, Margret Keuper, and Frank Hut- +ter. Nas-bench-301 and the case for surrogate benchmarks for neural architecture search. +arXiv preprint arXiv:2008.09777, 2020. +58 + +Neural Architecture Search: Insights from 1000 Papers +David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van +Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc +Lanctot, et al. Mastering the game of go with deep neural networks and tree search. +Nature, 529(7587):484–489, 2016. +David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, +Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. Master- +ing the game of go without human knowledge. Nature, 550(7676):354–359, 2017. +David So, Quoc Le, and Chen Liang. +The evolved transformer. +In Proceedings of the +International Conference on Machine Learning (ICML). PMLR, 2019. +David R. So, Wojciech Ma´nke, Hanxiao Liu, Zihang Dai, Noam Shazeer, and Quoc V. 
Le. +Primer: Searching for efficient transformers for language modeling, 2021. +Gowthami Somepalli, Micah Goldblum, Avi Schwarzschild, C Bayan Bruss, and Tom Gold- +stein. Saint: Improved neural networks for tabular data via row attention and contrastive +pre-training. arXiv preprint arXiv:2106.01342, 2021. +Dehua Song, Chang Xu, Xu Jia, Yiyi Chen, Chunjing Xu, and Yunhe Wang. Efficient resid- +ual dense block search for image super-resolution. In Proceedings of the AAAI Conference +on Artificial Intelligence (AAAI), volume 34, pages 12007–12014, 2020. +Jost Tobias Springenberg, Aaron Klein, Stefan Falkner, and Frank Hutter. Bayesian opti- +mization with robust bayesian neural networks. In Proceedings of the Annual Conference +on Neural Information Processing Systems (NeurIPS), pages 4134–4142, 2016. +Niranjan Srinivas, Andreas Krause, Sham Kakade, and Matthias Seeger. Gaussian process +optimization in the bandit setting: No regret and experimental design. In Proceedings of +the 27th International Conference on Machine Learning. Omnipress, 2010. +Kenneth O Stanley and Risto Miikkulainen. Evolving neural networks through augmenting +topologies. Evolutionary computation, 10(2):99–127, 2002. +Kenneth O Stanley, David B D’Ambrosio, and Jason Gauci. A hypercube-based encoding +for evolving large-scale neural networks. Artificial life, 15(2):185–212, 2009. +Rainer Storn and Kenneth Price. Differential evolution – a simple and efficient heuristic +for global optimization over continuous spaces. J. of Global Optimization, 11(4):341–359, +dec 1997. +Xiu Su, Shan You, Jiyang Xie, Mingkai Zheng, Fei Wang, Chen Qian, Changshui Zhang, +Xiaogang Wang, and Chang Xu. Vitas: Vision transformer architecture search. arXiv +preprint arXiv:2106.13700, 2021. +Felipe Petroski Such, Aditya Rawal, Joel Lehman, Kenneth Stanley, and Jeffrey Clune. +Generative teaching networks: Accelerating neural architecture search by learning to +generate synthetic training data. 
In Proceedings of the International Conference on Ma- +chine Learning (ICML), pages 9206–9216. PMLR, 2020. +59 + +White, Safari, Sukthanker, Ru, Elsken, Zela, Dey and Hutter +Masanori Suganuma, Shinichi Shirakawa, and Tomoharu Nagao. A genetic programming +approach to designing convolutional neural network architectures. In Proceedings of the +genetic and evolutionary computation conference, pages 497–504, 2017. +Masanori Suganuma, Mete Ozay, and Takayuki Okatani. Exploiting the potential of stan- +dard convolutional autoencoders for image restoration by evolutionary search. In Pro- +ceedings of the International Conference on Machine Learning (ICML), pages 4771–4780. +PMLR, 2018. +Rhea Sukthanker, Samuel Dooley, John P Dickerson, Colin White, Frank Hutter, and Micah +Goldblum. On the importance of architectures and hyperparameters for fairness in face +recognition. arXiv preprint arXiv:2210.09943, 2022. +Yanan Sun, Bing Xue, Mengjie Zhang, and Gary G Yen. Evolving deep convolutional neural +networks for image classification. IEEE Transactions on Evolutionary Computation, 24 +(2):394–407, 2019. +Yanan Sun, Bing Xue, Mengjie Zhang, Gary G Yen, and Jiancheng Lv. Automatically +designing cnn architectures using the genetic algorithm for image classification. IEEE +transactions on cybernetics, 50(9):3840–3854, 2020. +Kevin Swersky, David Duvenaud, Jasper Snoek, Frank Hutter, and Michael A. Osborne. +Raiders of the lost architecture: Kernels for bayesian optimization in conditional param- +eter spaces. arXiv preprint arXiv:1409.4011, 2014. +Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, and Alexander A Alemi. Inception-v4, +inception-resnet and the impact of residual connections on learning. In Thirty-first AAAI +conference on artificial intelligence, 2017. +Mingxing Tan and Quoc Le. Efficientnet: Rethinking model scaling for convolutional neural +networks. In Proceedings of the International Conference on Machine Learning (ICML), +pages 6105–6114. PMLR, 2019. 
+Mingxing Tan, Bo Chen, Ruoming Pang, Vijay Vasudevan, Mark Sandler, Andrew Howard, +and Quoc V Le. +Mnasnet: Platform-aware neural architecture search for mobile. +In +Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019. +Hidenori Tanaka, Daniel Kunin, Daniel L Yamins, and Surya Ganguli. +Pruning neural +networks without any data by iteratively conserving synaptic flow. Proceedings of the +Annual Conference on Neural Information Processing Systems (NeurIPS), 33:6377–6389, +2020. +Manoel Tenorio and Wei-Tsih Lee. Self organizing neural networks for the identification +problem. Proceedings of the Annual Conference on Neural Information Processing Sys- +tems (NeurIPS), 1, 1988. +Lucas Theis, Iryna Korshunova, Alykhan Tejani, and Ferenc Husz´ar. Faster gaze prediction +with dense networks and fisher pruning. arXiv preprint arXiv:1801.05787, 2018. +60 + +Neural Architecture Search: Insights from 1000 Papers +C. Thornton, F. Hutter, H. Hoos, and K. Leyton-Brown. Auto-WEKA: combined selection +and hyperparameter optimization of classification algorithms. In I. Dhillon, Y. Koren, +R. Ghani, T. Senator, P. Bradley, R. Parekh, J. He, R. Grossman, and R. Uthurusamy, +editors, The 19th ACM SIGKDD International Conference on Knowledge Discovery and +Data Mining (KDD’13), pages 847–855, 2013. +Sebastian Thrun and Lorien Pratt. Learning to learn. In Springer Science+Business Media, +1998. +Yuan Tian, Qin Wang, Zhiwu Huang, Wen Li, Dengxin Dai, Minghao Yang, Jun Wang, and +Olga Fink. Off-policy reinforcement learning for efficient and effective gan architecture +search. In European Conference on Computer Vision, pages 175–192. Springer, 2020. +Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, +and Herv´e J´egou. Training data-efficient image transformers & distillation through at- +tention. In International Conference on Machine Learning, pages 10347–10357. PMLR, +2021. 
+Renbo Tu, Nicholas Roberts, Mikhail Khodak, Junhong Shen, Frederic Sala, and Ameet +Talwalkar. NAS-bench-360: Benchmarking neural architecture search on diverse tasks. +In Proceedings of the Annual Conference on Neural Information Processing Systems +(NeurIPS), Datasets and Benchmarks Track, 2022a. +Renbo Tu, Nicholas Roberts, Vishak Prasad, Sibasis Nayak, Paarth Jain, Frederic Sala, +Ganesh Ramakrishnan, Ameet Talwalkar, Willie Neiswanger, and Colin White. Automl +for climate change: A call to action. arXiv preprint arXiv:2210.03324, 2022b. +Joaquin Vanschoren. Meta-learning. In Hutter et al. (2019), pages 39–68. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N +Gomez, �Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Proceedings +of the Annual Conference on Neural Information Processing Systems (NeurIPS), pages +5998–6008, 2017. +Xingchen Wan, Binxin Ru, Pedro M Esparan¸ca, and Fabio Maria Carlucci. Approximate +neural architecture search via operation distribution learning. +In Proceedings of the +IEEE/CVF Winter Conference on Applications of Computer Vision, pages 2377–2386, +2022a. +Xingchen Wan, Binxin Ru, Pedro M Esperan¸ca, and Zhenguo Li. +On redundancy and +diversity in cell-based neural architecture search. +In Proceedings of the International +Conference on Learning Representations (ICLR), 2022b. +Chaoqi Wang, Guodong Zhang, and Roger Grosse. Picking winning tickets before training +by preserving gradient flow. In Proceedings of the International Conference on Learning +Representations (ICLR), 2020a. +Hanchao Wang and Jun Huan. Agan: Towards automated design of generative adversarial +networks. arXiv preprint arXiv:1906.11080, 2019. +61 + +White, Safari, Sukthanker, Ru, Elsken, Zela, Dey and Hutter +Linnan Wang, Yiyang Zhao, Yuu Jinnai, and Rodrigo Fonseca. Alphax: exploring neural +architectures with deep neural networks and monte carlo tree search. +arXiv preprint +arXiv:1805.07440, 2018. 
+Linnan Wang, Yiyang Zhao, Yuu Jinnai, Yuandong Tian, and Rodrigo Fonseca. Neural +architecture search using deep neural networks and monte carlo tree search. In Proceedings +of the AAAI Conference on Artificial Intelligence, volume 34, number 06, pages 9983– +9991, 2020b. +Ning Wang, Yang Gao, Hao Chen, Peng Wang, Zhi Tian, Chunhua Shen, and Yanning +Zhang. Nas-fcos: Fast neural architecture search for object detection. In The IEEE/CVF +Conference on Computer Vision and Pattern Recognition (CVPR), June 2020c. +Ruochen Wang, Minhao Cheng, Xiangning Chen, Xiaocheng Tang, and Cho-Jui Hsieh. +Rethinking architecture selection in differentiable nas. In Proceedings of the International +Conference on Learning Representations (ICLR), 2021. +Zi Wang and Stefanie Jegelka. Max-value entropy search for efficient bayesian optimization. +In Proceedings of the International Conference on Machine Learning (ICML), pages 3627– +3635. PMLR, 2017. +Tao Wei, Changhu Wang, Yong Rui, and Chang Wen Chen. Network morphism. In Pro- +ceedings of the International Conference on Machine Learning (ICML), 2016. +Lilian Weng. Neural architecture search, 2020. URL https://lilianweng.github.io/ +posts/2020-08-06-nas/. +Colin White, Willie Neiswanger, Sam Nolen, and Yash Savani. A study on encodings for +neural architecture search. In Proceedings of the Annual Conference on Neural Informa- +tion Processing Systems (NeurIPS), 2020. +Colin White, Willie Neiswanger, and Yash Savani. Bananas: Bayesian optimization with +neural architectures for neural architecture search. In Proceedings of the AAAI Conference +on Artificial Intelligence (AAAI), 2021a. +Colin White, Sam Nolen, and Yash Savani. Exploring the loss landscape in neural archi- +tecture search. In Uncertainty in Artificial Intelligence (UAI), pages 654–664. PMLR, +2021b. +Colin White, Arber Zela, Binxin Ru, Yang Liu, and Frank Hutter. +How powerful are +performance predictors in neural architecture search? 
+In Proceedings of the Annual +Conference on Neural Information Processing Systems (NeurIPS), 2021c. +Colin White, Mikhail Khodak, Renbo Tu, Shital Shah, S´ebastien Bubeck, and Dey De- +badeepta. A deeper look at zero-cost proxies for lightweight nas. In ICLR Blog Track, +2022. URL http://0.0.0.0:4000/2021/12/01/zero-cost-proxies/. +Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist rein- +forcement learning. Mach. Learn., 8(3–4):229–256, may 1992. +62 + +Neural Architecture Search: Insights from 1000 Papers +Martin Wistuba. Finding competitive network architectures within a day using uct. Proceed- +ings of the 5th IEEE International Conference on Data Science and Advanced Analytics, +pages 263-272, 2018. arXiv preprint arXiv:1712.07420. +Martin Wistuba. +Deep learning architecture search by neuro-cell-based evolution with +function-preserving mutations. +In Michele Berlingerio, Francesco Bonchi, Thomas +G¨artner, Neil Hurley, and Georgiana Ifrim, editors, Machine Learning and Knowledge +Discovery in Databases, pages 243–258, Cham, 2019. Springer International Publishing. +Martin Wistuba, Ambrish Rawat, and Tejaswini Pedapati. A survey on neural architecture +search. arXiv preprint arXiv:1905.01392, 2019. +Catherine Wong, Neil Houlsby, Yifeng Lu, and Andrea Gesmundo. Transfer learning with +neural automl. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, +and R. Garnett, editors, Proceedings of the Annual Conference on Neural Information +Processing Systems (NeurIPS), 2018. +Bichen Wu, Xiaoliang Dai, Peizhao Zhang, Yanghan Wang, Fei Sun, Yiming Wu, Yuandong +Tian, Peter Vajda, Yangqing Jia, and Kurt Keutzer. Fbnet: Hardware-aware efficient +convnet design via differentiable neural architecture search. In The IEEE Conference on +Computer Vision and Pattern Recognition (CVPR), June 2019a. 
+Bichen Wu, Xiaoliang Dai, Peizhao Zhang, Yanghan Wang, Fei Sun, Yiming Wu, Yuandong +Tian, Peter Vajda, Yangqing Jia, and Kurt Keutzer. Fbnet: Hardware-aware efficient con- +vnet design via differentiable neural architecture search. In Proceedings of the IEEE/CVF +Conference on Computer Vision and Pattern Recognition (CVPR), pages 10734–10742, +2019b. +Yan Wu, Zhiwu Huang, Suryansh Kumar, Rhea Sanjay Sukthanker, Radu Timofte, and Luc +Van Gool. Trilevel neural architecture search for efficient single image super-resolution. +arXiv preprint arXiv:2101.06658, 2021. +Lichuan Xiang, �Lukasz Dudziak, Mohamed S Abdelfattah, Thomas Chau, Nicholas D +Lane, and Hongkai Wen. Zero-cost proxies meet differentiable architecture search. arXiv +preprint arXiv:2106.06799, 2021. +Lingxi Xie and Alan Yuille. Genetic cnn. In Proceedings of the IEEE international confer- +ence on computer vision, pages 1379–1388, 2017. +Lingxi Xie, Xin Chen, Kaifeng Bi, Longhui Wei, Yuhui Xu, Lanfei Wang, Zhengsu Chen, +An Xiao, Jianlong Chang, Xiaopeng Zhang, et al. Weight-sharing neural architecture +search: A battle to shrink the optimization gap. ACM Computing Surveys (CSUR), 54 +(9):1–37, 2021. +Sirui Xie, Hehui Zheng, Chunxiao Liu, and Liang Lin. Snas: stochastic neural architec- +ture search. In Proceedings of the International Conference on Learning Representations +(ICLR), 2018. +63 + +White, Safari, Sukthanker, Ru, Elsken, Zela, Dey and Hutter +Hang Xu, Lewei Yao, Wei Zhang, Xiaodan Liang, and Zhenguo Li. Auto-fpn: Automatic +network architecture adaptation for object detection beyond classification. In The IEEE +International Conference on Computer Vision (ICCV), October 2019a. +Jin Xu, Xu Tan, Renqian Luo, Kaitao Song, Jian Li, Tao Qin, and Tie-Yan Liu. Nas- +bert. +Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & +Data Mining, Aug 2021a. doi: 10.1145/3447548.3467262. URL http://dx.doi.org/10. +1145/3447548.3467262. 
+Jin Xu, Xu Tan, Kaitao Song, Renqian Luo, Yichong Leng, Tao Qin, Tie-Yan Liu, and Jian +Li. Analyzing and mitigating interference in neural architecture search. In Proceedings +of the International Conference on Machine Learning (ICML). PMLR, 2022. +Jingjing Xu, Liang Zhao, Junyang Lin, Rundong Gao, Xu Sun, and Hongxia Yang. Knas: +green neural architecture search. In International Conference on Machine Learning, pages +11613–11625. PMLR, 2021b. +Yuhui Xu, Lingxi Xie, Xiaopeng Zhang, Xin Chen, Guo-Jun Qi, Qi Tian, and Hongkai +Xiong. Pc-darts: Partial channel connections for memory-efficient architecture search. In +Proceedings of the International Conference on Learning Representations (ICLR), 2019b. +Shen Yan, Yu Zheng, Wei Ao, Xiao Zeng, and Mi Zhang. Does unsupervised architecture +representation learning help neural architecture search? +In Proceedings of the Annual +Conference on Neural Information Processing Systems (NeurIPS), 2020. +Shen Yan, Kaiqiang Song, Fei Liu, and Mi Zhang. Cate: Computation-aware neural archi- +tecture encoding with transformers. In Proceedings of the International Conference on +Machine Learning (ICML), 2021a. +Shen Yan, Colin White, Yash Savani, and Frank Hutter. Nas-bench-x11 and the power of +learning curves. In Proceedings of the Annual Conference on Neural Information Process- +ing Systems (NeurIPS), 2021b. +Antoine Yang, Pedro M Esperan¸ca, and Fabio M Carlucci. +Nas evaluation is frustrat- +ingly hard. In Proceedings of the International Conference on Learning Representations +(ICLR), 2020. +Lewei Yao, Hang Xu, Wei Zhang, Xiaodan Liang, and Zhenguo Li. Sm-nas: Structural- +to-modular neural architecture search for object detection. In Proceedings of the AAAI +Conference on Artificial Intelligence (AAAI), 2020. +Quanming Yao, Mengshuo Wang, Yuqiang Chen, Wenyuan Dai, Yu-Feng Li, Wei-Wei Tu, +Qiang Yang, and Yang Yu. Taking human out of learning applications: A survey on +automated machine learning. 
arXiv preprint arXiv:1810.13306, 2018. +Yichun Yin, Cheng Chen, Lifeng Shang, Xin Jiang, Xiao Chen, and Qun Liu. Autotinybert: +Automatic hyper-parameter optimization for efficient pre-trained language models. In +ACL, 2021. +64 + +Neural Architecture Search: Insights from 1000 Papers +Chris Ying, Aaron Klein, Esteban Real, Eric Christiansen, Kevin Murphy, and Frank Hut- +ter. Nas-bench-101: Towards reproducible neural architecture search. In Proceedings of +the International Conference on Machine Learning (ICML), 2019. +Kaicheng Yu, Rene Ranftl, and Mathieu Salzmann. +How to train your super-net: An +analysis of training heuristics in weight-sharing nas. arXiv preprint arXiv:2003.04276, +2020. +Tong Yu and Hong Zhu. Hyper-parameter optimization: A review of algorithms and appli- +cations. arXiv preprint arXiv:2003.05689, 2020. +Sergey Zagoruyko and Nikos Komodakis. +Wide residual networks. +In British Machine +Vision Conference, 2016. +Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Russ R Salakhut- +dinov, and Alexander J Smola. Deep sets. In Proceedings of the Annual Conference on +Neural Information Processing Systems (NeurIPS), 2017. +Sheheryar Zaidi, Arber Zela, Thomas Elsken, Chris C Holmes, Frank Hutter, and Yee Teh. +Neural ensemble search for uncertainty estimation and dataset shift. Proceedings of the +Annual Conference on Neural Information Processing Systems (NeurIPS), 34:7898–7911, +2021. +Amir R Zamir, Alexander Sax, William Shen, Leonidas J Guibas, Jitendra Malik, and Silvio +Savarese. Taskonomy: Disentangling task transfer learning. In Proceedings of the IEEE +conference on computer vision and pattern recognition, pages 3712–3722, 2018. +Arber Zela, Aaron Klein, Stefan Falkner, and Frank Hutter. +Towards automated deep +learning: Efficient joint neural architecture and hyperparameter search. arXiv preprint +arXiv:1807.06906, 2018. +Arber Zela, Thomas Elsken, Tonmoy Saikia, Yassine Marrakchi, Thomas Brox, and Frank +Hutter. 
Understanding and robustifying differentiable architecture search. In Proceedings +of the International Conference on Learning Representations (ICLR), 2020a. +Arber Zela, Julien Siems, and Frank Hutter. Nas-bench-1shot1: Benchmarking and dissect- +ing one-shot neural architecture search. In Proceedings of the International Conference +on Learning Representations (ICLR), 2020b. +Chris Zhang, Mengye Ren, and Raquel Urtasun. Graph hypernetworks for neural architec- +ture search. In Proceedings of the International Conference on Learning Representations +(ICLR), 2018. +Haokui Zhang, Ying Li, Hao Chen, and Chunhua Shen. Memory-efficient hierarchical neural +architecture search for image denoising. In Proceedings of the IEEE/CVF Conference on +Computer Vision and Pattern Recognition (CVPR), pages 3657–3666, 2020a. +Miao Zhang, Steven W Su, Shirui Pan, Xiaojun Chang, Ehsan M Abbasnejad, and Reza +Haffari. idarts: Differentiable architecture search with stochastic implicit gradients. In +International Conference on Machine Learning, pages 12557–12566. PMLR, 2021a. +65 + +White, Safari, Sukthanker, Ru, Elsken, Zela, Dey and Hutter +Muhan Zhang, Shali Jiang, Zhicheng Cui, Roman Garnett, and Yixin Chen. D-vae: A vari- +ational autoencoder for directed acyclic graphs. In Proceedings of the Annual Conference +on Neural Information Processing Systems (NeurIPS), 2019. +Yuge Zhang, Zejun Lin, Junyang Jiang, Quanlu Zhang, Yujing Wang, Hui Xue, Chen +Zhang, and Yaming Yang. Deeper insights into weight sharing in neural architecture +search. arXiv preprint arXiv:2001.01431, 2020b. +Ziwei Zhang, Xin Wang, and Wenwu Zhu. +Automated machine learning on graphs: A +survey. IJCAI Survey Track, 2021b. arXiv preprint arXiv:2103.00742. +Huan Zhao, Lanning Wei, and Quanming Yao. Simplifying architecture search for graph +neural network. arXiv preprint arXiv:2008.11652, 2020a. +Yiren Zhao, Duo Wang, Xitong Gao, Robert Mullins, Pietro Lio, and Mateja Jamnik. 
Prob- +abilistic dual network architecture search on graphs. arXiv preprint arXiv:2003.09676, +2020b. +Yiyang Zhao, Linnan Wang, Kevin Yang, Tianjun Zhang, Tian Guo, and Yuandong Tian. +Multi-objective optimization by learning space partition. In International Conference on +Learning Representations, 2021a. +Yuekai Zhao, Li Dong, Yelong Shen, Zhihua Zhang, Furu Wei, and Weizhu Chen. Memory- +efficient differentiable transformer architecture search. Findings of the Association for +Computational Linguistics, 2021b. +Dongzhan Zhou, Xinchi Zhou, Wenwei Zhang, Chen Change Loy, Shuai Yi, Xuesen Zhang, +and Wanli Ouyang. Econas: Finding proxies for economical neural architecture search. In +Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition +(CVPR), pages 11396–11404, 2020. +Kaichen Zhou, Lanqing Hong, Shoukang Hu, Fengwei Zhou, Binxin Ru, Jiashi Feng, and +Zhenguo Li. Dha: End-to-end joint optimization of data augmentation policy, hyper- +parameter and architecture. arXiv preprint arXiv:2109.05765, 2021. +Kaixiong Zhou, Qingquan Song, Xiao Huang, and Xia Hu. Auto-gnn: Neural architecture +search of graph neural networks. arXiv preprint arXiv:1909.03184, 2019. +Lucas Zimmer, Marius Lindauer, and Frank Hutter. Auto-pytorch tabular: Multi-fidelity +metalearning for efficient and robust autodl. IEEE Transactions on Pattern Analysis and +Machine Intelligence, 2021. +Barret Zoph and Quoc V. Le. Neural architecture search with reinforcement learning. In +Proceedings of the International Conference on Learning Representations (ICLR), 2017. +Barret Zoph, Vijay Vasudevan, Jonathon Shlens, and Quoc V Le. Learning transferable +architectures for scalable image recognition. In CVPR, 2018. 
diff --git a/adFAT4oBgHgl3EQf4h70/content/tmp_files/load_file.txt b/adFAT4oBgHgl3EQf4h70/content/tmp_files/load_file.txt
new file mode 100644
index 0000000000000000000000000000000000000000..15c35d2bbbd5ac426ed06fedbc4a2ba3ac102c0b
--- /dev/null
+++ b/adFAT4oBgHgl3EQf4h70/content/tmp_files/load_file.txt
@@ -0,0 +1,2944 @@
filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf,len=2943

Neural Architecture Search: Insights from 1000 Papers

Colin White (colin@abacus.ai), Abacus.AI, San Francisco, CA 94105, USA
Mahmoud Safari (safarim@cs.uni-freiburg.de), University of Freiburg, Freiburg im Breisgau, 79110, Germany
Rhea Sukthanker (sukthank@cs.uni-freiburg.de), University of Freiburg, Freiburg im Breisgau, 79110, Germany
Binxin Ru (robinru@sailyond.com), Sailyond Technology & Research Institute of Tsinghua University, Shenzhen, 518071, China
Thomas Elsken (thomas.elsken@de.bosch.com), Bosch Center for Artificial Intelligence, Renningen, 71272, Germany
Arber Zela (zelaa@cs.uni-freiburg.de), University of Freiburg, Freiburg im Breisgau, 79110, Germany
Debadeepta Dey (dedey@microsoft.com), Microsoft Research, Redmond, WA 98052, USA
Frank Hutter (fh@cs.uni-freiburg.de), University of Freiburg & Bosch Center for Artificial Intelligence, Freiburg im Breisgau, 79110, Germany

Abstract

In the past decade, advances in deep learning have resulted in breakthroughs in a variety of areas, including computer vision, natural language understanding, speech recognition, and reinforcement learning. Specialized, high-performing neural architectures are crucial to the success of deep learning in these areas. Neural architecture search (NAS), the process of automating the design of neural architectures for a given task, is an inevitable next step in automating machine learning and has already outpaced the best human-designed architectures on many tasks. In the past few years, research in NAS has been progressing rapidly, with over 1000 papers released since 2020. In this survey, we provide an organized and comprehensive guide to neural architecture search. We give a taxonomy of search spaces, algorithms, and speedup techniques, and we discuss resources such as benchmarks, best practices, other surveys, and open-source libraries.

Keywords: neural architecture search, automated machine learning, deep learning

©2022 Colin White, Mahmoud Safari, Rhea Sukthanker, Binxin Ru, Thomas Elsken, Arber Zela, Debadeepta Dey and Frank Hutter. License: CC-BY 4.0, see https://creativecommons.org/licenses/by/4.0/.

arXiv:2301.08727v1 [cs.LG] 20 Jan 2023

1. Introduction

In the past decade, deep learning has become the dominant paradigm in machine learning for a variety of applications and has been used in a number of breakthroughs across computer vision (He et al., 2016a; Huang et al., 2017; Krizhevsky et al., 2012; Szegedy et al., 2017), natural language understanding (Bahdanau et al., 2015; Hochreiter and Schmidhuber, 1997; Vaswani et al., 2017), speech recognition (Chan et al., 2016; Chorowski et al., 2015; Hannun et al., 2014), and reinforcement learning (Mnih et al., 2015; Silver et al., 2016); it is also becoming a very powerful approach for the analysis of tabular data (Hollmann et al., 2022; Kadra et al., 2021; Somepalli et al., 2021). While many factors played into the rise of deep learning approaches, including deep learning's ability to automate feature extraction, as well as an increase in data and the larger availability of computational resources, the design of high-performing neural architectures has been crucial to the success of deep learning. Recently, just as manual feature engineering was replaced by automated feature learning via deep learning, it is getting more and more common to automate the time-consuming architecture design step via neural architecture search.
Neural architecture search (NAS), the process of automating the design of neural architectures for a given task, has already outpaced the best human-designed architectures on many tasks (Chen et al., 2018; Du et al., 2020; Ghiasi et al., 2019; So et al., 2019; Zoph et al., 2018), notably ImageNet (Hu et al., 2019; Liu et al., 2018a; Real et al., 2019; Zoph et al., 2018), as well as diverse and less-studied datasets (Shen et al., 2022), and in memory- or latency-constrained settings (Benmeziane et al., 2021). Indeed, in the past few years, research in NAS has been progressing rapidly. Although several surveys have been written for NAS and related areas in the past (Elsken et al., 2019b; Wistuba et al., 2019, also see Section 10.2), over 1000 new NAS papers have been released in the last two years, warranting the need for an up-to-date survey on over-arching advances, which we aim to provide with this work.

1.1 A Brief History of NAS and Relation to Other Fields

NAS emerged as a subfield of automated machine learning (AutoML) (Hutter et al., 2019), the process of automating all steps in the machine learning pipeline, from data cleaning, to feature engineering and selection, to hyperparameter and architecture search. NAS has a large overlap with hyperparameter optimization (HPO) (Feurer and Hutter, 2019), which refers to the automated optimization of hyperparameters of the machine learning model. NAS is sometimes referred to as a subset of HPO (Li and Talwalkar, 2019), since NAS can be expressed as optimizing only the hyperparameters that correspond to the architecture, a subset of the entire set of model hyperparameters.
However, the techniques for HPO vs. NAS are often substantially different. A typical HPO problem optimizes a mix of continuous and categorical hyperparameters, such as learning rate, dropout rate, batch size, momentum, activation function, normalization strategy, and so on. Typically, the domains of most hyperparameters are independent (that is, the set of possible values for each hyperparameter is not affected by the possible values of other hyperparameters). Therefore, the typical search space of an HPO problem is the product space of a mix of continuous and categorical dimensions. By contrast, NAS is specifically focused on optimizing the topology of the architecture, which can be much more complex. The topology is typically represented by a directed acyclic graph (DAG), in which the nodes or edges are labeled by neural network operations.
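The DAG encoding just described can be made concrete with a short sketch. The four-node fully connected cell and the operation names below are illustrative placeholders (loosely in the style of cell-based benchmark search spaces), not a definition taken from the survey:

```python
# Illustrative sketch: a cell-based architecture encoded as a DAG whose edges
# are labeled with neural network operations. Operation names are made up.
import random
from itertools import combinations

OPS = ["none", "skip_connect", "conv_1x1", "conv_3x3", "avg_pool_3x3"]

def cell_edges(num_nodes=4):
    """Edges of a fully connected DAG: node j receives input from every i < j."""
    return list(combinations(range(num_nodes), 2))

def sample_architecture(rng, num_nodes=4):
    """An architecture is a labeling of every DAG edge with one operation."""
    return {edge: rng.choice(OPS) for edge in cell_edges(num_nodes)}

rng = random.Random(0)
arch = sample_architecture(rng)
print(sorted(arch))  # [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
```

Because the labels are drawn from a finite set, such an encoding makes the discreteness of the search space explicit: each architecture is one point in the finite set of edge labelings.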
Therefore, the search space of a NAS problem is typically discrete¹ and can be represented directly as a graph, or as a hierarchical structure of conditional hyperparameters. Although standard HPO algorithms can sometimes be adapted for NAS (Izquierdo et al., 2021; Klein et al., 2020; Li et al., 2020c; Mendoza et al., 2016; Zela et al., 2018; Zimmer et al., 2021), it is often much more efficient and effective to use NAS techniques which are tailored to optimize the intricate space of neural architectures. Furthermore, most modern NAS techniques go beyond black-box optimization algorithms by exploiting details specific to NAS, such as sharing weights among similar neural architectures to avoid training each of them from scratch.

[Figure 1: Number of NAS papers by year.]

Historically, NAS has been around since at least the late 1980s (Angeline et al., 1994; Kitano, 1990; Miller et al., 1989; Tenorio and Lee, 1988), but it did not gain widespread attention until the popular paper, NAS with Reinforcement Learning, by Zoph and Le (2017). There has since been a huge interest in NAS, with over 1000 papers released in the last two years (see Figure 1). By now, many different approaches, such as reinforcement learning, evolutionary algorithms, Bayesian optimization, and NAS-specific techniques based on weight sharing, have been explored. Perhaps the most popular recent approaches are one-shot techniques (Bender et al., 2018; Liu et al., 2019c), which often substantially speed up the search process compared to black-box optimization techniques.
In recent years, a large body of follow-up work has focused on making one-shot methods more robust and reliable (Wang et al., 2021; Zela et al., 2020a). In parallel, there has been a large push to make NAS research more reproducible and scientific, starting with the release of NAS-Bench-101 (Ying et al., 2019), the first tabular benchmark for NAS. Furthermore, while the early days of NAS mostly focused on image classification problems such as CIFAR-10 and ImageNet, the field has now expanded to many other domains, such as object detection (Ghiasi et al., 2019; Xu et al., 2019a), semantic segmentation (Chen et al., 2018; Liu et al., 2019a), speech recognition (Mehrotra et al., 2021), partial differential equation solving (Roberts et al., 2021; Shen et al., 2022; Tu et al., 2022a), protein folding (Roberts et al., 2021; Shen et al., 2022), and weather prediction (Tu et al., 2022b), and the field has seen a renewed interest in natural language processing (Chitty-Venkata et al., 2022; Javaheripi et al., 2022).

1.2 Background and Definitions

Prior NAS surveys (e.g. Elsken et al., 2019b; Wistuba et al., 2019) have referred to three dimensions of NAS: search space, search strategy, and performance evaluation strategy (see Figure 2). We define each term below, as this is a useful disambiguation for understanding many NAS methods. However, it is worth noting that the trichotomy cannot be applied to the large sub-area of one-shot methods, because for these methods, the search strategy is coupled with the performance evaluation strategy (Xie et al., 2021).

¹ Notably, some NAS techniques such as DARTS (Liu et al., 2019c) relax the domain to be continuous during the search, but then the hyperparameters are discretized in order to return the final architecture.

[Figure 2: Overview of neural architecture search (Elsken et al., 2019b; Weng, 2020). A search strategy iteratively selects architectures (typically by using an architecture encoding method) from a predefined search space A. The architectures are passed to a performance estimation strategy, which returns the performance estimate to the search strategy. For one-shot methods, the search strategy and performance estimation strategy are inherently coupled.]

A search space is the set of all architectures that the NAS algorithm is allowed to select. Common NAS search spaces range in size from a few thousand to over 10^20. While the search space in principle can be extremely general, incorporating domain knowledge when designing the search space can simplify the search.
However, adding too much domain knowledge introduces human bias, which reduces the chances of a NAS method finding truly novel architectures. Search spaces are discussed in more detail in Section 2.

A search strategy is an optimization technique used to find a high-performing architecture in the search space. There are generally two main categories of search strategies: black-box optimization based techniques (including multi-fidelity techniques) and one-shot techniques. However, there are some NAS methods for which both or neither category applies. Black-box optimization based techniques, such as reinforcement learning, Bayesian optimization, and evolutionary search, are surveyed in Section 3. One-shot methods, including supernet- and hypernet-based methods, are surveyed in Section 4.
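The search-space sizes quoted earlier (a few thousand to over 10^20) follow directly from counting edge labelings: with K candidate operations on E edges there are K^E architectures. The two (K, E) settings below are illustrative choices for the arithmetic, not spaces defined in the survey:

```python
# Back-of-the-envelope count of a discrete NAS search space: labeling each of
# E DAG edges with one of K candidate operations gives K**E architectures.
import math

def space_size(num_ops: int, num_edges: int) -> int:
    """Number of architectures when every edge independently picks one op."""
    return num_ops ** num_edges

small = space_size(5, 6)    # a small cell-based space
large = space_size(8, 28)   # a larger, macro-style space

print(small)                   # 15625 -- on the order of "a few thousand"
print(math.log10(large) > 20)  # True  -- already "over 10**20"
```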
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' A performance estimation strategy is any method used to quickly predict the perfor- mance of neural architectures in order to avoid fully training the architecture.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' For example, while we can run a discrete search strategy by fully training and evaluating architectures chosen throughout the search, using a performance estimation strategy such as learning curve extrapolation can greatly increase the speed of the search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Performance estimation strategies, and more generally speedup techniques, are surveyed in Section 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' The most basic definition of NAS is as follows.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Given a search space A , a dataset D, a training pipeline P, and a time or computation budget t, the goal is to find an architecture a ∈ A within budget t which has the highest possible validation accuracy when trained using dataset D and training pipeline P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' A common method of approaching NAS is to 4 EANeural Architecture Search: Insights from 1000 Papers approximately solve the following expression within time t: min a∈A Lval (w∗(a), a) s.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content='t.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' w∗(a) = argminw Ltrain (w, a) .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Here, Lval and Ltrain denote the validation loss and training loss, respectively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' While this is the core definition of NAS, other variants will be discussed throughout this survey.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' For ex- ample, we may want to return an architecture with constraints on the number of parameters (Section 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content='2), or we may use meta-learning (Section 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content='3) to improve performance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Throughout the rest of this article, we provide a comprehensive guide to the latest NAS techniques and resources.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Sections 2 to 5 are devoted to NAS techniques, surveying search spaces, black-box optimization techniques, one-shot techniques, and speedup techniques, respectively.' 
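This bilevel formulation can be instantiated with the simplest black-box search strategy: random search over a discrete search space. The sketch below is illustrative only; the toy search space, the stand-in `train_and_eval` loss, and the time budget are hypothetical placeholders, not part of the survey.

```python
import random
import time

def random_search(search_space, train_and_eval, budget_seconds):
    """Approximate min_{a in A} L_val(w*(a), a) by sampling architectures
    uniformly at random until the time budget t is exhausted. The call to
    train_and_eval stands in for the inner problem w*(a) = argmin_w L_train(w, a)
    followed by evaluation of the validation loss."""
    best_arch, best_val_loss = None, float("inf")
    deadline = time.monotonic() + budget_seconds
    while time.monotonic() < deadline:
        arch = random.choice(search_space)   # a ~ Uniform(A)
        val_loss = train_and_eval(arch)      # L_val(w*(a), a)
        if val_loss < best_val_loss:
            best_arch, best_val_loss = arch, val_loss
    return best_arch, best_val_loss

# Toy usage: "architectures" are kernel sizes; the quadratic is a stand-in loss.
space = [1, 3, 5, 7]
best, loss = random_search(space, train_and_eval=lambda k: (k - 5) ** 2,
                           budget_seconds=0.1)
```

Any black-box optimizer (reinforcement learning, Bayesian optimization, evolution) slots into the same loop by replacing the uniform sampling line with a smarter proposal rule.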
Sections 6 to 10 cover extensions, applications, and resources, and Section 11 concludes by discussing promising future directions.

2. Search Spaces

The search space is perhaps the most essential ingredient of NAS. While other areas of AutoML overlap with NAS in terms of the optimization methods used, the architectural search space is unique to NAS. Furthermore, the search space is often the first step when setting up NAS. The majority of popular search spaces are task-specific and were heavily inspired by the state-of-the-art manual architectures in their respective application domains. For example, NAS-Bench-101, a popular image classification search space (Ying et al., 2019), was inspired by ResNet (He et al., 2016a) and Inception (Szegedy et al., 2017). In fact, the design of the search space represents an important trade-off between human bias and efficiency of search: if the size of the search space is small and includes many hand-picked decisions, then NAS algorithms will have an easier time finding a high-performing architecture. On the other hand, if the search space is large with more primitive building blocks, a NAS algorithm will need to run longer, but there is the possibility of discovering truly novel architectures (Real et al., 2020).

In this section, we survey the main categories of search spaces for NAS, as summarized in Table 1. We start in Section 2.1 by defining general terminology. In Sections 2.2 and 2.3, we discuss the relatively simple macro and chain-structured search spaces, respectively. In Section 2.4, we describe the most popular type of search space: the cell-based search space. In Section 2.5, we describe hierarchical search spaces. Finally, in Section 2.6, we discuss architecture encodings, an important design decision for NAS algorithms that is inherently tied to the choice of search space.

2.1 Terminology

The search space terminologies differ across the literature, depending on the type of search space. For clarity, we define the main terms here and in Appendix Figure 9.
Operation/primitive denotes the atomic unit of the search space. For nearly all popular search spaces, this is a triplet of a fixed activation, operation, and fixed normalization, such as ReLU-conv 1x1-batchnorm, where the ReLU and BatchNorm are fixed, and the middle operation is a choice among several different operations.

Search space type | Structure | Searchable hyperparameters | Levels of topology
Macro search space, e.g. NASBOT (Kandasamy et al., 2018), EfficientNet (Tan and Le, 2019) | DAG | Operation types, DAG topology, macro hyperparameters | 1
Chain-structured search space, e.g. MobileNetV2 (Sandler et al., 2018) | Chain | Operation types, macro hyperparameters | 1
Cell-based search space, e.g. DARTS (Liu et al., 2019c) | Duplicated cells | Operation type, cell topology | 1
Hierarchical search space, e.g. Hier. Repr. (Liu et al., 2018b), Auto-DeepLab (Liu et al., 2019b) | Varied | Operation type, cell/DAG topology, macro hyperparameters | > 1

Table 1: Summary of the types of NAS search spaces.

Layer is often used in chain-structured or macro search spaces to denote the same thing as an operation or primitive.
However, it sometimes refers to well-known combinations of operations, such as the inverted bottleneck residual (Cai et al., 2019; Sandler et al., 2018; Tan and Le, 2019; Tan et al., 2019).

Block/Module is sometimes used to denote a sequential stack of layers, following the notation used in most chain-structured and macro search spaces (Cai et al., 2020; Tan and Le, 2019; Tan et al., 2019).

Cell is used to denote a directed acyclic graph of operations in cell-based search spaces. The maximum number of operations in a cell is often fixed.

Motif is used to denote a sub-pattern formed from multiple operations in an architecture. Some literature refers to a cell as a higher-level motif and a smaller set of operations as a base-level motif.

2.2 Macro Search Spaces

In the NAS literature, macro search spaces may refer to one of two types. First, they may refer to search spaces which encode the entire architecture in one level (as opposed to cell-based or hierarchical search spaces), which were popular in 2017 and 2018. Second, they may refer to search spaces which focus only on macro-level hyperparameters.
For the former, an entire architecture is represented as a single directed acyclic graph (Baker et al., 2017; Kandasamy et al., 2018; Real et al., 2017; Zoph and Le, 2017). These search spaces typically have a choice of operation at each node in the graph, as well as the choice of DAG topology. For example, the NASBOT CNN search space (Kandasamy et al., 2018) consists of choices of different convolution, pooling, and fully connected layers, with any DAG topology, with a depth of at most 25.

The second type of macro search space (Dong et al., 2021b; Duan et al., 2021; Tan and Le, 2019) focuses on the variation of macro-level hyperparameters, such as where and how much to downsample the spatial resolution throughout the architecture, while keeping the architecture topology and operations fixed.² For example, Tan and Le (2019) propose a CNN search space by varying the network depth, width, and input feature resolution. Compared to other search spaces, macro search spaces have high representation power: their flexible structure allows the possibility of discovering novel architectures. However, their main downside is that they are very slow to search.
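A macro search space of this second type can be written down directly as a grid over macro-level hyperparameters. The sketch below is a hypothetical EfficientNet-style example; the candidate depths, widths, and resolutions are made up for illustration and are not the values used by Tan and Le (2019).

```python
from itertools import product

# Hypothetical candidate values for three macro-level hyperparameters.
depths = [10, 16, 22]          # number of layers
widths = [32, 64, 128]         # channels per layer
resolutions = [128, 192, 224]  # input image side length

# The macro search space is the Cartesian product of the choices;
# the architecture topology and the operations themselves stay fixed.
macro_space = list(product(depths, widths, resolutions))
print(len(macro_space))  # 3 * 3 * 3 = 27 configurations
```

Because the topology is fixed, searching such a space is closer to hyperparameter tuning than to structural search, which is exactly the caveat raised in the footnote.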
In the next two sections, we discuss types of search spaces which have more rigidity, making them faster to search.

2.3 Chain-Structured Search Spaces

Chain-structured search spaces, as the name suggests, have a simple architecture topology: a sequential chain of operation layers. They often take state-of-the-art manual designs, such as ResNet (He et al., 2016b) or MobileNets (Howard et al., 2017), as the backbone.

There are several chain-structured search spaces based on convolutional networks. ProxylessNAS (Cai et al., 2019) starts with the MobileNetV2 (Sandler et al., 2018) architecture and searches over the kernel sizes and expansion ratios in the inverted bottleneck residual layers. XD (Roberts et al., 2021) and DASH (Shen et al., 2022) start with a LeNet (LeCun et al., 1999), ResNet (He et al., 2016a), or WideResNet (Zagoruyko and Komodakis, 2016), and search over an expressive generalization of convolutions based on Kaleidoscope matrices (Dao et al., 2020), or kernel sizes and dilations, respectively.

Chain-structured search spaces are also popular in transformer-based search spaces. For example, the search space from Lightweight Transformer Search (LTS) (Javaheripi et al., 2022) consists of a chain-structured configuration of the popular GPT family of architectures (Brown et al., 2020; Radford et al., 2019) for autoregressive language modeling, with searchable choices for the number of layers, model dimension, adaptive embedding dimension, dimension of the feedforward neural network in a transformer layer, and number of heads in each transformer layer. The search spaces from NAS-BERT (Xu et al., 2021a) and MAGIC (Xu et al., 2022) both consist of a chain-structured search space over the BERT architecture (Devlin et al., 2019), with up to 26 operation choices consisting of variants of multi-head attention, feedforward layers, and convolutions with different kernel sizes.
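A chain-structured space amounts to an independent choice per layer of a fixed-length chain. The following sketch is loosely in the spirit of ProxylessNAS-style per-layer kernel sizes and expansion ratios; the chain length and candidate values are illustrative assumptions, not the actual ProxylessNAS configuration.

```python
import random

NUM_LAYERS = 5             # fixed chain length (illustrative)
KERNEL_SIZES = [3, 5, 7]   # per-layer operation choices
EXPANSION_RATIOS = [3, 6]

def sample_chain(rng):
    """Sample one architecture: a sequential chain of (kernel, expansion) layers.
    Each layer's choice is independent of the others."""
    return [(rng.choice(KERNEL_SIZES), rng.choice(EXPANSION_RATIOS))
            for _ in range(NUM_LAYERS)]

rng = random.Random(0)
arch = sample_chain(rng)

# Because the topology is a fixed chain, the space size is just the product of
# per-layer choices: (|kernels| * |ratios|) ** num_layers = 6**5 = 7776.
space_size = (len(KERNEL_SIZES) * len(EXPANSION_RATIOS)) ** NUM_LAYERS
```

The closed-form size calculation makes concrete why these spaces are easy to design and search, and also why they are unlikely to contain topologically novel architectures.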
Chain-structured search spaces are conceptually simple, making them easy to design and implement. They also often contain strong architectures that can be found relatively quickly. Their main downside is that, due to the simple architecture topology, there is a comparatively lower chance of discovering a truly novel architecture.

2.4 Cell-based Search Spaces

The cell-based search space is perhaps the most popular type of search space in NAS. It is inspired by the fact that state-of-the-art human-designed CNNs often consist of repeated patterns, for example, residual blocks in ResNets (Zoph et al., 2018). Thus, instead of searching for the entire network architecture from scratch, Zoph et al. (2018) proposed to only search over relatively small cells, and stack the cells several times in sequence to form the overall architecture. Formally, the searchable cells make up the micro structure of the search space, while the outer skeleton (the macro structure) is fixed.

² Strictly speaking, since these search spaces have a fixed architecture topology, they may also be called hyperparameter tuning search spaces instead of NAS search spaces.

Figure 3: Illustration of cell-based search spaces. The outer skeleton across cells (left) is fixed, while the cells are searchable. NASNet assigns operations to nodes (middle) while DARTS assigns operations to edges (right).
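The fixed outer skeleton in Figure 3 (left) can be sketched as a list of cell types. The sketch below assumes the three-block layout shown in the figure (N normal cells, then a reduction cell, repeated, with no reduction after the final block), with input and output layers omitted; the repeat count N is a free parameter.

```python
def build_skeleton(n_repeats):
    """Fixed macro structure: three blocks of n_repeats normal cells, with a
    reduction cell between consecutive blocks, as in Figure 3 (left)."""
    skeleton = []
    for block in range(3):
        skeleton.extend(["normal"] * n_repeats)
        if block < 2:  # no reduction cell after the last block
            skeleton.append("reduction")
    return skeleton

skeleton = build_skeleton(2)
```

Only the contents of the normal and reduction cells are searched; this skeleton itself never changes, which is what separates the micro structure from the macro structure.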
The first modern cell-based search space, NASNet, was proposed by Zoph et al. (2018). It comprises two types of cells: the normal cell and the reduction cell. Both types have the same structure, but the initial operations in the reduction cell have a stride of two to halve the input spatial resolution. Each NASNet cell can be represented as a DAG with seventeen non-input nodes (see Figure 3 (middle)). The nodes are arranged in triples of two operation nodes (such as convolution and pooling operations) and a combination node (such as addition or concatenation). The final NASNet architecture is formed by stacking multiple normal and reduction cells in sequence (see Figure 3 (left)). Overall, there are 10^35 unique architectures in the NASNet search space.

Since the NASNet search space, many other cell search spaces have been proposed, all of which share a high-level similarity to NASNet, with the main differences being the fixed macro structure, the layout and constraints in the cells, and the choices of operations within the cells. Two of the most popular cell-based search spaces are NAS-Bench-101 (Ying et al., 2019) and the DARTS search space (Liu et al., 2019c). NAS-Bench-101 is the first tabular benchmark for NAS (discussed in Section 8), and its cells consist of seven nodes, each with three choices of operations; it contains 423,624 unique architectures. The DARTS search space differs more fundamentally: while it also has two searchable cells, the DARTS cells have operation choices on the edges of the graph rather than on the nodes.
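The node-versus-edge distinction can be made concrete by sampling a DARTS-style cell in which each edge carries one operation while the nodes hold latent representations. The sketch below is illustrative: the operation list approximates the usual DARTS candidates, and topology choices (which edges actually exist) are ignored, so the count covers operation assignments only, not the full ~10^18 space.

```python
import random

# Approximate DARTS-style operation candidates (8 choices; illustrative list).
OPS = ["sep_conv_3x3", "sep_conv_5x5", "dil_conv_3x3", "dil_conv_5x5",
       "max_pool_3x3", "avg_pool_3x3", "skip_connect", "zero"]
NUM_EDGES = 8  # edges per cell, as described in the text

def sample_darts_cell(rng):
    """Assign one operation to each edge of the cell; nodes are latent tensors."""
    return {edge: rng.choice(OPS) for edge in range(NUM_EDGES)}

cell = sample_darts_cell(random.Random(0))

# Ignoring topology, operation assignments alone give 8**8 cells per cell type.
ops_only_count = len(OPS) ** NUM_EDGES
```

A NASNet-style cell would invert this representation: the dictionary keys would be nodes rather than edges, with the edges carrying the latent representations.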
In the DARTS cell, the nodes represent latent representations and the edges are operations, whereas in the NASNet cell, the latent representations are on the edges and the nodes are operations. The DARTS cells (see Figure 3 (right)) contain eight edges, each of which has eight choices of operations. Overall, the DARTS space contains a total of 10^18 unique architectures.

Neural Architecture Search: Insights from 1000 Papers

Besides image classification, similar cell designs have also been adopted for language models. For example, NAS-Bench-ASR (Mehrotra et al., 2021) provides a search space of convolutional speech model cells for automatic speech recognition, and there are several LSTM-based search spaces (Klyuchnikov et al., 2022; Liu et al., 2019c; Pham et al., 2018).

The cell-based design significantly reduces the complexity of search spaces, while often resulting in a high-performing final architecture. This has led to cell-based search spaces being the most popular type of search space in recent years. Furthermore, by detaching the depth of an architecture from the search, the cell-based structure is transferable: the optimal cells learned on a small dataset (e.g., CIFAR-10) typically transfer well to a large dataset (e.g., ImageNet) by increasing the number of cells and filters in the overall architecture (Liu et al., 2019c; Zoph et al., 2018).

Despite their popularity, cell-based search spaces face some criticisms. First, while the DARTS search space contains a seemingly large number of 10^18 architectures, the variance in the performance of DARTS architectures is rather small (Wan et al., 2022b; Yang et al., 2020). This small variance may contribute to the fact that sophisticated search strategies can only give marginal gains over the average performance of randomly sampled architectures (Yang et al., 2020). Moreover, there are many ad-hoc design choices and fixed hyperparameters that come with cell-based search spaces whose impact is unclear (Wan et al., 2022b), such as the separation of normal and reduction cells, the number of nodes, and the set of operations. Finally, although limiting the search to a cell significantly reduces the search complexity, this practice reduces the expressiveness of the NAS search space, making it difficult to find highly novel architectures with cell search spaces. In light of this, some recent work advocates for searching for macro connections among cells in addition to the micro cell structure. We discuss this in more detail in the next section.

2.5 Hierarchical Search Spaces

Up to this point, all search spaces described have had a flat representation, in which an architecture is built by defining its hyperparameters, topology, and operation primitives in a single design level.
Specifically, only one level of topology is searched, whether at the cell level or architecture level. On the other hand, hierarchical search spaces involve designing motifs at different levels, where each higher-level motif is often represented as a DAG of lower-level motifs (Chrostoforidis et al., 2021; Liu et al., 2018b; Ru et al., 2020b). A simple class of hierarchical search spaces has two searchable levels, created by adding macro-level architecture hyperparameters to cell or chain-structured search spaces. For example, the MnasNet search space (Tan et al., 2019) uses MobileNetV2 as the backbone. Liu et al. (2019b) designed a two-level search space for semantic image segmentation, and follow-up work extended it to image denoising (Zhang et al., 2020a) and stereo matching (Kumari and Kaur, 2016). Finally, Chen et al. (2021a) propose a two-level transformer-based search space for vision tasks inspired by ViT (Dosovitskiy et al., 2021) and DeiT (Touvron et al., 2021). The search space consists of a number of sequential blocks which can be a combination of local (convolution) or global (self-attention) layers.

Beyond two levels, Liu et al. (2018b) and Wu et al. (2021) propose hierarchies of three levels. Liu et al. (2018b) propose a three-level hierarchy, where each level is a graph made up of components from the previous level (see Figure 4).

White, Safari, Sukthanker, Ru, Elsken, Zela, Dey and Hutter

Wu et al. (2021) propose a different three-level hierarchy, consisting of kernel hyperparameters, cell-based hyperparameters, and macro hyperparameters. The former design is extended beyond three levels in two follow-up works: Ru et al. (2020b) proposed a hierarchical design of four levels, controlled by a set of hyperparameters corresponding to a random graph generator, and Chrostoforidis et al. (2021) introduced a recursive building process to permit a varying number of hierarchical levels as well as a flexible topology among top-level motifs.
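As a rough sketch of this kind of hierarchical construction (the motif names, toy graphs, and primitive set below are our own invention, not taken from any of the cited works), level-k motifs can be stored as edge-labeled DAGs over level-(k-1) motifs and recursively unrolled down to operation primitives:

```python
# Level 1: operation primitives.
primitives = {"conv3x3", "conv1x1", "maxpool"}

# Level 2: motifs as edge-labeled DAGs; edges are (src, dst, lower_level_motif).
level2 = {
    "motif_a": [(0, 1, "conv3x3"), (1, 2, "maxpool")],
    "motif_b": [(0, 1, "conv1x1"), (0, 2, "conv3x3"), (1, 2, "conv1x1")],
}

# Level 3: a single top-level motif whose edges are labeled with level-2 motifs.
level3 = {"cell": [(0, 1, "motif_a"), (1, 2, "motif_b"), (0, 2, "motif_a")]}

def unroll(name):
    """Recursively expand a motif into the list of primitives it uses."""
    if name in primitives:
        return [name]
    edges = level2.get(name) or level3[name]
    out = []
    for _src, _dst, label in edges:
        out += unroll(label)
    return out

flat = sorted(unroll("cell"))
print(flat)
```

Adding a level multiplies the expressiveness of the space while each individual motif graph stays small, which is the complexity-reduction argument made above.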
Figure 4: Illustration of the hierarchical representation proposed in Liu et al. (2018b). Level 1 of the hierarchy consists of choices of operation primitives. Level 2 consists of selecting the topology across small sets of operation primitives. Level 3 consists of selecting the topology across the constructions from level 2.

There are multiple benefits to using hierarchical search spaces. First, hierarchical search spaces tend to be more expressive. Most chain-structured, cell-based, and macro search spaces can be seen as a hierarchical search space with a single searchable level, but having two or more levels allows us to search over more diverse and complex architecture designs. Furthermore, a hierarchical representation of a large architecture is an effective way to reduce the search complexity, which can lead to better search efficiency (Chrostoforidis et al., 2021; Liu et al., 2018b; Ru et al., 2020b). On the other hand, hierarchical search spaces can be more challenging to implement and search through.

2.6 Architecture Encodings

Throughout this section, we have discussed a wide variety of NAS search spaces. As a segue into the next two sections focusing on search strategies, we note that many NAS algorithms and subroutines need to have a succinct representation of each architecture, or encoding, in order to perform operations such as mutating an architecture, quantifying the similarity between two architectures, or predicting the test performance of an architecture. This makes architecture encodings important for several areas of NAS, including discrete NAS algorithms (Section 3) and performance prediction (Section 5.1). In most search spaces, the architecture can be represented compactly as a directed acyclic graph (DAG), where each node or edge represents an operation. For example, architectures in cell-based search spaces and chain-structured search spaces can be represented in this way. However, hierarchical search spaces cannot be represented fully using a DAG, and often need a conditionally-structured encoding, where the number of levels of conditional hyperparameters corresponds to the number of levels of the hierarchy.
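To illustrate the contrast (using a made-up toy configuration with field names of our own), a flat DAG encoding has a fixed shape, whereas a hierarchical architecture is naturally a nested, conditional structure: which hyperparameters exist at a lower level depends on choices made at a higher level.

```python
# Toy hierarchical architecture (our own invented schema): the set of
# block-level hyperparameters depends on the higher-level "type" choice.
arch = {
    "num_blocks": 2,
    "blocks": [
        {"type": "cell",
         "cell": {"nodes": 4, "ops": ["conv3x3", "maxpool", "conv1x1"]}},
        {"type": "chain",
         "chain": {"depth": 3, "op": "conv3x3"}},
    ],
}

def flatten(config, prefix=""):
    """Flatten the conditional structure into (path, value) pairs; the set
    of paths itself varies from architecture to architecture."""
    if isinstance(config, dict):
        items = config.items()
    elif isinstance(config, list):
        items = enumerate(config)
    else:
        return [(prefix, config)]
    out = []
    for key, val in items:
        out += flatten(val, f"{prefix}/{key}")
    return out

for path, value in flatten(arch):
    print(path, value)
```

Note that a second architecture with different "type" choices would produce a different set of paths, which is exactly why a fixed-length vector encoding does not suffice here.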
For cell-based search spaces, one of the most commonly used encodings is the adjacency matrix of the searchable cell(s), along with a list of operations (Ying et al., 2019; Zoph and Le, 2017). In order to have better generalizability, Ning et al. (2020) proposed a graph-based encoding scheme and White et al. (2021a) proposed a path-based encoding scheme, both of which model the flow of propagating information in the network. Finally, another type of encoding for all search spaces is a learned encoding using unsupervised pre-training. In this technique, before we run NAS, we use a set of untrained architectures to learn an architecture encoding, for example, by using an autoencoder (Li et al., 2020b; Lukasik et al., 2021, 2022; Yan et al., 2020; Zhang et al., 2019) or a transformer (Yan et al., 2021a). When choosing an architecture encoding, scalability and generalizability are important traits. Recent work has shown that different NAS subroutines, such as sampling a random architecture, perturbing an architecture, or training a surrogate model, may each perform best with different encodings (White et al., 2020).
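As a toy illustration of two of these encodings (the four-node cell below is our own example, not a cell from any benchmark), both the adjacency-matrix encoding and a path-based encoding can be derived from the same DAG:

```python
# Toy cell (our own example): node 0 is the input, node 3 the output.
# adjacency[i][j] = 1 means there is an edge from node i to node j.
adjacency = [
    [0, 1, 1, 0],
    [0, 0, 1, 1],
    [0, 0, 0, 1],
    [0, 0, 0, 0],
]
ops = ["input", "conv3x3", "maxpool", "output"]  # one operation per node
n = len(ops)

# Adjacency encoding: flattened upper triangle plus the operation list.
adj_encoding = [adjacency[i][j] for i in range(n) for j in range(i + 1, n)]
flat_encoding = adj_encoding + ops

# Path-based encoding: every input->output path, written as the ops it visits.
def paths(node, prefix=()):
    if node == n - 1:
        return [prefix + (ops[node],)]
    found = []
    for nxt in range(n):
        if adjacency[node][nxt]:
            found += paths(nxt, prefix + (ops[node],))
    return found

path_encoding = sorted(paths(0))
print(flat_encoding)
print(path_encoding)
```

The path-based view discards which intermediate node performed each operation and keeps only the sequences of operations that information flows through, which is one way such an encoding can generalize across graphs that are structurally different but functionally similar.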
Furthermore, even small changes to the architecture encoding scheme can have significant effects on the performance of NAS (White et al., 2020; Ying et al., 2019).

3. Black-Box Optimization Techniques

Now that we have covered search spaces, we move to perhaps the most widely studied component of NAS: the search strategy. This is what we run to find an optimal architecture from the search space. Search strategies generally fall into two categories: black-box optimization techniques and one-shot techniques. However, some methods that we discuss include characteristics of both, or neither, of these categories. We first discuss black-box optimization techniques in this section, followed by one-shot techniques in Section 4. For black-box optimization, we discuss baselines (Section 3.1), reinforcement learning (Section 3.2), evolution (Section 3.3), Bayesian optimization (Section 3.4), and Monte-Carlo tree search (Section 3.5).

Black-box optimization techniques are widely used and studied today, due to their strong performance and ease of use. In general, black-box optimization techniques tend to use more computational resources than one-shot techniques, due to training many architectures independently (without sharing weights across architectures like one-shot techniques). However, they also have many advantages over one-shot techniques, such as robustness (and the lack of catastrophic failure modes), simpler optimization of non-differentiable objectives, simpler parallelism, joint optimization with other hyperparameters, and easier adaptation to, e.g., new problems, datasets, or search spaces. They are also often conceptually simpler, making them easier to implement and use.

3.1 Baselines

One of the simplest possible baselines for NAS is random search: architectures are selected randomly from the search space and then fully trained. In the end, the architecture with the best validation accuracy is output. Despite its naïveté, multiple papers have shown that random search performs surprisingly well (Chen et al., 2018; Li and Talwalkar, 2019; Sciuto et al., 2020; Yang et al., 2020). This is especially true for highly engineered search spaces with a high fraction of strong architectures, since random search with a budget of k evaluations will, in expectation, find architectures in the top 100/k% of the search space. However, other works show that random search does not perform well on large, diverse search spaces (Bender et al., 2020; Real et al., 2020).

Algorithm 1: General Reinforcement Learning NAS Algorithm
Input: Search space A, number of iterations T.
Randomly initialize weights θ of the controller architecture.
for t = 1, . . . , T do
    Train architecture a ∼ π(a; θ), randomly sampled from the controller policy π(a; θ).
    Update controller parameters θ by performing a gradient update ∇_θ E_{a∼π(a;θ)}[L_val(a)].
end for
Output: Architecture selected from the trained policy π(a; θ*)
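The top-100/k% rule of thumb follows from the expected minimum of k uniform draws being 1/(k+1) ≈ 1/k. A toy simulation (our own synthetic "search space", in which evaluating an architecture simply returns a uniform random quantile) illustrates both the random search baseline and the bound:

```python
import random

random.seed(0)

def random_search(sample_arch, evaluate, k):
    """Random search baseline: draw k architectures, keep the best one."""
    best_arch, best_score = None, float("inf")
    for _ in range(k):
        arch = sample_arch()
        score = evaluate(arch)  # lower is better, e.g. validation error
        if score < best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

# Synthetic search space: an architecture's score is its quantile, drawn
# uniformly in [0, 1], so a score of 0.01 means "top 1%".
k, trials = 50, 2000
avg_best = sum(
    random_search(lambda: "arch", lambda a: random.random(), k)[1]
    for _ in range(trials)
) / trials

# Expected best of k uniform draws is 1/(k+1), i.e. roughly the top 100/k %.
print(avg_best, 1 / (k + 1))
```

With k = 50 evaluations, the average best quantile found is close to 1/51 ≈ 2%, matching the in-expectation claim above.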
Still, random search is highly recommended as a baseline comparison for new NAS algorithms (Lindauer and Hutter, 2020; Yang et al., 2020), and can be made highly competitive by incorporating weight sharing (Li and Talwalkar, 2019), zero-cost proxies (Abdelfattah et al., 2021), or learning curve extrapolation (Yan et al., 2021b). Multiple papers (Sciuto et al., 2020; Yang et al., 2020) have also proposed a related, simpler baseline: random sampling, the average performance of architectures across the entire search space.

In addition to random search, recent papers showed that local search is a strong baseline for NAS on both small (Ottelander et al., 2021; White et al., 2021b) and large (Siems et al., 2020) search spaces. This is true even for the simplest form of local search: iteratively train and evaluate all of the neighbors of the best architecture found so far, where the neighborhood is typically defined as all architectures which differ by one operation or edge. Local search can be sped up substantially by using network morphisms to warm-start the optimization of neighboring architectures (Elsken et al., 2017).

3.2 Reinforcement Learning

Reinforcement learning (RL) was very prominent in the early days of modern NAS.
Notably, the seminal work by Zoph and Le (2017) used RL on 800 GPUs for two weeks to obtain competitive performance on CIFAR-10 and Penn Treebank; this finding received substantial media attention and started the modern resurgence of NAS. This was followed up by several more reinforcement learning approaches (Pham et al., 2018; Zoph et al., 2018). Most reinforcement learning approaches model the architectures as a sequence of actions generated by a controller (Baker et al., 2017; Zoph and Le, 2017).
The validation accuracy of the sampled architectures after training is used as a reward signal to update the controller in order to maximize its expected value. See Algorithm 1. The controller is usually a recurrent neural network (RNN) (Zoph and Le, 2017; Zoph et al., 2018) that outputs a sequence of components corresponding to an architecture. After each outputted architecture is trained and evaluated, the RNN parameters are updated to maximize the expected validation accuracy of outputted architectures, using REINFORCE (Williams, 1992; Zoph and Le, 2017) or proximal policy optimization (Schulman et al., 2017; Zoph et al., 2018).
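In the simplest setting, this REINFORCE update can be sketched on a toy search space. The per-layer categorical distribution below stands in for the RNN controller, and the hard-coded scoring function stands in for training the sampled architecture and measuring its validation accuracy; both are illustrative assumptions, not the setup of the cited papers.

```python
import math
import random

random.seed(0)

# Toy search space: choose one of 3 operations for each of 4 layers.
NUM_LAYERS, NUM_OPS = 4, 3

def mock_val_accuracy(arch):
    # Hypothetical stand-in for "train the sampled architecture and
    # measure validation accuracy": reward picking operation 2 everywhere.
    return sum(1 for op in arch if op == 2) / NUM_LAYERS

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Controller: one categorical distribution per layer, a simplified
# stand-in for the RNN controller used in the papers above.
theta = [[0.0] * NUM_OPS for _ in range(NUM_LAYERS)]
lr, baseline = 0.2, 0.0

for _ in range(500):
    probs = [softmax(t) for t in theta]
    arch = [random.choices(range(NUM_OPS), weights=p)[0] for p in probs]
    reward = mock_val_accuracy(arch)          # reward = "validation accuracy"
    baseline = 0.9 * baseline + 0.1 * reward  # moving baseline reduces variance
    # REINFORCE: grad_theta log pi(a) = onehot(a) - probs, scaled by advantage.
    for layer, op in enumerate(arch):
        for k in range(NUM_OPS):
            grad = (1.0 if k == op else 0.0) - probs[layer][k]
            theta[layer][k] += lr * (reward - baseline) * grad

final_probs = [softmax(t) for t in theta]  # the policy should now favor operation 2
```

The moving-average baseline is one common variance-reduction choice; REINFORCE works without it, but convergence on even this toy problem is noticeably noisier.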
ENAS (Pham et al., 2018) follows a similar strategy but speeds up the reward estimation using weight sharing; we will discuss this in detail in Section 4. More recently, RL has not been used prominently for NAS, since it has been shown to be outperformed in head-to-head comparisons by evolutionary methods (Real et al., 2019) and Bayesian optimization (Ying et al., 2019), which we will discuss next.

Neural Architecture Search: Insights from 1000 Papers

Algorithm 2 General Evolutionary NAS Algorithm
Input: Search space A, number of iterations T.
Randomly sample and train a population of architectures from the search space A.
for t = 1, . . . , T do
    Sample (based on accuracy) a set of parent architectures from the population.
    Mutate the parent architectures to generate children architectures, and train them.
    Add the children to the population, and kill off the architectures that are the oldest (or have the lowest accuracy) among the current population.
end for
Output: Architecture from the population with the highest validation accuracy.

3.3 Evolutionary and Genetic Algorithms

Decades before the recent NAS resurgence, one of the first works in NAS used an evolutionary algorithm (Miller et al., 1989). In other early works, it was common to use evolutionary algorithms to simultaneously optimize the neural architecture and its weights (Angeline et al., 1994; Floreano et al., 2008; Stanley and Miikkulainen, 2002; Stanley et al., 2009). Today, evolutionary algorithms are still popular for the optimization of architectures due to their flexibility, conceptual simplicity, and competitive results (Real et al., 2019), but the weight optimization is typically left to standard SGD-based approaches.

Evolutionary NAS algorithms work by iteratively updating a population of architectures. In each step, one or more “parent” architectures in the population are sampled (typically based on the validation accuracy of the architectures), combined and mutated to create new “children” architectures.
These architectures are then trained and added to the population, replacing individuals in the population with worse performance. See Algorithm 2. There are many other ways in which evolutionary algorithms differ, including sampling the initial population, selecting the parents, and generating the children. For selecting the initial population, approaches include using trivial architectures (Real et al., 2017), randomly sampling architectures from the search space (Real et al., 2019; Sun et al., 2019), or using hand-picked high-performing architectures (Fujino et al., 2017).
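A minimal sketch of the loop in Algorithm 2 on a toy search space; the fitness function below is a hypothetical stand-in for training an architecture and measuring its validation accuracy, and the best-of-3 parent sampling is one of the selection schemes discussed next.

```python
import random

random.seed(0)

NUM_LAYERS, NUM_OPS = 6, 4
POP_SIZE = 10

def fitness(arch):
    # Hypothetical stand-in for training an architecture and measuring
    # its validation accuracy.
    return sum(1 for op in arch if op == 0) / NUM_LAYERS

def mutate(parent):
    # One-edit mutation: change a single randomly chosen operation.
    child = list(parent)
    child[random.randrange(NUM_LAYERS)] = random.randrange(NUM_OPS)
    return child

# Randomly sample and "train" the initial population.
population = [[random.randrange(NUM_OPS) for _ in range(NUM_LAYERS)]
              for _ in range(POP_SIZE)]
scores = [fitness(a) for a in population]
init_best = max(scores)

for _ in range(100):
    # Accuracy-based parent sampling (best-of-3 tournament).
    idxs = random.sample(range(len(population)), 3)
    parent = population[max(idxs, key=lambda i: scores[i])]
    child = mutate(parent)
    population.append(child)
    scores.append(fitness(child))
    # Kill off the lowest-accuracy architecture to keep the size fixed.
    worst = min(range(len(population)), key=lambda i: scores[i])
    population.pop(worst)
    scores.pop(worst)

best = max(scores)  # best "validation accuracy" found
```

Because the worst individual is removed each step, the best fitness in the population never decreases; swapping the removal rule to "oldest first" gives the aging variant mentioned below.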
Selecting parents from the population makes up one of the core components of the evolutionary algorithm. Perhaps the most popular method to sample parents is tournament selection (Almalaq and Zhang, 2018; Goldberg and Deb, 1991; Real et al., 2017, 2019; Sun et al., 2019, 2020), which selects the best architecture(s) out of a randomly sampled population. Other common approaches include random sampling weighted by fitness (Gibb et al., 2018; Loni et al., 2020; Song et al., 2020; Xie and Yuille, 2017), or choosing the current best architecture(s) as parents (Elsken et al., 2017; Suganuma et al., 2017, 2018). These methods trade off exploration vs. exploiting the best region found so far.

One particularly successful evolutionary algorithm is regularized evolution by Real et al. (2019).
This is a fairly standard evolutionary method, with the novelty of dropping the architecture in each step that has been in the population for longest, even if it has the highest performance. This method outperformed random search and RL in a head-to-head comparison and achieved state-of-the-art performance on ImageNet at the time of its release (Real et al., 2019).

White, Safari, Sukthanker, Ru, Elsken, Zela, Dey and Hutter

Algorithm 3 General Bayesian Optimization NAS Algorithm
Input: Search space A, number of iterations T, acquisition function φ.
Randomly sample and train a population of architectures from the search space A.
for t = 1, . . . , T do
    Train a surrogate model based on the current population.
    Select architecture a_t by maximizing φ(a), based on the surrogate model.
    Train architecture a_t and add it to the current population.
end for
Output: Architecture from the population with the highest validation accuracy.

3.4 Bayesian Optimization

Bayesian optimization (BO; see, e.g., Frazier (2018) or Garnett (2023)) is a powerful method for optimizing expensive functions, and it has seen significant success within NAS. There are two key components to BO: (1) building a probabilistic surrogate to model the unknown objective based on past observations, and (2) defining an acquisition function to balance the exploration and exploitation during the search.
BO is an iterative algorithm which works by selecting the architecture that maximizes the acquisition function (computed using the surrogate), training this architecture, and retraining the surrogate using this new architecture to start the next iteration. See Algorithm 3.

Initial BO-based NAS techniques developed custom distance metrics among architectures, for example, with a specialized architecture kernel (Swersky et al., 2014), an optimal transport-inspired distance function (Kandasamy et al., 2018), or a tree-Wasserstein distance function (Nguyen et al., 2021), allowing a typical Gaussian process (GP) based surrogate with BO. However, using a standard GP surrogate often does not perform well for NAS, as search spaces are typically high-dimensional, non-continuous, and graph-like. To overcome this, one line of work first encodes the architectures, using encodings discussed in Section 2.6, and then trains a model, such as a tree-Parzen estimator (Bergstra et al., 2011; Falkner et al., 2018), random forest (Hutter et al., 2011; Ying et al., 2019), or neural network (Springenberg et al., 2016; White et al., 2021a). Another line of work projects architecture information into a low-dimensional continuous latent space on which conventional BO can be applied effectively (Ru et al., 2020b; Wan et al., 2022a). Another class of surrogate models use graph neural networks (Ma et al., 2019; Ru et al., 2021; Shi et al., 2020) or a graph-based kernel (Ru et al., 2021) to naturally handle the graph representation of architectures without the need for an explicit encoding.

The acquisition function, which trades off exploration and exploitation during the search, is another important design component for BO.
There are various types of acquisition functions used in NAS, such as expected improvement (Jones et al., 1998; Močkus, 1975), upper confidence bound (Cox and John, 1992; Srinivas et al., 2010) and information-theoretic ones (Hennig and Schuler, 2012; Hernández-Lobato et al., 2014; Hvarfner et al., 2022; Wang and Jegelka, 2017).
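As an illustration, expected improvement has a simple closed form when the surrogate's posterior over an architecture's accuracy is Gaussian. The candidate names, posterior means, and standard deviations below are hypothetical surrogate outputs, not values from any cited system.

```python
import math

def expected_improvement(mu, sigma, best_so_far):
    # EI under a Gaussian surrogate posterior N(mu, sigma^2), for
    # maximization of validation accuracy.
    if sigma <= 0.0:
        return max(mu - best_so_far, 0.0)
    z = (mu - best_so_far) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return (mu - best_so_far) * cdf + sigma * pdf

# Candidate architectures as (posterior mean, posterior std) pairs from
# a hypothetical surrogate; select the one maximizing EI.
candidates = {
    "arch_a": (0.92, 0.01),
    "arch_b": (0.90, 0.08),
    "arch_c": (0.85, 0.02),
}
best_observed = 0.93
chosen = max(candidates,
             key=lambda a: expected_improvement(*candidates[a], best_observed))
```

Note how the trade-off plays out: the candidate with the lower mean but larger posterior uncertainty wins here, which is exactly the exploration behavior the acquisition function is designed to provide.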
In NAS, optimizing the acquisition function in each round of BO is challenging due to the non-continuous search spaces, and furthermore, exhaustively evaluating acquisition function values on all possible architectures is computationally non-viable. The most common method for optimizing the acquisition function in NAS is by randomly mutating a small pool of the best architectures queried so far, and of the mutated architectures, selecting the one(s) with the highest acquisition function value (Kandasamy et al., 2018; Ma et al., 2019; Ru et al., 2021; Schneider et al., 2021; Shi et al., 2020; White et al., 2021a). Other methods for optimizing the acquisition function include local search, evolutionary search, and random search (Ru et al., 2021; Shi et al., 2020; Ying et al., 2019).

3.5 Monte Carlo Tree Search

Another class of NAS methods is based on Monte Carlo Tree Search (MCTS). MCTS is the key backbone search algorithm used in AlphaGo (Silver et al., 2016) and AlphaZero (Silver et al., 2017), which achieve super-human performance in Go and chess, respectively. MCTS finds optimal decisions by recursively sampling new decisions (e.g., making a move in chess, or selecting an operation for an architecture in NAS), running stochastic rollouts to obtain the reward (such as winning a chess game, or discovering a high-performing architecture) and then backpropagating to update the weight of the initial decision. Across iterations, the algorithm builds a decision tree to bias the search towards more promising regions by balancing exploration and exploitation in decision making (Browne et al., 2012). MCTS was first applied to NAS by Negrinho and Gordon (2017), who represented the search space and its hyperparameters using a modular language.
This results in a tree-structured, extensible search space, contrary to the fixed search spaces of prior work. Wistuba (2018) introduced a similar method but with two different UCT (Upper Confidence bounds applied to Trees) algorithms. MCTS was first adapted to cell-based search spaces by using a state-action representation (Wang et al., 2018). The authors also improved sample efficiency by using a neural network to estimate the accuracy of sampled architectures, thus enabling a higher number of rollouts. This was followed up by adding further efficiency in pruning the tree by learning partitionings (Wang et al., 2020b), and by application to multi-objective NAS (Zhao et al., 2021a).

4.
One-Shot Techniques

Throughout Section 3, we have seen that the predominant methodology in the early stages of NAS research was to iteratively sample architectures from the search space, train them, and use their performance to guide the search. The main drawback of these methods, when applied without speedup techniques, is their immense computational cost, sometimes on the order of thousands of GPU days (Real et al., 2019; Zoph and Le, 2017), due to the need to train thousands of architectures independently and from scratch.3 As an alternative, one-shot techniques were introduced to avoid training each architecture from scratch, thus circumventing the associated computational burden. As of 2022, they are currently one of the most popular techniques in NAS research. Rather than training each architecture from scratch, one-shot approaches implicitly train all architectures in the search space via a single ("one-shot") training of a hypernetwork or supernetwork.
A hypernetwork is a neural network which generates the weights of other neural networks (Schmidhuber, 1992), while a supernetwork (often used synonymously with "one-shot model" in the literature) is an over-parameterized architecture that contains all possible architectures in the search space as subnetworks (see Figure 5).

Footnote 3: On the other hand, recent developments in performance estimation and speed-up techniques (Section 5) have significantly improved the computational overhead of methods that use black-box optimization as a base, making these methods affordable for many applications and users.

15 White, Safari, Sukthanker, Ru, Elsken, Zela, Dey and Hutter

Figure 5: A supernet comprises all possible architectures in the search space. Each architecture is a subnetwork (subgraph) in the supernet.

The idea of a supernetwork was introduced by Saxena and Verbeek (2016) and was popularized in 2018 by works such as Bender et al. (2018), Pham et al.
(2018), and Liu et al. (2019c). Once a supernet is trained, each architecture from the search space can be evaluated by inheriting its weights from the corresponding subnet within the supernet. The reason for the scalability and efficiency of supernets is that a linear increase in the number of candidate operations only causes a linear increase in computational costs for training, but the number of subnets in the supernet increases exponentially. Therefore, supernets allow us to train an exponential number of architectures for a linear compute cost. A key assumption made in one-shot approaches is that when using the one-shot model to evaluate architectures, the ranking of architectures is relatively consistent with the ranking one would obtain from training them independently. The extent to which this assumption holds true has been substantially debated, with work showing evidence for (Li et al.
, 2021c; Pham et al., 2018; Yu et al., 2020) and against (Pourchot et al., 2020; Sciuto et al., 2020; Zela et al., 2020b; Zhang et al., 2020b) the claim across various settings.
The validity of the assumption is dependent on the search space design, the techniques used to train the one-shot model, and the dataset itself, and it is hard to predict to what degree the assumption will hold in a particular case (Sciuto et al., 2020; Zhang et al., 2020b). While the supernet allows quick evaluation of all architectures, we must still decide on a search strategy, which can be as simple as running a black-box optimization algorithm while the supernet is training (such as in Pham et al. (2018)) or after the supernet is trained (such as in Bender et al. (2018)). We discuss these families of techniques in Section 4.1.
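The scaling claim above (linear growth in training cost, exponential growth in the number of covered subnetworks) can be made concrete with a toy count. The edge and operation numbers below are illustrative, loosely modeled on a DARTS-style cell, not a statement about any specific search space:

```python
# Toy count behind the supernet scaling argument: per-step training cost grows
# roughly linearly in the number of candidate operations K (one more set of
# operation weights per edge), while the number of distinct subnetworks the
# supernet covers grows as K**E. E and K here are illustrative values.

def num_subnets(num_edges, num_ops):
    # Each edge independently selects one of num_ops candidate operations.
    return num_ops ** num_edges

E, K = 14, 8  # loosely modeled on a DARTS-style cell
print(num_subnets(E, K))  # 8**14 = 4398046511104 covered subnetworks
# Doubling K roughly doubles per-step training compute, but multiplies the
# number of covered subnetworks by 2**E:
print(num_subnets(E, 2 * K) // num_subnets(E, K))  # 2**14 = 16384
```

This is the sense in which a supernet trains an exponential number of architectures for a linear compute cost.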
A popular line of work uses gradient descent to optimize the architecture hyperparameters in tandem with training the supernet (such as DARTS (Liu et al., 2019c) and numerous subsequent methods). We discuss this family of techniques in Section 4.2. Finally, in Section 4.3, we discuss hypernetworks. Figure 6 provides a taxonomy of one-shot families.

16 Neural Architecture Search: Insights from 1000 Papers

Figure 6: A taxonomy of the predominant one-shot families. A hypernetwork is a neural net which generates the weights of other neural nets. A supernetwork is an over-parameterized neural net that contains the set of neural nets from the search space as subnetworks, and it can be used with differentiable optimization (including DARTS and follow-ups), or non-differentiable optimization. The figure groups one-shot methods into hypernetwork methods (e.g., SMASH, GHNN) and supernetwork methods, splitting the latter into non-differentiable optimization (e.g., OFA) and differentiable optimization (e.g., DARTS), with DARTS "fixes" addressing operation biases (e.g., DARTS-PT), rank disorder (e.g., SGAS), high memory (e.g., PC-DARTS), and poor generalization (e.g., Robust-DARTS).

4.1 Non-Differentiable Supernet-Based Methods

We start by describing supernet-based methods which do not make use of differentiable optimization. Some methods in this family decouple the supernet training and architecture search: first train a supernet, and then run a black-box optimization algorithm to search for the best architecture. Other methods train a supernet while simultaneously running a non-differentiable search algorithm, such as reinforcement learning, to select subnetworks. Bender et al.
(2018), Li and Talwalkar (2019), and Guo et al. (2020b) propose simple methods to train the supernet and then use a black-box optimization algorithm to extract the best architecture from it. Bender et al. (2018) construct the supernet by creating a separate node corresponding to an operation, in every place where there is a choice of operation; they then train the supernet as if it were a standard neural net, with one exception: nodes are randomly dropped during training, with the level of dropout increasing linearly throughout training. In follow-up work, Li and Talwalkar (2019) and Guo et al. (2020b) take this idea a step further: in each training step, they randomly sample one architecture and only update the weights of the supernet corresponding to that architecture. These techniques better mimic what is happening at evaluation time: only a subnetwork is evaluated rather than the entire supernet.
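The single-path scheme just described is easy to sketch: each step samples one architecture uniformly at random and touches only the weights on its path. The scalar "weights" and the fixed update below are toy placeholders for real operation parameters and gradient steps, and the edge/operation counts are illustrative:

```python
import random

# Sketch of single-path supernet training: sample one subnetwork per step and
# update only the weights on its path. Scalar weights and the fixed "update"
# are toy stand-ins for real operation parameters and gradient steps.
OPS = ["conv3x3", "conv5x5", "skip"]
NUM_EDGES = 3

def train_single_path(steps=1000, lr=0.1, seed=0):
    random.seed(seed)
    # One scalar weight per (edge, operation) pair.
    weights = {(e, op): 0.0 for e in range(NUM_EDGES) for op in OPS}
    update_counts = {key: 0 for key in weights}
    for _ in range(steps):
        # Sample a random subnetwork: one operation per edge.
        arch = [(e, random.choice(OPS)) for e in range(NUM_EDGES)]
        for key in arch:
            weights[key] += lr  # stand-in for -lr * gradient of the loss
            update_counts[key] += 1
    return weights, update_counts
```

Over many steps every (edge, operation) pair receives roughly steps/len(OPS) updates, so all operation weights get trained even though each step costs only as much as one subnetwork; this is also why training-time behavior matches evaluation time, where only a single path is active.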
Furthermore, these procedures use significantly less memory than training all the weights of a supernet. Each method concludes by using the trained supernet to quickly evaluate architectures when conducting random search (Bender et al., 2018; Li and Talwalkar, 2019) or evolutionary search (Guo et al., 2020b). The architecture identified in the end is then trained from scratch. As will be discussed in Section 6.2, deploying neural nets in practice often comes with constraints on latency or memory. While the supernets considered thus far tend to only contain architectures of approximately the same size, Cai et al.
(2020) propose a supernet containing subnetworks of various sizes. This Once-for-all (OFA) approach uses a progressive shrinking strategy which starts by sampling the largest subnetworks and then moves to smaller subnetworks, in order to minimize the co-adaptation among subnetworks and effectively train networks of different sizes "once for all". In a subsequent search phase, architectures are selected based on different constraints on latency and memory. While Cai et al. (2020) use random search for this search phase, Guo et al. (2020b) proposed to improve this approach further by using evolutionary search in the search phase.

Algorithm 4 DARTS - Differentiable Architecture Search
Input: Search space A, number of iterations T, hyperparameter ξ.
Randomly initialize a one-shot model based on A with weights w and architecture hyperparameters α.
for t = 1, . . . , T do
    Perform a gradient update on the architecture weights α according to Equation 1.
    Perform a gradient update on w according to ∇wLtrain(w, α).
end for
Output: Derive the final architecture by taking the argmax of α, across all operation choices, and then retrain this architecture from scratch.

One of the earliest supernet-based approaches is ENAS (Efficient Neural Architecture Search) (Pham et al., 2018), which trains the supernet while running a search algorithm in tandem.
Specifically, the search strategy is similar to the RL controller-based approach from Zoph and Le (2017) (described in Section 3.2) but estimates the performance of each architecture using a supernet. The training procedure alternates between selecting an architecture, evaluating it, and updating the weights of the supernet, and updating the weights of the controller by sampling several architectures to estimate the reward of REINFORCE. While this approach searches for an architecture in tandem with training the supernet, it uses a separate controller network to guide the search. In the next section, we discuss methods which conduct the search via gradient descent using only the supernet.

4.2 Differentiable Supernet-Based Methods

In this section, we review supernet-based NAS methods that employ differentiable optimization techniques.
We first describe the seminal DARTS (Differentiable Architecture Search) approach by Liu et al. (2019c), and then we move to various follow-up works and other differentiable approaches. The DARTS approach uses a continuous relaxation of the discrete architecture search space, which enables the use of gradient descent in order to find a high-performing local optimum significantly faster than black-box optimization methods. It can be applied to any DAG-based search space which has different choices of operations on each edge by using a "zero" operation to simulate the absence of an edge. At the start, each edge (i, j) in the DARTS search space consists of multiple possible candidate operations o, each of which is associated with a continuous hyperparameter α_o^(i,j) ∈ [0, 1]. While the supernet is training, edge (i, j) consists of a mix of all candidate operations, weighted by each α_o^(i,j). The architecture hyperparameters α are optimized jointly with the supernet model weights w via alternating gradient descent.
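A minimal sketch of this continuous relaxation, with toy scalar operations standing in for real convolutions and pooling (the operation names and functions are illustrative assumptions): each edge outputs a weighted sum of its candidate operations, with the weights commonly taken as a softmax over the architecture hyperparameters α, which makes the edge output differentiable in α; discretization then keeps the arg-max operation.

```python
import math

# Sketch of a DARTS-style mixed operation: an edge computes a
# softmax(alpha)-weighted mixture of candidate operations. The candidate
# operations here are toy scalar functions, not real conv/pool layers.
CANDIDATE_OPS = {
    "double": lambda x: 2.0 * x,
    "negate": lambda x: -x,
    "zero":   lambda x: 0.0,   # the "zero" op simulating an absent edge
}

def softmax(alphas):
    exps = [math.exp(a) for a in alphas]
    total = sum(exps)
    return [e / total for e in exps]

def mixed_op(x, alphas):
    # Edge output = sum over ops o of softmax(alpha)_o * o(x);
    # differentiable in alpha, so alpha can be trained by gradient descent.
    probs = softmax(alphas)
    return sum(p * op(x) for p, op in zip(probs, CANDIDATE_OPS.values()))

def discretize(alphas):
    # Discretization step: keep only the op with the largest alpha.
    names = list(CANDIDATE_OPS)
    return names[max(range(len(alphas)), key=lambda i: alphas[i])]
```

With equal α values each operation contributes equally; as training pushes one α up, the edge output approaches that single operation, which is what makes the final arg-max discretization reasonable.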
Figure 7: Differentiable one-shot NAS algorithms have four main steps: randomly initializing the architecture hyperparameters, optimizing the architecture hyperparameters and weights via alternating gradient descent, discretizing the optimized architecture hyperparameters, and re-training the resulting subnetwork from scratch.

In particular, in order to update the architecture weights α via gradient descent, DARTS makes use of the following approximation:

∇αLval(w∗(α), α) ≈ ∇αLval(w − ξ∇wLtrain(w, α), α),  (1)

where Ltrain denotes the training loss, Lval denotes the validation loss, ξ is the learning rate, and w∗(α) denotes the weights that minimize the training loss of the architecture corresponding to α.
In other words, in order to avoid the expensive inner optimization, w∗(α) is approximated by a single step of gradient descent (w − ξ∇wLtrain(w, α)). This is similar to MAML (Finn et al., 2017) and other works (Luketina et al., 2016; Metz et al., 2017). Although this strategy is not guaranteed to converge, Liu et al. (2019c) showed that it works well in practice with a suitable choice of ξ. After the training phase, DARTS obtains a discrete architecture by selecting the operation with the maximum value of α on each edge (the discretization step) and then re-trains it from scratch. Figure 7 provides an illustration of DARTS.
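The one-step approximation in Equation 1 can be checked on a toy one-dimensional bilevel problem. Assume Ltrain(w, α) = (w − α)² and Lval(w) = (w − 1)² (both hypothetical choices for illustration), so the inner optimum is w∗(α) = α and the bilevel optimum is α = 1. In this toy case the gradient of Lval at the lookahead weights w − ξ∇wLtrain can be written in closed form via the chain rule:

```python
# Toy 1-D check of the DARTS one-step approximation (Equation 1).
# Assumed toy losses: Ltrain(w, alpha) = (w - alpha)^2, Lval(w) = (w - 1)^2,
# so w*(alpha) = alpha and the bilevel optimum is alpha = 1.

def dLtrain_dw(w, alpha):
    return 2.0 * (w - alpha)

def dLval_dw(w):
    return 2.0 * (w - 1.0)

def darts_step(w, alpha, xi=0.1, eta=0.1):
    # Architecture update (Equation 1): evaluate grad Lval at the
    # one-step-lookahead weights w' = w - xi * dLtrain/dw.
    w_lookahead = w - xi * dLtrain_dw(w, alpha)
    # For this toy problem dw'/dalpha = 2 * xi, so by the chain rule:
    grad_alpha = dLval_dw(w_lookahead) * (2.0 * xi)
    alpha = alpha - eta * grad_alpha
    # Weight update: ordinary gradient step on Ltrain (as in Algorithm 4).
    w = w - eta * dLtrain_dw(w, alpha)
    return w, alpha

def run(steps=500, w=0.0, alpha=0.0):
    for _ in range(steps):
        w, alpha = darts_step(w, alpha)
    return w, alpha
```

Alternating these updates from (w, α) = (0, 0) drives both variables toward the bilevel optimum (1, 1), mirroring the alternating gradient descent of Algorithm 4 while avoiding the expensive inner optimization.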
DARTS gained significant attention in the AutoML community due to its simplicity, its novelty, and the release of easy-to-use code. Furthermore, the original technique left room for improvement across various axes. Consequently, there has been a large body of follow-up work seeking to improve various parts of the DARTS approach. In the rest of the section, we cover the main categories of improvements (see Figure 6).

4.2.1 Rank Disorder

As mentioned at the start of Section 4, nearly all one-shot methods make a key assumption: the ranking of architectures evaluated with the supernet is relatively consistent with the ranking one would obtain from training them independently; when this assumption is not met, it is known as rank disorder (Li et al.
, 2021c; Sciuto et al., 2020). While there is considerable debate both for (Li et al., 2021c; Pham et al., 2018; Yu et al., 2020) and against (Pourchot et al., 2020; Sciuto et al., 2020; Zela et al.
, 2020b; Zhang et al., 2020b) the assumption, many works have attempted to reduce the problem of rank disorder. Several methods propose to gradually increase the network depth, or to gradually prune the set of operation candidates during training, showing that this causes the weights to better adapt to the most-promising operation choices. Progressive-DARTS (Chen et al., 2019a) gradually increases the network depth while simultaneously pruning the operations with the smallest weights. SGAS (Li et al., 2020a) chooses operations throughout the training procedure, based on two criteria: selection certainty (calculated via the entropy of the operation distribution) and selection stability (calculated via the movement of the operation distribution). Finally, XNAS (Nayman et al.
, 2019) makes use of the exponentiated gradient algorithm (Kivinen and Warmuth, 1997), which dynamically prunes inferior operation choices during the search while also allowing the recovery of “late bloomers”, i.e., operation choices which only become accurate later in the training procedure.

4.2.2 Operation Biases

Several works show that differentiable NAS techniques tend to favor skip connections over other operation choices (Liang et al., 2019; Wang et al., 2021; Zela et al.
, 2020a), which might be caused by the supernet using skip connections to over-compensate for vanishing gradients (Chu et al., 2021). Various methods have been proposed to fix this bias. DARTS+ (Liang et al., 2019) proposes an early stopping method based on the stability of the ranking of the architecture weights, while DARTS− (Chu et al., 2021) separates the skip connection weights from other operation weights via auxiliary edges. FairDARTS (Chu et al., 2020) sets all operation weights independent of all others, and then pushes these architecture weights toward zero or one in the loss function. Taking a different approach, Wang et al.
(2021) show that it is okay for skip connections to have higher weights, as long as we do not select the final architecture based on these weights. Instead, after training the supernet, their algorithm, DARTS-PT, selects each operation whose removal has the largest decrease of accuracy in the supernet. Rather than fixing the biases among a small hand-picked set of operations, Shen et al. (2022) instead use a search space that significantly reduces human bias: they fix a standard convolutional network and search for the kernel sizes and dilations of its operations. This simple approach is broadly applicable across computer vision, PDE solving, protein folding, and other tasks. In order to make one-shot training more efficient, their algorithm, DASH, computes the mixture-of-operations using the Fourier diagonalization of convolution.

4.2.3 Poor Test Generalization

Several works seek to improve the generalization performance of DARTS through various means. Zela et al. (2020a) and Chen and Hsieh (2020) show that DARTS often converges to sharp local minima in the loss landscape (high validation loss curvature in the architecture hyperparameter space), which, after running the discretization step, can cause the algorithm to return an architecture with poor test generalization. Robust-DARTS (Zela et al., 2020a) fixes this issue by making the training more robust through data augmentation, L2 regularization of the inner objective Ltrain, and early stopping. Similarly, rather than optimizing the training loss, Smooth-DARTS (Chen and Hsieh, 2020) optimizes the expected or worst-case training loss over a local neighborhood of the architecture hyperparameters. Taking a different approach, GAEA (Li et al.
, 2021c), XD (Roberts et al., 2021), and StacNAS (Guilin et al., 2019) all use a single-level optimization rather than the typical bi-level optimization, by treating the architecture hyperparameters as normal architecture weights, showing this leads to better generalization. Furthermore, GAEA re-parameterizes the architecture parameters over the simplex and updates them using the exponentiated gradient algorithm (similar to XNAS from Section 4.2.1), showing this is better-suited to the underlying geometry of the architecture search space. Finally, Amended-DARTS (Bi et al., 2019) and iDARTS (Zhang et al.
, 2021a) both take the approach of deriving more accurate approximations of the gradients of α (Equation 1), showing that this leads to a more stable optimization and better generalization.

4.2.4 High Memory Consumption

The memory required to train a supernet is much higher than for a normal neural net: it scales linearly with the size of the set of candidate operations. Recall from Section 4.1 that multiple works reduced this memory by, in each training step, masking out all operations except for the ones corresponding to one or a few subnetworks. Various works have proposed techniques to mask out operations for differentiable NAS as well, i.e., while simultaneously optimizing the architecture hyperparameters.
Cai et al. (2019) proposed ProxylessNAS, which solves this problem by modifying the BinaryConnect (Courbariaux et al., 2015) discretization method: in each training step, for each operation choice, all are masked out except one operation that is randomly chosen with probability proportional to its current value of α. Cai et al. (2019) show that this procedure converges to a single high-performing subnetwork. GDAS (Dong and Yang, 2019) and DSNAS (Hu et al., 2020; Xie et al., 2018) use a Gumbel-softmax distribution over a one-hot encoding of the operation choices, which is a different way to allow sampling single operations in each training step while maintaining differentiability.
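A minimal sketch of the Gumbel-softmax sampling idea, with made-up logits: each step activates a single operation, while the sampling frequencies empirically follow softmax(α). In a real implementation, gradients flow through the soft relaxation; here only the sampling behavior is illustrated:

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax_sample(alpha, tau=1.0):
    # Perturb the architecture logits with Gumbel noise, then softmax.
    # The argmax selects a single sampled op for this training step;
    # the soft vector is what keeps the relaxation differentiable.
    g = -np.log(-np.log(rng.uniform(size=alpha.shape)))
    y_soft = np.exp((alpha + g) / tau)
    y_soft /= y_soft.sum()
    y_hard = np.zeros_like(y_soft)
    y_hard[y_soft.argmax()] = 1.0   # only one op active this step
    return y_hard, y_soft

alpha = np.array([2.0, 0.5, 0.0])   # made-up logits for three candidate ops
counts = np.zeros(3)
for _ in range(2000):
    hard, _ = gumbel_softmax_sample(alpha)
    counts += hard
freq = counts / counts.sum()        # empirically close to softmax(alpha)
```

Because exactly one operation is active per step, only that operation's activations need to be stored, which is the source of the memory savings.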
PC-DARTS (Xu et al., 2019b) proposes a relatively simpler approach: at each training step, and for each edge in the DAG, a subset of channels is sampled and sent through the possible operations, while the remaining channels are directly passed on to the output. While reducing memory due to training fewer channels, this also acts as a regularizer. DrNAS (Chen et al., 2021f) also reduces memory consumption by progressively increasing the number of channels that are forwarded to the mixed operations, and progressively pruning operation choices, modeled by a Dirichlet distribution.

4.3 Hypernetworks

A hypernetwork is a neural network which generates the weights of other neural networks. Hypernetworks were first considered by Schmidhuber (1992, 1993), and the first modern application was by Ha et al.
(2017), who used them to obtain better weights for a fixed LSTM architecture. Hypernetworks have since been used for a variety of tasks, including HPO (Mackay et al., 2019; Navon et al., 2021), calibrating model uncertainty (Krueger et al., 2017), and NAS (Brock et al., 2018; Zhang et al., 2018). The first work to use hypernetworks for NAS (and among the first to use a one-shot model for NAS) was SMASH (one-Shot Model Architecture Search through Hypernetworks) (Brock et al.
, 2018). SMASH consists of two phases: first, train a hypernetwork to output weights for any architecture in the search space. Next, randomly sample a large set of architectures, generate their weights using the hypernetwork, and output the one with the best validation accuracy. The hypernetwork, a convolutional neural net, takes as input an architecture encoding and outputs a set of weights for that architecture, and is trained by randomly sampling an architecture, generating its weights, computing its training error, and then backpropagating through the entire system (including the hypernetwork weights). Another hypernet-based NAS algorithm is GHN (Graph Hypernetworks) (Zhang et al., 2018). The main difference between SMASH and GHN is the architecture encoding and the architecture of the hypernetwork. Specifically, the GHN hypernetwork is a mix between a graph neural network and a standard hypernetwork.
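The two-phase structure of SMASH can be sketched as follows. The "hypernetwork" here is just a fixed random linear map and the validation score is a stand-in, so this illustrates only the control flow (sample architectures, generate their weights, rank by validation performance), not the actual system:

```python
import numpy as np

rng = np.random.default_rng(0)

ENC_DIM, W_DIM = 8, 4
H = rng.normal(size=(W_DIM, ENC_DIM))        # hypernetwork parameters
                                             # (real SMASH trains these)

def generate_weights(encoding):
    # Hypernetwork forward pass: architecture encoding -> weights.
    return H @ encoding

def val_score(encoding, weights):
    # Stand-in for validation accuracy of the decoded architecture.
    x_val = np.ones(W_DIM)
    return -abs(weights @ x_val - 1.0)       # higher is better

# Phase 2: sample many architectures and rank them using
# hypernetwork-generated weights, without training any of them.
candidates = [rng.integers(0, 2, ENC_DIM).astype(float) for _ in range(50)]
best = max(candidates, key=lambda e: val_score(e, generate_weights(e)))
```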
It takes as input the computational graph of an architecture a and uses message-passing operations, which are typical in GNNs, to output the weights of a. The training of the hypernetwork, and the final NAS algorithm, are both the same as in SMASH.

5. Speedup Techniques

In this section, we cover general speedup techniques for NAS algorithms, including performance prediction (Section 5.1), multi-fidelity methods (Section 5.2), meta-learning approaches (Section 5.3), and weight inheritance (Section 5.4).

5.1 Performance Prediction

A large body of work has been devoted to predicting the performance of neural networks before they are fully trained.
Such techniques have the potential to greatly speed up the runtime of NAS algorithms, since they remove the need to fully train each architecture under consideration. These speedup techniques can improve nearly all types of NAS algorithms, from black-box optimization (Ru et al., 2020a; White et al., 2021c) to one-shot NAS (Xiang et al., 2021). In this section, we discuss the performance prediction techniques themselves, while in Section 5.2, we discuss methods of incorporating them into NAS algorithms. Formally, given a search space A and architecture a ∈ A, denote the final validation accuracy obtained with a fixed training pipeline as f(a).
A performance predictor f′ is defined as any function which predicts the accuracy or relative accuracy of architectures, without fully training them. In other words, evaluating f′(a) takes less time than evaluating f(a), and {f′(a) | a ∈ A} ideally has high correlation or rank correlation with {f(a) | a ∈ A}. In the rest of this section, we give an overview of different types of performance predictors, including learning curve extrapolation (Section 5.1.1), zero-cost proxies (Section 5.1.2), and other methods (Section 5.1.3). Note that surrogate models (Section 3.4) and one-shot models (Section 4) can also be seen as types of performance predictors.

5.1.1 Learning Curve Extrapolation

Figure 8: Illustration of the main types of performance predictors: extrapolating the validation accuracy learning curve via a parametric model (left), assessing the generalizability of an architecture with a single forward pass of a single minibatch of data (middle), and training the architecture on a subset of the data (right).

Learning curve extrapolation methods seek to predict the final performance of a given architecture after partially training it, by extrapolating from its so-called partial learning curve (the series of validation accuracies at all epochs so far).
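As a minimal illustration (not one of the richer parametric families used in practice, such as Weibull or log-power curves), one can fit a simple model acc(t) = a − b/t to the first few epochs by least squares and extrapolate to the final epoch:

```python
import numpy as np

# Synthetic noise-free learning curve generated from the same model
# family we fit, so the extrapolation recovers the final accuracy
# exactly; real curves are noisy and need richer models.
epochs = np.arange(1, 51)
true_a, true_b = 0.92, 0.30
accs = true_a - true_b / epochs

seen = 10                                 # only the partial curve is observed
t = epochs[:seen]
X = np.column_stack([np.ones(seen), -1.0 / t])   # acc = a*1 + b*(-1/t)
a_hat, b_hat = np.linalg.lstsq(X, accs[:seen], rcond=None)[0]

pred_final = a_hat - b_hat / epochs[-1]   # extrapolated accuracy at epoch 50
```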
This can, e.g., be accomplished by fitting the partial learning curve to a parametric model (Domhan et al., 2015) (see Figure 8 (left)). Learning curve extrapolation methods can also be used together with a surrogate model: in that case, the model takes as input both an encoding of a and a partial learning curve of a, and outputs a prediction f′(a) (Baker et al., 2018; Klein et al., 2017). Learning curve extrapolation methods can be used to speed up black-box NAS algorithms (Domhan et al., 2015; Ru et al., 2020a; Yan et al., 2021b) or in conjunction with multi-fidelity algorithms such as Hyperband or BOHB (described in Section 5.2).

5.1.2 Zero-Cost Proxies

Zero-cost proxies are a recently developed family of performance prediction techniques. The idea is to run a very fast computation (such as a single forward and backward pass of a single minibatch of data) over a set of architectures that assigns a score to each architecture, with the hope that the scores are correlated with the final accuracies (Mellor et al., 2021).
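One of the simplest proxies covered in this section, the synflow L1 path-norm, scores a network by summing the products of absolute weight values along every input-to-output path. For a plain stack of linear layers this reduces to propagating an all-ones input through the absolute weights; a minimal sketch of that special case (illustrative, with names of our own choosing):

```python
import numpy as np

def path_norm_score(weights):
    """Sum of |weight| products over all input-output paths of a linear layer stack."""
    x = np.ones(weights[0].shape[0])
    for W in weights:
        x = x @ np.abs(W)  # propagate through the absolute weights
    return float(x.sum())

# A 2-2-1 network with all weights equal to 1 has 2 * 2 = 4 paths, each of product 1.
print(path_norm_score([np.ones((2, 2)), np.ones((2, 1))]))  # → 4.0
```

Note that this score depends only on the initialized weights, never on data, which is what makes synflow a data-independent proxy.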
These techniques get their “zero-cost” name since the overall time to score each architecture is negligible (often less than 5 seconds) compared to most other performance prediction techniques (Abdelfattah et al., 2021). While most zero-cost proxies compute architecture scores from a (single) minibatch of data, some are data-independent, computing the score solely from the initialized weights or number of parameters of the neural network. Zero-cost proxies were first introduced by Mellor et al. (2021), who estimated the relative performance of neural networks based on how well different linear regions of the network map are separated (see Figure 8 (middle)). Since the initial technique, several new zero-cost proxies have been introduced. Abdelfattah et al. (2021) made a connection to the pruning-at-initialization literature (Lee et al., 2019b; Tanaka et al., 2020; Theis et al., 2018; Wang et al., 2020a) and used this connection to introduce five zero-cost proxies. Their best-performing method, synflow (Tanaka et al., 2020), is a data-independent method which computes the L1 path-norm of the network: it computes the sum of the product of all initialized weights in each path connecting the input to the output. Since then, two other data-independent methods have been introduced, based on a series of synthetic proxy tasks to test scale invariances and spatial information (Li et al., 2021d), and based on approximating the neural network as a piecewise linear function (Lin et al., 2021). Other data-dependent methods make use of the neural tangent kernel (NTK) (Jacot et al., 2018), based on approximating its trace norm (Shu et al., 2021) or approximating its spectrum (Chen et al., 2021e).

Although zero-cost proxies have received significant attention since they were first introduced, recent work has shown that simple baselines such as “number of parameters” and “FLOPs” are surprisingly competitive with all leading techniques. The main downsides of using zero-cost proxies are that they may be unreliable, especially on larger search spaces (Chen et al., 2022; Ning et al., 2021; White et al., 2022). They also may have biases, such as preferring larger models (Ning et al., 2021) or wide channels (Chen et al., 2022), although the biases can be removed (Krishnakumar et al., 2022). On the other hand, recent work encourages the viewpoint that zero-cost proxies are “weak learners” which can be combined with other techniques, including other zero-cost proxies, to improve performance (Krishnakumar et al., 2022; White et al., 2022). Initial work shows that zero-cost proxies can be successfully added to both Bayesian optimization-based NAS (Shen et al., 2021; White et al., 2021c) and one-shot NAS (Xiang et al., 2021).

5.1.3 Other Low-Fidelity Predictions

Besides training for fewer epochs, other works give a low-fidelity estimate of the final accuracy by training on a subset of the training data (or a smaller, synthetically generated dataset).
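As a concrete illustration of the data-subset idea, the low-fidelity evaluation can train on a small class-balanced random sample of the training indices. A sketch of the selection step (the helper name and the balancing choice are ours, not taken from the cited works):

```python
import numpy as np

def balanced_subset(labels, fraction, seed=0):
    """Randomly pick `fraction` of the indices of each class (class-balanced sampling)."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    chosen = []
    for cls in np.unique(labels):
        idx = np.flatnonzero(labels == cls)
        k = max(1, int(round(fraction * len(idx))))
        chosen.append(rng.choice(idx, size=k, replace=False))
    return np.sort(np.concatenate(chosen))

labels = [0] * 6 + [1] * 4
subset = balanced_subset(labels, 0.5)  # 3 indices from class 0, 2 from class 1
print(len(subset))  # → 5
```

More sophisticated strategies replace the random draw with an informativeness criterion, as in the entropy-based and core-set methods discussed next.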
This is visualized in Figure 8 (right). Multiple works have studied different subset selection algorithms, such as random sampling, entropy-based sampling (Na et al., 2021), clustering via core-sets (Shim et al., 2021), facility location (Prasad et al., 2022), and k-center (Na et al., 2021). Prasad et al. (2022) introduce adaptive subset selection to NAS, in which the subset is updated throughout training in order to maximize validation accuracy. Such et al. (2020) introduce generative teaching networks, which use a small set of synthetic data to train neural networks much faster than using the original real training data. The synthetic data is created using a data-generating network to match the accuracy of a network trained on real data. A related method is synthetic petri dish (Rawal et al., 2020), which evaluates architecture motifs by placing them into a small neural network and then training them using a small synthetic dataset. This latter method also explicitly optimizes the correlation between architecture rankings with the approximation and the full training.

5.2 Multi-Fidelity Algorithms

While the previous section was devoted to methods of predicting the performance of neural networks, now we cover algorithms that use these methods to run NAS efficiently. Formally, the objective function f : X −→ R, which is typically expensive to fully evaluate, can be cheaply approximated by a lower-fidelity version ˆf(·, b) of f(·), parameterized by the fidelity parameter b. When b = bmax, we retrieve the true function f(·) = ˆf(·, bmax). This is a generalization of the definition from Section 5.1. The fidelity parameter can denote the number of training epochs or the training data subset size, and it can make use of performance prediction techniques from the previous section. One can even use multiple fidelity parameters at a time (Kandasamy et al., 2017; Zhou et al., 2020). Next, we describe the optimization algorithms that exploit access to multi-fidelity function estimates ˆf(·, b).

SuccessiveHalving (SH) (Jamieson and Talwalkar, 2016) is one of the simplest multi-fidelity algorithms. It starts by training a large number of architectures, slowly killing off more and more architectures that are not promising based on lower-fidelity evaluations, until only the most promising architectures are evaluated at the highest fidelity. The fidelity thresholds and the number of architectures to promote to higher fidelities are controlled by a hyperparameter. A popular improvement to SH is Hyperband (HB) (Li et al., 2018), a multi-armed bandit strategy that repeatedly calls SH as a subroutine, using different values of the minimum budget for each call. Therefore, HB hedges its bets against any single choice of the minimum budget. While SH and HB are purely based on (smart) random search, recent works have combined HB with both Bayesian optimization and evolution.
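The SH loop described above can be sketched in a few lines; here `evaluate(candidate, budget)` stands in for any low-fidelity estimate ˆf(·, b), and all names are ours, for illustration only:

```python
def successive_halving(candidates, evaluate, min_budget=1, eta=3, num_rounds=3):
    """Keep the top 1/eta of the pool each round while multiplying the budget by eta."""
    pool = list(candidates)
    budget = min_budget
    for _ in range(num_rounds):
        scores = {c: evaluate(c, budget) for c in pool}    # low-fidelity evaluations
        pool = sorted(pool, key=scores.get, reverse=True)  # rank by estimated quality
        pool = pool[: max(1, len(pool) // eta)]            # promote the best fraction
        budget *= eta                                      # survivors get a larger budget
    return pool[0]

# Toy objective: an architecture's quality equals its id, independent of budget.
best = successive_halving(range(9), lambda c, b: c)
print(best)  # → 8
```

The weakness SH inherits from this schedule is its sensitivity to `min_budget`: if low-fidelity scores rank architectures poorly, good candidates are killed early, which is exactly the failure mode HB hedges against by varying the minimum budget across calls.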
Bayesian optimization hyperband (BOHB) (Falkner et al., 2018; Lindauer et al., 2022) works similarly to HB in its first iteration, and on later iterations it fits a probabilistic surrogate model for each fidelity in order to make informed sampling decisions. Similarly, DEHB (Mallik and Awad, 2021) combines differential evolution (Storn and Price, 1997) with HB, significantly improving the later iterations of HB. ASHA (Li et al., 2020c) and ABOHB (Klein et al., 2020) improve SH and BOHB further, respectively, by making use of massively parallel asynchronous computation and early stopping strategies. Finally, EcoNAS (Zhou et al., 2020) proposes a hierarchical evolutionary search method that partitions the search space into subsets and allocates increasing fidelities to the most promising architectures in each subset.

5.3 Meta-Learning

A majority of NAS approaches consider solving a single task from scratch, ignoring previously explored solutions. However, this is in contrast to what both researchers and practitioners typically do. Often, architectures are transferred across datasets and even across tasks, and on a new task, researchers typically start with a state-of-the-art solution. So, one might ask: why run NAS from scratch rather than re-using information from, e.g., previous experiments? This question naturally leads to the idea of meta-learning or learning to learn (Hochreiter et al., 2001; Schmidhuber, 1987; Thrun and Pratt, 1998), which aims at improving a learning algorithm by leveraging information from past, related experiments (Hospedales et al., 2021; Vanschoren, 2019).

Wong et al. (2018) and Zimmer et al. (2021) employ meta-learning strategies in a more general automated machine learning setting. Since the focus is not on NAS, they both solely consider a small set of candidate architectures. In Wong et al. (2018), tasks are encoded in a similar fashion as word embeddings in NLP (Mikolov et al., 2013). In contrast, Zimmer et al. (2021) simply warm-start their search based on previously well-performing configurations.

Lian et al. (2020) and Elsken et al. (2020) focus on few-shot learning: the problem of learning a new task with just a few data points for training. The authors extend gradient-based, model-agnostic meta-learning approaches such as MAML (Finn et al., 2017) and REPTILE (Nichol et al., 2018) to not only meta-learning an initial set of weights for a fixed neural network architecture, but also to the architecture itself, by incorporating a differentiable method such as DARTS (Liu et al., 2019c) into the meta-learning algorithm.

The work by Lee et al. (2021) is neither restricted to few-shot learning nor to choosing architectures from a small set of candidates. Rather, they employ typical NAS search spaces such as the ones discussed in Section 2. The authors propose a novel set encoder to improve upon deep sets (Zaheer et al., 2017) and set transformers (Lee et al., 2019a). A graph neural network-based decoder is employed to generate neural architectures given a set encoding. Additionally, a graph neural network is employed to encode generated architectures. The architecture encoding in combination with the set encoding is then used to meta-learn a surrogate model to predict the performance of the (architecture, dataset) tuple. Shala et al. (2022) extend the work by Lee et al. (2021) by employing the dataset and architecture encodings within a Bayesian optimization framework, resulting in a probabilistic surrogate predictor. This further enables adapting the surrogate to datapoints seen at test time.

5.4 Weight Inheritance and Network Morphisms

While black-box optimization-based NAS algorithms train each architecture from scratch, and one-shot methods train all architectures with the same set of weights, a line of work proposes an in-between solution: reuse the weights of trained architectures on similar untrained architectures.
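Weight reuse can even be made exact by a function-preserving edit: the changed architecture is initialized so its outputs are unchanged. The classic example is widening a hidden layer by duplicating a unit and splitting its outgoing weights in half (a Net2Net-style operation; this sketch and its names are illustrative, not code from the cited works):

```python
import numpy as np

def widen_hidden_layer(W_in, W_out, unit):
    """Duplicate hidden `unit`, splitting its outgoing weights so the function is preserved."""
    W_in_new = np.hstack([W_in, W_in[:, unit:unit + 1]])  # clone the incoming weights
    half = W_out[unit] / 2.0
    W_out_new = np.vstack([W_out, half])                  # the clone carries half the output...
    W_out_new[unit] = half                                # ...and the original the other half
    return W_in_new, W_out_new

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 3))
W1, W2 = rng.normal(size=(3, 4)), rng.normal(size=(4, 2))
y_before = np.maximum(x @ W1, 0) @ W2      # one-hidden-layer ReLU network
W1w, W2w = widen_hidden_layer(W1, W2, unit=1)
y_after = np.maximum(x @ W1w, 0) @ W2w     # widened network, identical outputs
print(np.allclose(y_before, y_after))  # → True
```

The cloned unit receives the same pre-activation as the original, so splitting its outgoing weights leaves every output unchanged, which is exactly the network-morphism property discussed next.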
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' This idea is especially helpful for black-box optimization approaches that apply only small, sequential changes to architectures when generating a new candidate architecture.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' For example, Real et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' (2017) propose to copy the weights of all layers that have not been affected by applied mutations from the parent architecture to its offspring.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' This idea has also been extended by the concept of network morphisms (Chen et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=', 2016;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Wei et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=', 2016).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Network morphisms are operators acting on the space of neural network architectures.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' They change the architecture of a neural network without changing the function they represent, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content='e.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=', given an arbitrary input, the output remains identical for the original architecture and the architecture having been modified by a network morphism.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' This is typically achieved by properly initializing the modified architecture.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Network mor- phisms have been employed in evolutionary algorithms (Elsken et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=', 2017, 2019a;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Schorn et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=', 2020;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Wistuba, 2019), reinforcement learning (Cai et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=', 2018a,b), Bayesian opti- mization (Jin et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=', 2019b), and even one-shot methods (Fang et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=', 2020).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' 6.' 
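The identity-preserving property of a network morphism can be illustrated with a minimal sketch (a toy NumPy network, not any paper's actual implementation): deepening a ReLU network by inserting an identity-initialized layer leaves the computed function unchanged, because ReLU outputs are non-negative and relu(I h) = h.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def forward(x, layers):
    """Forward pass through a stack of (weight, bias) pairs with ReLU activations."""
    h = x
    for W, b in layers:
        h = relu(W @ h + b)
    return h

def deepen(layers, position):
    """Network morphism: insert an identity-initialized layer after `position`.
    The preceding activations are ReLU outputs (non-negative), so
    relu(I @ h + 0) == h and the network's function is unchanged."""
    n = layers[position][0].shape[0]  # output width at the insertion point
    new_layer = (np.eye(n), np.zeros(n))
    return layers[:position + 1] + [new_layer] + layers[position + 1:]

rng = np.random.default_rng(0)
layers = [(rng.standard_normal((4, 3)), rng.standard_normal(4)),
          (rng.standard_normal((2, 4)), rng.standard_normal(2))]
x = rng.standard_normal(3)
deeper = deepen(layers, position=0)
assert np.allclose(forward(x, layers), forward(x, deeper))  # function preserved
```

The assertion checks exactly the morphism property described above: the deeper network has an extra layer but computes the identical output for the given input.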
6. Extensions

The previous sections studied the main techniques from the classic instantiation of NAS. In this section, we survey a few common extensions: joint NAS + HPO, constrained/multi-objective NAS, and neural ensemble search.

6.1 Joint NAS + HPO

While a large body of the NAS literature assumes fixed hyperparameters in their experimental setup, it has been shown (perhaps not very surprisingly) that hyperparameters also play a significant role. For example, on the DARTS search space, tuning hyperparameters can lead to a huge improvement, exceeding the performance gains obtained by NAS (Yang et al., 2020). However, the best hyperparameters may vary significantly across architectures even in the same search space (Yang et al., 2020).
Therefore, a recent body of work seeks to overcome these challenges and give efficient algorithms for NAS + HPO (Dai et al., 2021; Dong et al., 2020; Izquierdo et al., 2021; Zela et al., 2018; Zhou et al., 2021). Running joint NAS + HPO is significantly more challenging than running NAS or HPO in isolation. First, the complexity of the search space is substantially increased, due to the increased number of hyperparameters and the heterogeneity of the hyperparameters. Second, the interaction between architectures and training hyperparameters in terms of network performance is difficult to model. Furthermore, some hyperparameters can have different effects on the performance under different evaluation budgets, reducing the effectiveness of many multi-fidelity and performance prediction techniques. In light of these challenges, several solutions have been proposed. Various methods have been introduced to homogenize the search space, such as reformulating NAS as an HPO problem with categorical hyperparameters (Zela et al., 2018), or standardizing the representation of the NAS and HPO hyperparameters by assigning continuous-valued coefficients in [0, 1] (Dong et al., 2020).
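As a rough illustration of such a homogenized representation (all names, operations, and ranges below are hypothetical, in the spirit of the [0, 1] standardization), every decision, whether a categorical architecture choice or a continuous training hyperparameter, can be encoded as a single coefficient in [0, 1] and decoded into a concrete configuration:

```python
import math
import random

# Hypothetical joint NAS + HPO search space: lists are categorical
# architecture choices, (lo, hi) tuples are log-uniform hyperparameters.
SPACE = {
    "op":           ["conv3x3", "conv5x5", "skip"],
    "width":        [16, 32, 64],
    "lr":           (1e-4, 1e-1),
    "weight_decay": (1e-6, 1e-2),
}

def decode(coeffs):
    """Map a vector of [0, 1] coefficients to a concrete configuration."""
    cfg = {}
    for c, (name, choices) in zip(coeffs, SPACE.items()):
        if isinstance(choices, list):
            # categorical: bucketize the unit interval into equal bins
            cfg[name] = choices[min(int(c * len(choices)), len(choices) - 1)]
        else:
            # continuous: interpolate on a log scale between lo and hi
            lo, hi = choices
            cfg[name] = math.exp(math.log(lo) + c * (math.log(hi) - math.log(lo)))
    return cfg

def random_search(evaluate, n_samples=50, seed=0):
    """Minimize `evaluate(cfg)` by sampling coefficient vectors uniformly."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_samples):
        cfg = decode([rng.random() for _ in SPACE])
        score = evaluate(cfg)  # stand-in for train + validate
        if best is None or score < best[0]:
            best = (score, cfg)
    return best
```

Because every decision lives on the same [0, 1] scale, any search strategy that handles continuous vectors (random search here, but equally BO or evolution) can traverse architecture and hyperparameter choices uniformly.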
The search strategies resemble standard NAS algorithms such as BO (Dai et al., 2021; Izquierdo et al., 2021; Zela et al., 2018), evolution (Dai et al., 2021; Izquierdo et al., 2021), or REINFORCE with weight sharing (Dong et al., 2020).

6.2 Constrained and Multi-Objective NAS

Although NAS has been very popular in recent years, most work focuses on solely optimizing for a single objective, typically the accuracy or error rate. However, there are many settings for which this is not sufficient, such as when the neural network must be deployed on an edge device or must satisfy a legal definition of fairness. In such applications, we may need to constrain the latency, memory usage, or rate of errors across classes (Sukthanker et al., 2022). There has been particular interest in constraints related to edge devices and other hardware, termed hardware-aware NAS (Benmeziane et al., 2021). To achieve one or more objectives in addition to accuracy, the standard NAS objective is typically modified to either a constrained optimization problem (e.g., Bender et al. (2020); Cai et al. (2019); Tan et al. (2019)) or a multi-objective optimization problem (e.g., Elsken et al. (2019a); Hu et al. (2019); Izquierdo et al. (2021); Lu et al. (2019, 2020)). In constrained optimization, one tries to solve the following equation:

\min_{a \in \mathcal{A}} f(a) \quad \text{subject to} \quad h_i(a) \leq c_i \ \text{for} \ i \in \{1, \ldots, k\}   (2)

where f(a) denotes, as before, the original objective function (e.g., validation error), and the h_i represent hardware constraints as a function of the architecture. This problem is often solved by a transform into an additive or multiplicative unconstrained problem such as \min_{a \in \mathcal{A}} f(a) + \sum_i \lambda_i g_i(a), with penalty functions g_i penalizing architectures not satisfying the constraints, e.g., g_i(a) = \max(0, h_i(a) - c_i), and hyperparameters \lambda_i trading off the objectives and constraints. This single-objective optimization problem is then solved using black-box optimization methods or one-shot methods. In the latter case, the penalty functions g_i need to be differentiable, which is often not the case. Therefore, discrete metrics such as latency are relaxed to continuous variables through various techniques, such as with a Gumbel softmax function (Wu et al., 2019b). In multi-objective optimization, the requirements in Equation 2 are treated as separate objectives that are optimized along with the original objective: \min_{a \in \mathcal{A}} \big( f(a), h_1(a), \ldots, h_k(a) \big). While this can again be reduced to a single-objective problem via scalarization methods, another common approach is to search for a set of non-dominated solutions that are optimal in the sense that one cannot reduce any objective without increasing at least one other objective. The set of non-dominated solutions is called the Pareto front. The most common approach in this case is to employ multi-objective evolutionary algorithms, which maintain a population of architectures and aim to improve the Pareto front obtained from the current population by evolving the current population (Elsken et al., 2019a; Hu et al., 2019; Izquierdo et al., 2021; Lu et al., 2019). Multi-objective evolutionary algorithms have also been used in combination with weight sharing within one-shot models (Lu et al., 2020; Muñoz et al., 2022). One of the most widely-studied constrained NAS problems is regarding hardware efficiency such as memory or latency, and many works have been devoted to efficiently approximating hardware metrics of interest. While simple metrics such as the number of parameters are easily computed, these are often not correlated enough with other metrics of interest such as memory or latency.
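The two formulations can be sketched in a few lines (toy objectives and budgets, purely illustrative): the penalty transform folds the constraints of Equation 2 into a single objective, while a non-dominated filter recovers the Pareto front in the multi-objective view.

```python
def penalized(f, hs, cs, lams):
    """Constrained NAS (Eq. 2) folded into one objective:
    f(a) + sum_i lam_i * max(0, h_i(a) - c_i)."""
    def objective(a):
        return f(a) + sum(lam * max(0.0, h(a) - c)
                          for h, c, lam in zip(hs, cs, lams))
    return objective

def pareto_front(candidates, objectives):
    """Non-dominated candidates: no other candidate is <= on every
    objective and strictly < on at least one."""
    scores = [[obj(a) for obj in objectives] for a in candidates]
    front = []
    for i, a in enumerate(candidates):
        dominated = any(
            all(x <= y for x, y in zip(scores[j], scores[i])) and
            any(x < y for x, y in zip(scores[j], scores[i]))
            for j in range(len(candidates)) if j != i)
        if not dominated:
            front.append(a)
    return front
```

For example, with candidates represented as (validation error, latency in ms) pairs, `pareto_front([(0.1, 30), (0.2, 10), (0.15, 40)], [lambda a: a[0], lambda a: a[1]])` keeps the first two and drops the third, which is worse than (0.1, 30) on both axes.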
Other solutions include computing hardware costs modularly as the sum of the hardware cost of each operation (Cai et al., 2019) or by using a surrogate model that predicts hardware costs (Dudziak et al., 2020; Laube et al., 2022).

6.3 Neural Ensemble Search

While the goal of neural architecture search is to return the best standalone architecture, ensembling methods are popular within the deep learning community for their robust predictions and their easy uncertainty quantification. A newly emerging extension of NAS is concerned with finding the best ensemble of neural networks with diverse architectures, which can outperform standard NAS in terms of accuracy, uncertainty calibration, and robustness to dataset shift (Zaidi et al., 2021).
Neural ensemble search is defined as follows:

\min_{a_1, \ldots, a_M \in \mathcal{A}} \mathcal{L}_{\text{val}}\big(\text{Ensemble}\big((w^*(a_1), a_1), \ldots, (w^*(a_M), a_M)\big)\big)   (3)

s.t. \ w^*(a) = \arg\min_w \mathcal{L}_{\text{train}}(w, a) \ \forall a \in \mathcal{A},

where Ensemble is the function which aggregates the outputs of f_1, \ldots, f_M. Note that the search space cardinality is |\mathcal{A}|^M rather than |\mathcal{A}| as in standard NAS.
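A minimal sketch of a random-search instantiation of Equation 3 (with toy stand-ins: each "trained architecture" is just a prediction function from a pre-trained pool, Ensemble averages member outputs, and L_val is mean squared error on a validation set; none of this is any paper's actual code):

```python
import random

def ensemble_predict(members, x):
    """Ensemble: average the outputs of the member predictors."""
    return sum(m(x) for m in members) / len(members)

def val_loss(members, val_set):
    """L_val: mean squared error of the ensemble on the validation set."""
    return sum((ensemble_predict(members, x) - y) ** 2
               for x, y in val_set) / len(val_set)

def random_nes(pool, M, val_set, n_trials=100, seed=0):
    """Sample M-member ensembles from a pool of trained models and keep
    the one with the lowest validation loss (the |A|^M search in Eq. 3,
    restricted to a finite pre-trained pool)."""
    rng = random.Random(seed)
    best_loss, best = float("inf"), None
    for _ in range(n_trials):
        members = rng.sample(pool, M)
        loss = val_loss(members, val_set)
        if loss < best_loss:
            best_loss, best = loss, members
    return best, best_loss
```

With a pool of biased linear predictors, the search tends to pick members whose individual errors cancel in the average, which is exactly the diversity effect that makes ensembles attractive here.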
Zaidi et al. (2021) propose two simple yet effective procedures based on random search and regularized evolution (Real et al., 2019) that search for architectures that optimize Equation 3. Despite their effectiveness, these algorithms take considerable computation due to the black-box nature of the optimization algorithms. Multi-headed NES (Narayanan et al., 2021) circumvents this issue by applying differentiable NAS methods on the heads of a multi-headed network. The heads are explicitly tuned to optimize the ensemble loss together with a diversity component that encourages uncorrelated predictions coming from the individual heads. Other works have set up neural ensemble search with a one-shot model for the entire architecture. NESBS (Neural Ensemble Search via Bayesian Sampling) (Shu et al., 2022) uses a supernet to estimate the ensemble performance of independently trained base learners and then uses Bayesian sampling to find a high-performing ensemble. NADS (Neural Architecture Distribution Search) (Ardywibowo et al., 2020) follows a similar line by training a supernet to optimize an objective that is tailored to provide better uncertainty estimates and out-of-distribution detection. Chen et al. (2021b) run evolutionary search on the supernet to find a high-performing ensemble.

7. Applications

Along with discovering improved architectures for well-known datasets, one of the primary goals of the field of NAS is to quickly and automatically find high-performing architectures for brand new datasets and tasks.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Although the majority of the NAS literature focuses on image classification, there are numerous success stories for NAS applied to less well- known settings.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In this section, we discuss a few of these successes, including graph neural networks, generative adversarial networks, dense prediction, and transformers.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content='1 Graph Neural Networks Graph neural networks (GNNs) are designed to process data represented by graphs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Using NAS to design GNNs poses unique problems: the search space for GNNs is more complex than typical convolutional search spaces, and both NAS and GNNs are independently known for their large computational overhead.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Zhou et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' (2019) initiated a line of work applying NAS to GNNs by defining a new search space with GNN-specific operations and then using a reinforcement learning strategy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Follow-up work designed similar search spaces (Gao et al.' 
…, 2020b; Zhang et al., 2021b), with specialized features such as meta-paths (Ding et al., 2021b), edge features (Jiang and Balaprakash, 2020), or fast sampling operations (Gao et al., 2020b). Overall, the main difference between NAS for GNNs and more standard NAS settings lies in the construction of the search space. The main search strategies used by GNN NAS algorithms are typical NAS approaches: reinforcement learning (Gao et al., 2020b; Zhao et al., 2020a; Zhou et al., 2019), one-shot methods (Ding et al., 2021b; Zhao et al., 2020b), and evolutionary algorithms (Jiang and Balaprakash, 2020; Nunes and Pappa, 2020). For a detailed survey on NAS for GNNs, see Zhang et al. (2021b).

7.2 Generative Adversarial Networks

Generative adversarial networks (GANs) (Goodfellow et al., 2014) are a popular choice for generative modeling in tasks such as computer vision. GANs make use of two separate networks training in tandem: a generator and a discriminator. Due to having two separate networks, and their notoriously brittle training dynamics (Gulrajani et al., 2017), GANs require special techniques for effective NAS. Different works have achieved improved performance via NAS by searching for only the generator architecture with a fixed discriminator (Doveh and Giryes, 2021), with a predefined progressively growing discriminator (Fu et al., 2020), or by searching both the generator and discriminator architectures simultaneously (Gong et al., 2019). The most popular choice of search space is the cell-based search space.
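To make the cell-based search space concrete, here is a minimal sketch: a cell as a DAG whose edges each carry one operation, with an architecture encoded as the map from edges to chosen operations. The operation names and sizes below are illustrative assumptions, not the encoding of any particular paper.

```python
import itertools
import random

# A toy cell-based search space: a cell is a DAG on `num_nodes` ordered nodes,
# and each edge (i, j) with i < j is assigned one operation from OPS.
# Operation names here are made up for illustration.
OPS = ["conv_3x3", "conv_5x5", "upsample_nearest", "skip_connect", "none"]

def random_cell(num_nodes=4):
    """Sample a cell: map each forward edge to a randomly chosen operation."""
    edges = [(i, j) for i, j in itertools.combinations(range(num_nodes), 2)]
    return {edge: random.choice(OPS) for edge in edges}

def search_space_size(num_nodes=4):
    """Number of distinct cells: |OPS| raised to the number of edges."""
    num_edges = num_nodes * (num_nodes - 1) // 2
    return len(OPS) ** num_edges

cell = random_cell()
print(len(cell))            # 6 edges for a 4-node cell
print(search_space_size())  # 5 ** 6 = 15625 candidate cells
```

Even this tiny example shows why cell-based spaces are popular: a compact, fixed-shape encoding enumerates a combinatorially large set of architectures.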
The cell for the generator consists of a standard convolutional cell, with the addition of various upsampling operations (Ganepola and Wirasingha, 2021; Gong et al., 2019; Tian et al., 2020). The search techniques resemble the techniques used for standard NAS: reinforcement learning (Fu et al., 2020; Tian et al., 2020; Wang and Huan, 2019), one-shot NAS (Doveh and Giryes, 2021; Gao et al., 2020a; Lutz et al., 2018), and evolutionary algorithms (Kobayashi and Nagao, 2020), with scoring based on either Inception Score (IS) (Salimans et al., 2016) or Fréchet Inception Distance (FID) (Heusel et al., 2017). For a comprehensive survey on NAS for GANs, see Ganepola and Wirasingha (2021).

7.3 Dense Prediction Tasks

Dense prediction for computer vision encompasses a variety of popular tasks such as semantic segmentation, object detection, optical flow, and disparity estimation, and it requires more complex architectures compared to standard image classification problems. For example, the architectures often include a decoder (Ronneberger et al., 2015), modules for generating multi-scale features (He et al., 2015), or task-specific heads (Girshick et al., 2014) in addition to the main network. Thus, NAS algorithms have been applied to search for these components, either in isolation (Chen et al., 2018; Ghiasi et al., 2019; Xu et al., 2019a) or jointly (Guo et al., 2020a; Yao et al., 2020), or by discovering novel design patterns (Du et al., 2020). For a survey on NAS for dense prediction, see Elsken et al. (2022).

Once again, standard NAS techniques are used: Guo et al. (2020a); Liu et al. (2019a); Saikia et al. (2019); Xu et al. (2019a) employ gradient-based search via DARTS (Liu et al., 2019c); Du et al. (2020); Ghiasi et al. (2019) use RL; Bender et al. (2020) is inspired by ProxylessNAS (Cai et al., 2019) and ENAS (Pham et al., 2018). Methods for dense prediction tasks (e.g., Bender et al. (2020); Chen et al. (2019b); Guo et al. (2020a); Shaw et al. (2019); Wu et al. (2019a)) typically build search spaces based on state-of-the-art image classification networks, with task-specific components from well-performing dense prediction architecture components. As many approaches fix the backbone and only search for other task-specific components of the architecture, they often employ pre-trained backbone architectures (Chen et al., 2020; Guo et al., 2020a) or even cache the features generated by a backbone (Chen et al., 2018; Nekrasov et al., 2019; Wang et al., 2020c) to speed up architecture search. Chen et al. (2018); Ghiasi et al. (2019) also use a down-scaled or different backbone architecture during the search process.
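The feature-caching trick above can be sketched in a few lines: when the backbone is frozen during search, its forward pass is run once per input and the cached features are reused for every candidate head. All names below (the stand-in backbone and the toy "heads") are hypothetical, chosen only to show the call-counting effect.

```python
# Sketch of caching frozen-backbone features during architecture search.
# The "backbone" and "heads" are toy stand-ins, not real networks.

def backbone(x, counter):
    """Stand-in for an expensive frozen backbone; counts its invocations."""
    counter["calls"] += 1
    return [v * 2.0 for v in x]  # pretend feature extraction

def make_cached_backbone(counter):
    cache = {}
    def cached(x):
        key = tuple(x)
        if key not in cache:
            cache[key] = backbone(x, counter)
        return cache[key]
    return cached

counter = {"calls": 0}
cached_backbone = make_cached_backbone(counter)
dataset = [[1.0, 2.0], [3.0, 4.0]]
candidate_heads = [sum, max, min]  # stand-ins for searched task-specific heads

scores = []
for head in candidate_heads:
    # Each candidate head is scored on precomputed features.
    scores.append(sum(head(cached_backbone(x)) for x in dataset))

# The backbone ran once per input, not once per (head, input) pair:
print(counter["calls"])  # 2 instead of 6
```

The speedup scales with the number of candidates evaluated: the backbone cost is paid once, while each extra candidate only pays for its own (much smaller) head.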
Methods also sometimes employ multiple search stages, with the goal of first eliminating poorly performing architectures (or parts of the search space) and successively improving the remaining architectures (Du et al., 2020; Guo et al., 2020a). Overall, while it is much harder to run NAS on dense prediction tasks compared to image classification tasks because of the computational demands of dense prediction, there has been a rapid increase in developments with the rise of computationally efficient one-shot NAS methods. While efforts thus far have focused on semantic segmentation and object detection, avenues for future work include disparity estimation, panoptic segmentation, 3D detection and segmentation, and optical flow estimation.

7.4 Transformers

Transformers were proposed by Vaswani et al. (2017) to help with the issue of longer sequences that RNNs had difficulty modeling, by using self-attention and cross-attention mechanisms such that each token's representation in an input sequence is computed from a weighted average of the representations of all other tokens. The core transformer design was introduced for machine translation, but it has found widespread usage in causal language modeling (Brown et al., 2020; Radford et al., 2019), masked language modeling (Clark et al., 2020; Devlin et al., 2019; Liu et al., 2019d), and more recently, computer vision (Dosovitskiy et al., 2021; Liu et al., 2021b). Since its release, there have been many efforts to improve transformers via NAS. The most common search strategies for transformers are evolutionary (Chen et al., 2021c; So et al., 2019, 2021) or one-shot (Ding et al., 2021a; Gong et al., 2021; Li et al., 2021a; Su et al., 2021). On the other hand, there is a huge variety of different search spaces that have been tried recently, relative to other areas (e.g., in NAS for convolutional architectures, the majority of works use cell-based search spaces). Overall, the field of NAS for transformers has not converged to one "best" type of search space. Below, we survey NAS methods for four types of transformers: decoder-only, encoder-only, encoder-decoder, and vision transformers. See Chitty-Venkata et al. (2022) for an in-depth survey.

Decoder-only architectures, such as the GPT line of architectures (Brown et al., 2020; Radford et al., 2019), directly consume the input text prompt and output the sequence of text tokens that are most likely to follow. Primer (So et al., 2021) is a NAS algorithm that makes use of evolutionary search on a large macro decoder-only search space. The approach found two consistent improvements to the transformer block: squaring the ReLU in the feedforward block in the transformer layer, and adding depthwise convolutions after self-attention heads.

Encoder-only architectures, such as BERT (Devlin et al., 2019), encode the input text into a representation which can be used for many kinds of downstream tasks. Multiple works (Xu et al., 2021a, 2022; Yin et al., 2021) seek to discover compressed versions of BERT, in which the desired latency and task are specified by the user. The typical approach is to train a supernet on a standard self-supervised task (masked language modeling), which can then be used to discover compressed models for a given language task.

Encoder-decoder architectures such as T5 (Raffel et al., 2020) are used in sequence-to-sequence tasks such as machine translation, in which the source language is encoded into a representation, which is then decoded into the target language. So et al. (2019) use evolutionary search together with a new technique to dynamically allocate more resources to more promising candidate models, while Zhao et al. (2021b) propose a DARTS-based algorithm with a new technique for memory efficiency in backpropagation. Finally, KNAS (Xu et al., 2021b) and SemiNAS (Luo et al., 2020) speed up the search using zero-cost proxies and a surrogate transformer model, respectively.

A large variety of NAS algorithms have been studied for vision transformer search spaces, with the majority using one-shot methods. AutoFormer (Chen et al., 2021c) searches over vision transformer architectures and hyperparameters using a single-path-one-shot strategy (Guo et al., 2020b) and then running evolutionary search on the trained supernet. A follow-up work, AutoFormerV2 (Chen et al., 2021d), automated the design of the search space itself by gradually evolving different search dimensions. Other works have improved supernet training via gradient-conflict-aware training (Gong et al., 2021) or channel-aware training (Su et al., 2021). Finally, Li et al. (2021a) and Ding et al. (2021a) run one-shot methods on hybrid CNN and transformer search spaces for computer vision.

8. Benchmarks

In the early days of NAS research, the most popular metrics were the final test accuracies on CIFAR-10 and ImageNet.
This caused inconsistent search spaces and training pipelines across papers, and also drove up computational costs. For example, it became standard to train the final architecture for 600 epochs, even though the test accuracy only increases by a fraction of a percent past 200 epochs. Recently, queryable NAS benchmarks have helped the field reduce computation when developing NAS techniques and to achieve fair, statistically significant comparisons between methods. A NAS benchmark (Lindauer and Hutter, 2020) is defined as a dataset with a fixed train-test split, a search space, and a fixed evaluation pipeline for training the architectures. A tabular NAS benchmark is one that additionally gives precomputed evaluations for all possible architectures in the search space. A surrogate NAS benchmark is a NAS benchmark along with a surrogate model that can be used to predict the performance of any architecture in the search space. A NAS benchmark is queryable if it is either a tabular or a surrogate benchmark. Queryable NAS benchmarks can be used to efficiently simulate many NAS experiments using only a CPU, by querying the performance of neural networks from the benchmark, rather than training them from scratch. In the rest of the section, we give an overview of popular NAS benchmarks. See Appendix Table 2 for a summary.

The first tabular NAS benchmark was NAS-Bench-101 (Ying et al., 2019). It consists of a cell-based search space of 423,624 architectures, each with precomputed validation and test accuracies on CIFAR-10 for three different seeds. A follow-up work, NAS-Bench-1Shot1 (Zela et al., 2020b), is able to simulate one-shot algorithms by defining subsets of the NAS-Bench-101 search space which have a fixed number of nodes.
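The simulation workflow enabled by tabular benchmarks can be sketched as follows: every architecture's accuracy is precomputed, so "evaluating" a candidate is a dictionary lookup instead of a training run. The table below is made-up toy data, not drawn from NAS-Bench-101, and the encoding is a hypothetical two-edge cell.

```python
import random

# Toy tabular benchmark: architecture encoding -> precomputed accuracy.
# Values are invented for illustration only.
benchmark_table = {
    ("conv_3x3", "conv_3x3"): 0.91,
    ("conv_3x3", "skip"):     0.93,
    ("skip",     "conv_3x3"): 0.92,
    ("skip",     "skip"):     0.88,
}

def simulate_random_search(table, num_queries, seed=0):
    """Random search where each 'evaluation' is a cheap table lookup."""
    rng = random.Random(seed)
    archs = list(table)
    best_arch, best_acc = None, float("-inf")
    for _ in range(num_queries):
        arch = rng.choice(archs)
        if table[arch] > best_acc:
            best_arch, best_acc = arch, table[arch]
    return best_arch, best_acc

best_arch, best_acc = simulate_random_search(benchmark_table, num_queries=100)
print(best_arch, best_acc)
```

Because each query costs microseconds rather than GPU-hours, the same loop can be repeated over many seeds to obtain the statistically significant comparisons mentioned above.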
NAS-Bench-201 (Dong and Yang, 2020) is another popular tabular NAS benchmark, consisting of 6466 unique architectures, each with precomputed validation and test accuracies on CIFAR-10, CIFAR-100, and ImageNet-16-120 for three seeds each. NATS-Bench (Dong et al., 2021b) is an extension of NAS-Bench-201 which also includes a macro search space. Another extension, HW-NAS-Bench-201 (Li et al., 2021b), gives the measured or estimated hardware cost for all architectures across six hardware devices. Surr-NAS-Bench-DARTS (formerly called NAS-Bench-301) (Siems et al., 2020) was the first surrogate NAS benchmark, created by training 60,000 architectures from the DARTS (Liu et al., 2019c) search space on CIFAR-10 and then training a surrogate model.
The authors also released Surr-NAS-Bench-FBNet for the FBNet search space (Wu et al., 2019b). A follow-up work, NAS-Bench-x11 (Yan et al., 2021b), devised a technique to predict the full learning curve, allowing the validation accuracies to be queried at arbitrary epochs, which is necessary for simulating multi-fidelity NAS algorithms. TransNAS-Bench-101 (Duan et al., 2021) is a tabular benchmark that covers seven different computer vision tasks from the Taskonomy dataset (Zamir et al., 2018). Beyond computer vision, NAS-Bench-NLP (Klyuchnikov et al., 2022) consists of an LSTM-inspired search space for NLP, and NAS-Bench-ASR (Mehrotra et al., 2021) is a tabular NAS benchmark for automatic speech recognition (Garofolo, 1993).

Neural Architecture Search: Insights from 1000 Papers

NAS-Bench-360 (Tu et al., 2022a) is a benchmark suite which gives NAS benchmarks on ten diverse problems such as prosthetics control, PDE solving, protein folding, and astronomy imaging, and is search-space agnostic, although three of the tasks have pretrained architectures on the NAS-Bench-201 search space. Finally, NAS-Bench-Suite (Mehta et al., 2022) is a benchmark suite which combines the majority of existing queryable NAS benchmarks, 28 total tasks, into a single unified interface. An extension, NAS-Bench-Suite-Zero, offers precomputed zero-cost proxy values across all tasks (Krishnakumar et al., 2022).
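The simulation capability that makes these benchmarks useful can be sketched with a toy tabular benchmark: a plain lookup table from architecture encodings to "precomputed" accuracies, so that an entire search run costs only dictionary lookups. Everything below (`make_toy_benchmark`, the operation names, the accuracy range) is a hypothetical stand-in, not the API of any real benchmark.

```python
import itertools
import random

# Stand-in for a tabular NAS benchmark: a lookup table mapping each
# architecture encoding to a "precomputed" validation accuracy.
# Real tabular benchmarks (e.g., NAS-Bench-201) expose a similar query
# interface over their search spaces; the values here are synthetic.
OPS = ("none", "skip_connect", "conv_1x1", "conv_3x3", "avg_pool")

def make_toy_benchmark(num_edges=4, seed=0):
    rng = random.Random(seed)
    return {arch: rng.uniform(0.85, 0.95)
            for arch in itertools.product(OPS, repeat=num_edges)}

def simulate_random_search(benchmark, num_queries, seed=0):
    """Simulate random search: each 'evaluation' is a table lookup,
    so thousands of runs take seconds on a CPU instead of GPU-days."""
    rng = random.Random(seed)
    archs = list(benchmark)
    best = 0.0
    for _ in range(num_queries):
        best = max(best, benchmark[rng.choice(archs)])
    return best
```

Because each evaluation is a lookup, repeating the search over hundreds of random seeds, as needed for statistically significant comparisons, is essentially free.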
Using queryable benchmarks allows researchers to easily simulate hundreds of trials of the algorithms with different initial random seeds, making it easy to report statistically significant comparisons. However, over-reliance on a few benchmarks can lead to the field over-fitting (Koch et al., 2021; Raji et al., 2021) and is not conducive to the discovery of truly novel methods. Therefore, researchers should use a large set of diverse NAS benchmarks whenever possible.

9. Best Practices

The field of NAS has at times seen problems with reproducibility and fair, statistically significant comparisons among methods. These issues impede the overall research progress in the field of NAS.
Recently, a few papers have laid out best practices and guidelines for conducting sound NAS research that is reproducible and makes fair comparisons (Li and Talwalkar, 2019; Lindauer and Hutter, 2020; Yang et al., 2020). These best practices are also available as a checklist (Lindauer and Hutter, 2020). We encourage NAS researchers to follow the checklist and to attach it to the appendix of their papers. Now, we summarize these best practices for NAS research.

9.1 Releasing Code and Important Details

It is nearly impossible to reproduce NAS methods without the full code. Even then, random seeds should be specified and reported.
Furthermore, releasing easy-to-use code can lead to more follow-up methods and impact. For example, Liu et al. (2019c) released easy-to-use code for DARTS, which facilitated numerous follow-up works. When releasing code, it is important to release all components, including the training pipeline(s), search space, hyperparameters, random seeds, and the NAS method. Many papers use different architecture training pipelines during the search and during the final evaluation, so it is important to include both. Note that using popular NAS benchmarks such as NAS-Bench-101 or NAS-Bench-201 (see Section 8) makes this substantially easier: the training pipeline is already fixed. NAS methods often have several moving parts. As a result, they typically have many hyperparameters of their own that could be tuned.
In fact, many NAS methods themselves make use of neural networks – one could even run a NAS algorithm on the NAS algorithm! Due to this complexity, it is important to report if, or how, these hyperparameters were tuned. When reporting results on a large set of search spaces and datasets, the best practice is to tune the hyperparameters of the NAS method on one dataset, and then fix these hyperparameters for the remaining evaluations on other datasets. We also note that, in general, devising NAS methods with fewer hyperparameters is more desirable, especially because it has recently been shown that hyperparameters often do not transfer well across datasets and search spaces (Mehta et al., 2022).

White, Safari, Sukthanker, Ru, Elsken, Zela, Dey and Hutter

9.2 Comparing NAS Methods

When comparing NAS methods, it is not enough to use the same datasets.
The exact same NAS benchmarks must be used: a dataset with a fixed train-test split, search space, and evaluation pipeline. Otherwise, it is unclear whether a difference in performance is due to the NAS algorithm or the training pipeline. Several papers have shown that simple baselines are competitive with state-of-the-art NAS algorithms (Li and Talwalkar, 2019; Ottelander et al., 2021; Sciuto et al., 2020; White et al., 2021b). When designing a new method for NAS, it is important to compare the method with baselines such as random sampling and random search.
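As a sketch of such a baseline comparison, the following compares a candidate sampling strategy against random search on a queryable benchmark, running both under the same seeds and recording best-so-far (anytime) curves. The `query` function and the sampler callables are hypothetical placeholders for whatever benchmark and method are being evaluated, not part of any real API.

```python
import random
import statistics

def best_so_far_trajectory(query, sample, num_queries, seed):
    """Record the best accuracy found after each query, so strategies
    can be compared as a curve over the search budget rather than by a
    single final number."""
    rng = random.Random(seed)
    best, traj = float("-inf"), []
    for _ in range(num_queries):
        best = max(best, query(sample(rng)))
        traj.append(best)
    return traj

def compare_to_random_baseline(query, method_sample, random_sample,
                               num_queries=50, seeds=range(10)):
    """Run the candidate method and a random-search baseline over the
    same seeds (a paired comparison) and report mean final accuracy,
    as a simple stand-in for a full significance test."""
    finals = {"method": [], "random": []}
    for s in seeds:
        finals["method"].append(
            best_so_far_trajectory(query, method_sample, num_queries, s)[-1])
        finals["random"].append(
            best_so_far_trajectory(query, random_sample, num_queries, s)[-1])
    return {k: statistics.mean(v) for k, v in finals.items()}
```

Pairing the two strategies on identical seeds reduces variance in the comparison, which matters because the seed often drives much of the spread between runs.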
Furthermore, many NAS methods are anytime algorithms: a time budget does not necessarily need to be specified upfront, and the method can be stopped at any time, returning the best architecture found so far. The longer the NAS method runs, the better the final result. These NAS methods should be compared on a plot of performance over time. Even one-shot algorithms can be compared in this way, since the supernet can be discretized and trained at any point. We recommend that NAS researchers run thorough ablation studies to show which part(s) of the NAS method lead to the most improved performance. As mentioned in the previous section, NAS methods often have several moving parts, so a clean understanding of the importance of each part, and of how the parts work together, is important to report. Finally, we recommend that researchers run multiple trials of their experiments and report the random seeds for each experiment.
NAS methods can have high variance in the randomness of the algorithm, so running many trials is important to verify statistically significant comparisons.

10. Resources

In this section, we discuss NAS resources including libraries (Section 10.1), other survey papers (Section 10.2), and additional resources (Section 10.3).

10.1 Libraries

A long line of engineering has been focused on automating machine learning pipelines: Auto-WEKA (Thornton et al., 2013), Auto-Sklearn (Feurer et al., 2015), TPOT (Olson et al., 2016), and AutoGluon-Tabular (Erickson et al., 2020). More recently, a special focus has been given to developing tools that can facilitate the deployment of various NAS algorithms for practitioners, such as Auto-Keras (Jin et al., 2019a), Auto-PyTorch Tabular (Zimmer et al., 2021), AutoGluon (Erickson et al., 2020), and NNI (Microsoft, 2021). To provide a toolbox for facilitating NAS research, both in developing new NAS methods and in applying NAS to new problem domains, various libraries have been proposed. The DeepArchitect library (Negrinho and Gordon, 2017), which separates the search space from the optimizer, was an important first step in this direction in the NAS community. NASLib (Ruchte et al., 2020) unifies and simplifies NAS research by having a single abstraction for one-shot and BBO algorithms, and a single abstraction for the search spaces of nearly all queryable NAS benchmarks. Archai (Hu et al., 2019) also provides unified abstractions for one-shot and discrete NAS algorithms. The aim of Archai is both to support reproducible rapid prototyping for NAS research and to be a turnkey solution for data scientists looking to try NAS on their tasks. PyGlove (Peng et al., 2020) introduced a novel approach to constructing NAS methods via symbolic programming, in which the ML programs are mutable and can be manipulated and processed by other programs.

10.2 Other NAS Survey Papers

There are several older NAS survey papers.
Elsken et al. (2019b) provide a compact introduction to NAS and introduce the "three pillars" of NAS: search space, search strategy, and performance evaluation strategy. The survey by Wistuba et al. (2019) provides a more comprehensive view of the landscape of NAS research, unifying and categorizing existing methods. Ren et al. (2020) give a layout that focuses on the historical challenges in the field of NAS, as well as the solutions found to remedy these challenges. Other surveys have been released which focus on a specific sub-area of NAS. Liu et al. (2021a) focus on evolutionary NAS, Benmeziane et al. (2021) focus on hardware-aware NAS (HW-NAS), Zhang et al. (2021b) survey AutoML (with a NAS focus) on graphs, Elsken et al. (2022) survey NAS for dense prediction in computer vision, and Xie et al. (2021), Santra et al. (2021), and Cha et al. (2022) all survey one-shot NAS methods. Finally, there are more survey papers with a broader focus, such as automated machine learning (AutoML) or automated deep learning (AutoDL), which devote a section to NAS (Dong et al., 2021a; He et al., 2021; Kedziora et al., 2020; Yao et al., 2018; Yu and Zhu, 2020). Notably, the first book on automated machine learning (which is open-access) was released in May 2019 by Hutter et al. (2019).

10.3 Additional Resources

There are multiple long-running workshops which focus on NAS and related topics. The AutoML workshop at ICML (2014-2021) and the Meta-Learning workshop at NeurIPS (2017-2022) have had a healthy overlap in attendance with the NAS community, especially over the last few years, while ICLR (2020, 2021) and CVPR (2021) have had workshops devoted solely to NAS.
Finally, after many years of AutoML and NAS workshops, the community has grown large enough to start the first AutoML conference: https://automl.cc/. For a continuously updated, searchable list of NAS papers, see https://www.automl.org/automl/literature-on-neural-architecture-search/. For a continuously updated list of NAS papers published at ML venues, as well as other resources, see https://github.com/D-X-Y/Awesome-AutoDL.

11. Future Directions

Neural architecture search has come a long way in the last few years.
The efficiency of NAS algorithms has improved by orders of magnitude, tools exist to compare NAS algorithms without GPUs, and researchers have created many novel techniques and diverse search spaces. Architectures discovered by NAS constitute the state of the art on many tasks. However, there are still many unsolved problems and promising future directions. In this section, we discuss a few of the most important directions for future work in NAS.

11.1 Robustness of Efficient Methods

One-shot methods are one of the most popular techniques for NAS due to their orders-of-magnitude speedups over black-box optimization techniques. While one-shot techniques have already seen major progress, they still face performance issues. Even though many improvements of one-shot algorithms such as DARTS have been proposed (see Section 4.2), these works generally focus on a single improvement; the field lacks a large-scale, fair comparison among one-shot methods. Furthermore, as it currently stands, applying one-shot methods to a new task requires a significant amount of expertise. Devising one-shot approaches that work robustly and reliably across new datasets and tasks is an important area for future study. Another, more recent set of techniques that promises orders-of-magnitude speedups is zero-cost proxies (see Section 5.1.2). Although recent work has shown that many zero-cost proxies do not consistently outperform simple baselines (Ning et al., 2021), other work argues that there is untapped potential for zero-cost proxies (White et al., 2022), especially when combined with existing NAS techniques (White et al., 2021c; Xiang et al., 2021). Developing a better understanding of when and why zero-cost proxies work in certain settings is an important area for future research.

11.2 Going Beyond Hand-Crafted, Rigid Search Spaces

The search spaces for NAS methods are typically carefully hand-designed by human experts. While carefully designing search spaces decreases search times, it also contradicts the idea of having an automated system that can be employed by non-experts, and it limits the scope of NAS to domains where strong search spaces are available.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Furthermore, in the last few years, the most-studied type of search space by far has been the cell-based search space, which is significantly more rigid than other types of search spaces.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Hierarchical search spaces offer a better trade-off between flexibility and ease of search, yet they are relatively under-explored when compared to cell-based search spaces (see Sec- tion 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content='5).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Furthermore, hierarchical search spaces by nature have a higher diversity when compared to cell-based search spaces, reducing the overall human bias of the search space.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Optimizing search spaces in an automated manner (Ru et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=', 2020b) such as starting with large, diverse search spaces and then iteratively pruning low-performing parts of the space (Guo et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=', 2020a;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Radosavovic et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=', 2020) could allow researchers to consider a significantly larger variety of architectures.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' 11.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content='3 Fully Automated Deep Learning Although NAS has seen a huge amount of interest, recent work has shown that on popular search spaces such as the DARTS search space, optimizing the training hyperparameters leads to a greater increase in performance than optimizing the architecture (Yang et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=', 2020;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Zela et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=', 2020b).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' While these results show that for some search spaces, optimizing hyperparameters may be more important than optimizing the architecture, the best case scenario is to optimize both hyperparameters and the architecture simultaneously.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' A new thread of research seeks to simultaneously optimize the hyperparameters and architecture: NAS + HPO (see Section 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content='1).' 
Neural Architecture Search: Insights from 1000 Papers

Varying hyperparameters along with the architecture also significantly reduces human bias, making it possible to discover previously unknown combinations of architectures and hyperparameters that substantially outperform existing methods. Therefore, while this problem is significantly more challenging than NAS or HPO alone, the potential improvements are much higher. Furthermore, we do not need to stop just at NAS + HPO: we can optimize the full deep learning pipeline, including problem formulation, data processing, data augmentation, model deployment, and continuous monitoring. In other words, the goal is to run fully automated deep learning (AutoDL) (Dong et al., 2021a). As the field of NAS matures, AutoDL has the potential to play a big role in realizing substantial improvements in performance for real-world problems.
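The NAS + HPO idea of searching over architectures and training hyperparameters jointly can be sketched as random sampling from a combined configuration space. The operation choices, hyperparameter grids, and the `evaluate` stand-in below are illustrative assumptions for a minimal sketch, not any specific benchmark's or library's API; a real system would replace `evaluate` with actual training and validation.

```python
import random

# Hypothetical joint NAS + HPO space: each sample fixes both an
# architecture (one operation per edge of a tiny cell) and training
# hyperparameters, so a single search covers both kinds of choices.
OPS = ["3x3_conv", "1x1_conv", "max_pool", "skip_connect"]
SPACE = {
    "op_edge_1": OPS,
    "op_edge_2": OPS,
    "learning_rate": [1e-1, 1e-2, 1e-3],
    "weight_decay": [0.0, 1e-4, 3e-4],
}

def sample_config(rng):
    """Draw one joint architecture + hyperparameter configuration."""
    return {name: rng.choice(choices) for name, choices in SPACE.items()}

def evaluate(config):
    """Toy stand-in for training; a real system returns validation accuracy."""
    score = 0.3 if config["op_edge_1"] == "3x3_conv" else 0.0
    score += 0.3 if config["op_edge_2"] == "skip_connect" else 0.0
    score += 0.4 if config["learning_rate"] == 1e-2 else 0.1
    return score

def random_search(n_samples=100, seed=0):
    """Jointly search architectures and hyperparameters by random sampling."""
    rng = random.Random(seed)
    best_config, best_score = None, float("-inf")
    for _ in range(n_samples):
        config = sample_config(rng)
        score = evaluate(config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score
```

Because the architecture and hyperparameter choices are drawn from one dictionary, any black-box optimizer (random search here, but equally Bayesian optimization or evolution) treats them uniformly, which is exactly what makes the joint formulation attractive.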
Acknowledgments and Disclosure of Funding

This research was partially supported by TAILOR, a project funded by the EU Horizon 2020 research and innovation programme under GA No. 952215. We acknowledge funding by the European Research Council (ERC) Consolidator Grant "Deep Learning 2.0" (grant no. 101045765). Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the ERC. Neither the European Union nor the ERC can be held responsible for them.

A. Additional Figures and Tables

For a visualization of the search space terminologies, see Figure 9. In Figure 10, we show chain-structured and macro search spaces. Architecture encodings are illustrated in Figure 11. Finally, for an overview of NAS benchmarks, see Table 2.

Figure 9: NAS search space terminology. Operation layers/units/primitives consist of sets of 1-3 operations. A block/module denotes a sequential stack of layers in chain-structured or macro search spaces. A cell denotes a directed acyclic graph of operations (and a motif denotes a small subset of the cell).

Figure 10: Illustration of macro search space based on Borsos et al. (2019) (left) and chain-structured search space based on Cai et al. (2020) (right).

Figure 11: A neural architecture (a) can be encoded using an adjacency matrix (b) or path-based representation (c), with a one-hot or categorical encoding.

Benchmark | Size | Type | Queryable (Tab./Surr.), LCs, One-Shot | Task | #Tasks
NAS-Bench-101 | 423k | cell | ✓ | Image class. | 1
NATS-Bench-TSS (NAS-Bench-201) | 6k | cell | ✓ ✓ ✓ | Image class. | 3
NATS-Bench-SSS | 32k | macro | ✓ ✓ ✓ | Image class. | 3
NAS-Bench-NLP | > 10^53 | cell | ✓ | NLP | 1
NAS-Bench-1Shot1 | 364k | cell | ✓ ✓ | Image class. | 1
Surr-NAS-Bench-DARTS (NAS-Bench-301) | 10^18 | cell | ✓ ✓ | Image class. | 1
Surr-NAS-Bench-FBNet | 10^21 | chain | ✓ | Image class. | 1
NAS-Bench-ASR | 8k | cell | ✓ ✓ | ASR | 1
TransNAS-Bench-101-Micro | 4k | cell | ✓ ✓ ✓ | Var. CV | 7
TransNAS-Bench-101-Macro | 3k | macro | ✓ ✓ ✓ | Var. CV | 7
NAS-Bench-111 | 423k | cell | ✓ ✓ | Image class. | 1
NAS-Bench-311 | 10^18 | cell | ✓ ✓ ✓ | Image class. | 1
NAS-Bench-NLP11 | > 10^53 | cell | ✓ ✓ | NLP | 1
NAS-Bench-MR | 10^23 | cell | ✓ ✓ | Var. CV | 9
NAS-Bench-Macro | 6k | macro | ✓ ✓ | Image class. | 1
HW-NAS-Bench-201 | 6k | cell | ✓ | Image class. | 3
HW-NAS-Bench-FBNet | 10^21 | chain | ✓ | Image class. | 1
NAS-Bench-360 | Var. | suite | ✓ ✓ ✓ | Var. | 3
NAS-Bench-Suite | Var. | suite | ✓ ✓ ✓ ✓ | Var. | 25
NAS-Bench-Suite-Zero | Var. | suite | ✓ ✓ ✓ ✓ | Var. | 28

Table 2: An overview of NAS benchmarks.

References

Mohamed S Abdelfattah, Abhinav Mehrotra, Łukasz Dudziak, and Nicholas Donald Lane. Zero-cost proxies for lightweight NAS. In Proceedings of the International Conference on Learning Representations (ICLR), 2021.

Abdulaziz Almalaq and Jun Jason Zhang. Evolutionary deep learning-based energy consumption prediction for buildings. IEEE Access, 7:1520–1531, 2018.

Peter J Angeline, Gregory M Saunders, and Jordan B Pollack. An evolutionary algorithm that constructs recurrent neural networks. IEEE Transactions on Neural Networks, 5(1):54–65, 1994.

Randy Ardywibowo, Shahin Boluki, Xinyu Gong, Zhangyang Wang, and Xiaoning Qian. NADS: Neural architecture distribution search for uncertainty awareness. In Proceedings of the International Conference on Machine Learning (ICML), pages 356–366. PMLR, 2020.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In Proceedings of the International Conference on Learning Representations (ICLR), 2015. arXiv preprint arXiv:1409.0473.

Bowen Baker, Otkrist Gupta, Nikhil Naik, and Ramesh Raskar. Designing neural network architectures using reinforcement learning. In Proceedings of the International Conference on Learning Representations (ICLR), 2017.

Bowen Baker, Otkrist Gupta, Ramesh Raskar, and Nikhil Naik. Accelerating neural architecture search using performance prediction. In Meta-Learning Workshop at NeurIPS, 2018.

Gabriel Bender, Pieter-Jan Kindermans, Barret Zoph, Vijay Vasudevan, and Quoc Le. Understanding and simplifying one-shot architecture search. In Proceedings of the International Conference on Machine Learning (ICML), 2018.

Gabriel Bender, Hanxiao Liu, Bo Chen, Grace Chu, Shuyang Cheng, Pieter-Jan Kindermans, and Quoc V. Le. Can weight sharing outperform random architecture search? An investigation with TuNAS. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.

Hadjer Benmeziane, Kaoutar El Maghraoui, Hamza Ouarnoughi, Smail Niar, Martin Wistuba, and Naigang Wang. A Comprehensive Survey on Hardware-Aware Neural Architecture Search. PhD thesis, LAMIH, Université Polytechnique des Hauts-de-France, 2021.

James S Bergstra, Rémi Bardenet, Yoshua Bengio, and Balázs Kégl. Algorithms for hyper-parameter optimization. In Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 2011.

Kaifeng Bi, Changping Hu, Lingxi Xie, Xin Chen, Longhui Wei, and Qi Tian. Stabilizing DARTS with amended gradient estimation on architectural parameters. arXiv preprint arXiv:1910.11831, 2019.

Zalán Borsos, Andrey Khorlin, and Andrea Gesmundo. Transfer NAS: Knowledge transfer between search spaces with transformer agents. 6th ICML Workshop on Automated Machine Learning, arXiv preprint arXiv:1906.08102, 2019.

Andrew Brock, Theo Lim, JM Ritchie, and Nick Weston. SMASH: One-shot model architecture search through hypernetworks. In Proceedings of the International Conference on Learning Representations (ICLR), 2018.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 33:1877–1901, 2020.

Cameron B Browne, Edward Powley, Daniel Whitehouse, Simon M Lucas, Peter I Cowling, Philipp Rohlfshagen, Stephen Tavener, Diego Perez, Spyridon Samothrakis, and Simon Colton. A survey of Monte Carlo tree search methods. IEEE Transactions on Computational Intelligence and AI in Games, 4(1):1–43, 2012.

Han Cai, Tianyao Chen, Weinan Zhang, Yong Yu, and Jun Wang. Efficient architecture search by network transformation. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2018a.

Han Cai, Jiacheng Yang, Weinan Zhang, Song Han, and Yong Yu. Path-level network transformation for efficient architecture search. In Proceedings of the International Conference on Machine Learning (ICML), 2018b.

Han Cai, Ligeng Zhu, and Song Han. ProxylessNAS: Direct neural architecture search on target task and hardware. Proceedings of the International Conference on Learning Representations (ICLR), 2019.

Han Cai, Chuang Gan, Tianzhe Wang, Zhekai Zhang, and Song Han. Once-for-all: Train one network and specialize it for efficient deployment. In Proceedings of the International Conference on Learning Representations (ICLR), 2020.

Stephen Cha, Taehyeon Kim, Hayeon Lee, and Se-Young Yun. Supernet in neural architecture search: A taxonomic survey. arXiv preprint arXiv:2204.03916, 2022.

William Chan, Navdeep Jaitly, Quoc Le, and Oriol Vinyals. Listen, attend and spell: A neural network for large vocabulary conversational speech recognition. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4960–4964. IEEE, 2016.

Bo Chen, Golnaz Ghiasi, Hanxiao Liu, Tsung-Yi Lin, Dmitry Kalenichenko, Hartwig Adam, and Quoc V. Le. MnasFPN: Learning latency-aware pyramid architecture for object detection on mobile devices. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.

Boyu Chen, Peixia Li, Chuming Li, Baopu Li, Lei Bai, Chen Lin, Ming Sun, Junjie Yan, and Wanli Ouyang.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Glit: Neural architecture search for global and local image transformer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' 41 White, Safari, Sukthanker, Ru, Elsken, Zela, Dey and Hutter In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 12–21, 2021a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Hanlin Chen, Ming Lin, Xiuyu Sun, and Hao Li.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' NAS-bench-zero: A large scale dataset for understanding zero-shot neural architecture search, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' URL https://openreview.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' net/forum?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content='id=hP-SILoczR.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Liang-Chieh Chen, Maxwell Collins, Yukun Zhu, George Papandreou, Barret Zoph, Florian Schroff, Hartwig Adam, and Jon Shlens.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Searching for efficient multi-scale architectures for dense image prediction.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the Annual Conference on Neural Informa- tion Processing Systems (NeurIPS), 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Minghao Chen, Houwen Peng, Jianlong Fu, and Haibin Ling.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' One-shot neural ensem- ble architecture search by diversity-guided search space shrinking.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 16525–16534, 2021b.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Minghao Chen, Houwen Peng, Jianlong Fu, and Haibin Ling.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Autoformer: Searching trans- formers for visual recognition.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 12270–12280, 2021c.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Minghao Chen, Kan Wu, Bolin Ni, Houwen Peng, Bei Liu, Jianlong Fu, Hongyang Chao, and Haibin Ling.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Searching the search space of vision transformer.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 34, 2021d.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Tianqi Chen, Ian J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Goodfellow, and Jonathon Shlens.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Net2net: Accelerating learning via knowledge transfer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the International Conference on Learning Repre- sentations (ICLR), 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Wuyang Chen, Xinyu Gong, and Zhangyang Wang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Neural architecture search on imagenet in four gpu hours: A theoretically inspired perspective.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Proceedings of the International Conference on Learning Representations (ICLR), 2021e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' arXiv preprint arXiv:2102.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content='11535.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Xiangning Chen and Cho-Jui Hsieh.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Stabilizing differentiable architecture search via perturbation-based regularization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the International Conference on Machine Learning (ICML), pages 1554–1565.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' PMLR, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Xiangning Chen, Ruochen Wang, Minhao Cheng, Xiaocheng Tang, and Cho-Jui Hsieh.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Dr- nas: Dirichlet neural architecture search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the International Conference on Learning Representations (ICLR), 2021f.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Xin Chen, Lingxi Xie, Jun Wu, and Qi Tian.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Progressive differentiable architecture search: Bridging the depth gap between search and evaluation.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1294–1303, 2019a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Yukang Chen, Tong Yang, Xiangyu Zhang, Gaofeng Meng, Xinyu Xiao, and Jian Sun.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Detnas: Backbone search for object detection.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 2019b.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' 42 Neural Architecture Search: Insights from 1000 Papers Krishna Teja Chitty-Venkata, Murali Emani, Venkatram Vishwanath, and Arun K Somani.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Neural architecture search for transformers: A survey.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' IEEE Access, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Jan K Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Ben- gio.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Attention-based models for speech recognition.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 28, 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Aristeidis Chrostoforidis, George Kyriakides, and Konstantinos Margaritis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' A novel evolutionary algorithm for hierarchical neural architecture search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' arXiv preprint arXiv:2107.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content='08484, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Xiangxiang Chu, Tianbao Zhou, Bo Zhang, and Jixiang Li.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Fair darts: Eliminating unfair advantages in differentiable architecture search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In European conference on computer vision, pages 465–480.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Springer, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Xiangxiang Chu, Xiaoxing Wang, Bo Zhang, Shun Lu, Xiaolin Wei, and Junchi Yan.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Darts- : robustly stepping out of performance collapse without indicators.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Proceedings of the International Conference on Learning Representations (ICLR), 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' arXiv preprint arXiv:2009.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content='01027.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Electra: Pre-training text encoders as discriminators rather than generators.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Proceedings of the International Conference on Learning Representations (ICLR), 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' arXiv preprint arXiv:2003.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content='10555.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Binaryconnect: Training deep neural networks with binary weights during propagations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Advances in neural in- formation processing systems, 28, 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Dennis D Cox and Susan John.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' A statistical method for global optimization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In [Proceedings] 1992 IEEE International Conference on Systems, Man, and Cybernetics, pages 1241– 1246.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' IEEE, 1992.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Xiaoliang Dai, Alvin Wan, Peizhao Zhang, Bichen Wu, Zijian He, Zhen Wei, Kan Chen, Yuandong Tian, Matthew Yu, Peter Vajda, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Fbnetv3: Joint architecture-recipe search using predictor pretraining.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the IEEE/CVF Conference on Com- puter Vision and Pattern Recognition (CVPR), pages 16276–16285, 2021.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Tri Dao, Nimit Sohoni, Albert Gu, Matthew Eichhorn, Amit Blonder, Megan Leszczynski, Atri Rudra, and Christopher R´e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Kaleidoscope: An efficient, learnable representation for all structured linear maps.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the International Conference on Learning Representations (ICLR), 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Bert: Pre-training of deep bidirectional transformers for language understanding.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of NAACL- HLT, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Mingyu Ding, Xiaochen Lian, Linjie Yang, Peng Wang, Xiaojie Jin, Zhiwu Lu, and Ping Luo.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Hr-nas: Searching efficient high-resolution neural architectures with lightweight 43 White, Safari, Sukthanker, Ru, Elsken, Zela, Dey and Hutter transformers.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2982–2992, 2021a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Yuhui Ding, Quanming Yao, Huan Zhao, and Tong Zhang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Diffmg: Differentiable meta graph search for heterogeneous graph neural networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, pages 279–288, 2021b.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Tobias Domhan, Jost Tobias Springenberg, and Frank Hutter.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Speeding up automatic hyperparameter optimization of deep neural networks by extrapolation of learning curves.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In The International Joint Conference on Artificial Intelligence (IJCAI), 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Xuanyi Dong and Yi Yang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Searching for a robust neural architecture in four gpu hours.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Xuanyi Dong and Yi Yang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Nas-bench-201: Extending the scope of reproducible neural architecture search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the International Conference on Learning Repre- sentations (ICLR), 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Xuanyi Dong, Mingxing Tan, Adams Wei Yu, Daiyi Peng, Bogdan Gabrys, and Quoc V Le.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Autohas: Efficient hyperparameter and architecture search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' arXiv preprint arXiv:2006.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content='03656, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Xuanyi Dong, David Jacob Kedziora, Katarzyna Musial, and Bogdan Gabrys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Automated deep learning: Neural architecture search is not the end.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' arXiv preprint arXiv:2112.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content='09245, 2021a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Xuanyi Dong, Lu Liu, Katarzyna Musial, and Bogdan Gabrys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Nats-bench: Benchmarking nas algorithms for architecture topology and size.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021b.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' An image is worth 16x16 words: Transformers for image recognition at scale.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Proceedings of the International Conference on Learning Representations (ICLR), 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' arXiv preprint arXiv:2010.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content='11929.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Sivan Doveh and Raja Giryes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Degas: differentiable efficient generator search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Neural Computing and Applications, 33(24):17173–17184, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Xianzhi Du, Tsung-Yi Lin, Pengchong Jin, Golnaz Ghiasi, Mingxing Tan, Yin Cui, Quoc V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Le, and Xiaodan Song.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Spinenet: Learning scale-permuted backbone for recognition and localization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Yawen Duan, Xin Chen, Hang Xu, Zewei Chen, Xiaodan Liang, Tong Zhang, and Zhen- guo Li.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Transnas-bench-101: Improving transferability and generalizability of cross-task neural architecture search.' 
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 5251–5260, 2021.

Neural Architecture Search: Insights from 1000 Papers
White, Safari, Sukthanker, Ru, Elsken, Zela, Dey and Hutter

Lukasz Dudziak, Thomas Chau, Mohamed Abdelfattah, Royson Lee, Hyeji Kim, and Nicholas Lane. Brp-nas: Prediction-based nas using gcns. In Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 2020.

Thomas Elsken, Jan-Hendrik Metzen, and Frank Hutter. Simple and efficient architecture search for convolutional neural networks. arXiv preprint arXiv:1711.04528, 2017.

Thomas Elsken, Jan Hendrik Metzen, and Frank Hutter. Efficient multi-objective neural architecture search via lamarckian evolution. In Proceedings of the International Conference on Learning Representations (ICLR), 2019a.

Thomas Elsken, Jan Hendrik Metzen, and Frank Hutter. Neural architecture search: A survey. JMLR, 2019b.

Thomas Elsken, Benedikt Staffler, Jan Hendrik Metzen, and Frank Hutter. Meta-learning of neural architectures for few-shot learning. In CVPR, 2020.

Thomas Elsken, Arber Zela, Jan Hendrik Metzen, Benedikt Staffler, Thomas Brox, Abhinav Valada, and Frank Hutter. Neural architecture search for dense prediction tasks in computer vision, 2022.

Nick Erickson, Jonas Mueller, Alexander Shirkov, Hang Zhang, Pedro Larroy, Mu Li, and Alexander Smola. Autogluon-tabular: Robust and accurate automl for structured data. arXiv preprint arXiv:2003.06505, 2020.

Stefan Falkner, Aaron Klein, and Frank Hutter. Bohb: Robust and efficient hyperparameter optimization at scale. In Proceedings of the International Conference on Machine Learning (ICML), 2018.

Jiemin Fang, Yuzhu Sun, Kangjian Peng, Qian Zhang, Yuan Li, Wenyu Liu, and Xinggang Wang. Fast neural network adaptation via parameter remapping and architecture search. In Proceedings of the International Conference on Learning Representations (ICLR), 2020.

M. Feurer, A. Klein, K. Eggensperger, J. T. Springenberg, M. Blum, and F. Hutter. Efficient and robust automated machine learning. In Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), pages 2962–2970, 2015.

Matthias Feurer and Frank Hutter. Hyperparameter optimization. In Hutter et al. (2019), pages 3–38.

Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the International Conference on Machine Learning (ICML), 2017.

Dario Floreano, Peter Dürr, and Claudio Mattiussi. Neuroevolution: from architectures to learning. Evolutionary Intelligence, 1(1):47–62, 2008.

Peter I Frazier. A tutorial on bayesian optimization. stat, 1050:8, 2018.

Yonggan Fu, Wuyang Chen, Haotao Wang, Haoran Li, Yingyan Lin, and Zhangyang Wang. Autogan-distiller: Searching to compress generative adversarial networks. In Proceedings of the International Conference on Machine Learning (ICML), pages 3292–3303, 2020.

Saya Fujino, Naoki Mori, and Keinosuke Matsumoto. Deep convolutional networks for human sketches by means of the evolutionary deep learning. In 2017 Joint 17th World Congress of International Fuzzy Systems Association and 9th International Conference on Soft Computing and Intelligent Systems (IFSA-SCIS), pages 1–5. IEEE, 2017.

Vayangi Vishmi Vishara Ganepola and Torin Wirasingha. Automating generative adversarial networks using neural architecture search: A review. In 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), pages 577–582. IEEE, 2021.

Chen Gao, Yunpeng Chen, Si Liu, Zhenxiong Tan, and Shuicheng Yan. Adversarialnas: Adversarial neural architecture search for gans. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 5680–5689, 2020a.

Yang Gao, Hong Yang, Peng Zhang, Chuan Zhou, and Yue Hu. Graph neural architecture search. In The International Joint Conference on Artificial Intelligence (IJCAI), volume 20, pages 1403–1409, 2020b.

Roman Garnett. Bayesian Optimization. Cambridge University Press, 2023. To appear.

John S Garofolo. Timit acoustic phonetic continuous speech corpus. Linguistic Data Consortium, 1993.

Golnaz Ghiasi, Tsung-Yi Lin, and Quoc V. Le. Nas-fpn: Learning scalable feature pyramid architecture for object detection. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.

Spencer Gibb, Hung Manh La, and Sushil Louis. A genetic algorithm for convolutional network structure optimization for concrete crack detection. In 2018 IEEE Congress on Evolutionary Computation (CEC), pages 1–8. IEEE, 2018.

R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In 2014 IEEE Conference on Computer Vision and Pattern Recognition, pages 580–587, 2014.

David E Goldberg and Kalyanmoy Deb. A comparative analysis of selection schemes used in genetic algorithms. In Foundations of Genetic Algorithms, volume 1, pages 69–93. Elsevier, 1991.

Chengyue Gong, Dilin Wang, Meng Li, Xinlei Chen, Zhicheng Yan, Yuandong Tian, Vikas Chandra, et al. Nasvit: Neural architecture search for efficient vision transformers with gradient conflict aware supernet training. In International Conference on Learning Representations, 2021.

Xinyu Gong, Shiyu Chang, Yifan Jiang, and Zhangyang Wang. Autogan: Neural architecture search for generative adversarial networks. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3224–3234, 2019.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 27, 2014.

Li Guilin, Zhang Xing, Wang Zitong, Li Zhenguo, and Zhang Tong. Stacnas: Towards stable and consistent optimization for differentiable neural architecture search. OpenReview submission https://openreview.net/forum?id=rygpAnEKDH, 2019.

Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C Courville. Improved training of wasserstein gans. Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 30, 2017.

Jianyuan Guo, Kai Han, Yunhe Wang, Chao Zhang, Zhaohui Yang, Han Wu, Xinghao Chen, and Chang Xu. Hit-detector: Hierarchical trinity architecture search for object detection. In The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020a.

Zichao Guo, Xiangyu Zhang, Haoyuan Mu, Wen Heng, Zechun Liu, Yichen Wei, and Jian Sun. Single path one-shot neural architecture search with uniform sampling. In European Conference on Computer Vision, pages 544–560. Springer, 2020b.

David Ha, Andrew Dai, and Quoc V. Le. Hypernetworks. In Proceedings of the International Conference on Learning Representations (ICLR), 2017.

Awni Hannun, Carl Case, Jared Casper, Bryan Catanzaro, Greg Diamos, Erich Elsen, Ryan Prenger, Sanjeev Satheesh, Shubho Sengupta, Adam Coates, et al. Deep speech: Scaling up end-to-end speech recognition. arXiv preprint arXiv:1412.5567, 2014.

K. He, X. Zhang, S. Ren, and J. Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(9):1904–1916, 2015.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016a.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016b.

Xin He, Kaiyong Zhao, and Xiaowen Chu. Automl: A survey of the state-of-the-art. Knowledge-Based Systems, 212:106622, 2021.

Philipp Hennig and Christian J Schuler. Entropy search for information-efficient global optimization. Journal of Machine Learning Research, 13(Jun):1809–1837, 2012.

José Miguel Hernández-Lobato, Matthew W Hoffman, and Zoubin Ghahramani. Predictive entropy search for efficient global optimization of black-box functions. In Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), pages 918–926, 2014.

Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 30, 2017.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.

Sepp Hochreiter, A. Steven Younger, and Peter R. Conwell. Learning to learn using gradient descent. In Georg Dorffner, Horst Bischof, and Kurt Hornik, editors, Artificial Neural Networks – ICANN 2001, pages 87–94, Berlin, Heidelberg, 2001. Springer Berlin Heidelberg.

Noah Hollmann, Samuel Müller, Katharina Eggensperger, and Frank Hutter. Tabpfn: A transformer that solves small tabular classification problems in a second. arXiv preprint arXiv:2207.01848, 2022.

T. M. Hospedales, A. Antoniou, P. Micaelli, and A. J. Storkey. Meta-learning in neural networks: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021.

Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017.

Hanzhang Hu, John Langford, Rich Caruana, Saurajit Mukherjee, Eric Horvitz, and Debadeepta Dey. Efficient forward architecture search. In Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 2019.

Shou-Yong Hu, Sirui Xie, Hehui Zheng, Chunxiao Liu, Jianping Shi, Xunying Liu, and Dahua Lin. Dsnas: Direct neural architecture search without parameter retraining. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 12081–12089, 2020.

Gao Huang, Zhuang Liu, Laurens van der Maaten, and Kilian Q. Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.

Frank Hutter, Holger H. Hoos, and Kevin Leyton-Brown. Sequential model-based optimization for general algorithm configuration. In Proceedings of the 5th International Conference on Learning and Intelligent Optimization, LION'05, pages 507–523, Berlin, Heidelberg, 2011. Springer-Verlag. ISBN 9783642255656. doi: 10.1007/978-3-642-25566-3_40. URL https://doi.org/10.1007/978-3-642-25566-3_40.

Frank Hutter, Lars Kotthoff, and Joaquin Vanschoren, editors. Automated Machine Learning: Methods, Systems, Challenges. Springer, 2019.

Carl Hvarfner, Frank Hutter, and Luigi Nardi. Joint entropy search for maximally-informed bayesian optimization. In Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 2022.

Sergio Izquierdo, Julia Guerrero-Viu, Sven Hauns, Guilherme Miotto, Simon Schrodi, André Biedenkapp, Thomas Elsken, Difan Deng, Marius Lindauer, and Frank Hutter. Bag of baselines for multi-objective joint neural architecture search and hyperparameter optimization. In 8th ICML Workshop on Automated Machine Learning (AutoML), 2021.

Arthur Jacot, Franck Gabriel, and Clément Hongler.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Neural tangent kernel: Convergence and generalization in neural networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 31, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Kevin Jamieson and Ameet Talwalkar.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Non-stochastic best arm identification and hyper- parameter optimization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS), 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Mojan Javaheripi, Shital Shah, Subhabrata Mukherjee, Tomasz Lukasz Religa, Caio Ce- sar Teodoro Mendes, Gustavo Henrique de Rosa, Sebastien Bubeck, Farinaz Koushanfar, and Debadeepta Dey.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Litetransformersearch: Training-free on-device search for efficient autoregressive language models.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the Annual Conference on Neural In- formation Processing Systems (NeurIPS), 2022.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Shengli Jiang and Prasanna Balaprakash.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Graph neural network architecture search for molecular property prediction.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In 2020 IEEE International Conference on Big Data (Big Data), pages 1346–1353.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' IEEE, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Haifeng Jin, Qingquan Song, and Xia Hu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Auto-keras: An efficient neural architecture search system.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2019a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Haifeng Jin, Qingquan Song, and Xia Hu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Auto-keras: An efficient neural architecture search system.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 1946–1956.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' ACM, 2019b.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Donald R Jones, Matthias Schonlau, and William J Welch.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Efficient global optimization of expensive black-box functions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Journal of Global optimization, 13(4):455–492, 1998.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Arlind Kadra, Marius Lindauer, Frank Hutter, and Josif Grabocka.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Regularization is all you need: Simple neural nets can excel on tabular data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' arXiv preprint arXiv:2106.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content='11189, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Kirthevasan Kandasamy, Gautam Dasarathy, Jeff Schneider, and Barnab´as P´oczos.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Multi- fidelity Bayesian optimisation with continuous approximations.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the International Conference on Machine Learning (ICML), 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Kirthevasan Kandasamy, Willie Neiswanger, Jeff Schneider, Barnabas Poczos, and Eric P Xing.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Neural architecture search with bayesian optimisation and optimal transport.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' David Jacob Kedziora, Katarzyna Musial, and Bogdan Gabrys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Autonoml: Towards an integrated framework for autonomous machine learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' arXiv preprint arXiv:2012.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content='12600, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' 49 White, Safari, Sukthanker, Ru, Elsken, Zela, Dey and Hutter Hiroaki Kitano.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Designing neural networks using genetic algorithms with graph generation system.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Complex systems, 4(4):461–476, 1990.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Jyrki Kivinen and Manfred K Warmuth.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Exponentiated gradient versus gradient descent for linear predictors.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' information and computation, 132, 1997.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Aaron Klein, Stefan Falkner, Jost Tobias Springenberg, and Frank Hutter.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Learning curve prediction with bayesian neural networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the International Conference on Learning Representations (ICLR), 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Aaron Klein, Louis Tiao, Thibaut Lienart, Cedric Archambeau, and Matthias Seeger.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Model-based asynchronous hyperparameter and neural architecture search.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' arXiv preprint arXiv:2003.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content='10865, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Nikita Klyuchnikov, Ilya Trofimov, Ekaterina Artemova, Mikhail Salnikov, Maxim Fedorov, Alexander Filippov, and Evgeny Burnaev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Nas-bench-nlp: neural architecture search benchmark for natural language processing.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' IEEE Access, 10:45736–45747, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Masayuki Kobayashi and Tomoharu Nagao.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' A multi-objective architecture search for gen- erative adversarial networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the 2020 Genetic and Evolutionary Com- putation Conference Companion, pages 133–134, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Bernard Koch, Emily Denton, Alex Hanna, and Jacob G Foster.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Reduced, reused and recycled: The life of a dataset in machine learning research.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' arXiv preprint arXiv:2112.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content='01716.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Arjun Krishnakumar, Colin White, Arber Zela, Renbo Tu, Mahmoud Safari, and Frank Hutter.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Nas-bench-suite-zero: Accelerating research on zero cost proxies.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), Datasets and Benchmarks Track, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Imagenet classification with deep convolutional neural networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the Annual Conference on Neural In- formation Processing Systems (NeurIPS), 2012.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' David Krueger, Chin-Wei Huang, Riashat Islam, Ryan Turner, Alexandre Lacoste, and Aaron Courville.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Bayesian hypernetworks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' arXiv preprint arXiv:1710.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content='04759, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Deepika Kumari and Kamaljit Kaur.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' A survey on stereo matching techniques for 3d vision in image processing.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Int.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Eng.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Manuf, 4:40–49, 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Kevin Alexander Laube, Maximus Mutschler, and Andreas Zell.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' What to expect of hardware metric predictors in NAS, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' URL https://openreview.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content='net/forum?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content='id=2DJn3E7lXu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Yann LeCun, Patrick Haffner, L´eon Bottou, and Yoshua Bengio.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Object recognition with gradient-based learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Shape, contour and grouping in computer vision, 1999.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Hayeon Lee, Eunyoung Hyung, and Sung Ju Hwang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Rapid neural architecture search by learning to generate graphs from datasets.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the International Conference on Learning Representations (ICLR), 2021.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' 50 Neural Architecture Search: Insights from 1000 Papers Juho Lee, Yoonho Lee, Jungtaek Kim, Adam Kosiorek, Seungjin Choi, and Yee Whye Teh.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Set transformer: A framework for attention-based permutation-invariant neural networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the International Conference on Machine Learning (ICML), 2019a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Namhoon Lee, Thalaiyasingam Ajanthan, and Philip Torr.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Snip: Single-shot network prun- ing based on connection sensitivity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the International Conference on Learning Representations (ICLR), 2019b.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Changlin Li, Tao Tang, Guangrun Wang, Jiefeng Peng, Bing Wang, Xiaodan Liang, and Xiaojun Chang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Bossnas: Exploring hybrid cnn-transformers with block-wisely self- supervised neural architecture search.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 12281–12291, 2021a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Chaojian Li, Zhongzhi Yu, Yonggan Fu, Yongan Zhang, Yang Zhao, Haoran You, Qixuan Yu, Yue Wang, Cong Hao, and Yingyan Lin.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' {HW}-{nas}-bench: Hardware-aware neu- ral architecture search benchmark.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the International Conference on Learning Representations (ICLR), 2021b.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Guohao Li, Guocheng Qian, Itzel C Delgadillo, Matthias Muller, Ali Thabet, and Bernard Ghanem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Sgas: Sequential greedy architecture search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 1620–1630, 2020a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Jian Li, Yong Liu, Jiankun Liu, and Weiping Wang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Neural architecture optimization with graph vae.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' arXiv preprint arXiv:2006.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content='10310, 2020b.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Liam Li and Ameet Talwalkar.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Random search and reproducibility for neural architecture search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Uncertainty in Artificial Intelligence (UAI), 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Liam Li, Kevin Jamieson, Afshin Rostamizadeh, Ekaterina Gonina, Moritz Hardt, Benjamin Recht, and Ameet Talwalkar.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' A system for massively parallel hyperparameter tuning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the Conference on Machine Learning Systems (MLSys), 2020c.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Liam Li, Mikhail Khodak, Maria-Florina Balcan, and Ameet Talwalkar.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Geometry-aware gradient algorithms for neural architecture search.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the International Conference on Learning Representations (ICLR), 2021c.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Lisha Li, Kevin Jamieson, Giulia DeSalvo, Afshin Rostamizadeh, and Ameet Talwalkar.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Hyperband: A novel bandit-based approach to hyperparameter optimization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In JMLR, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Yuhong Li, Cong Hao, Pan Li, Jinjun Xiong, and Deming Chen.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Generic neural architec- ture search via regression.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 34:20476–20490, 2021d.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Dongze Lian, Yin Zheng, Yintao Xu, Yanxiong Lu, Leyu Lin, Peilin Zhao, Junzhou Huang, and Shenghua Gao.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Towards fast adaptation of neural architectures with meta learning.' 
In Proceedings of the International Conference on Learning Representations (ICLR), 2020.
51 White, Safari, Sukthanker, Ru, Elsken, Zela, Dey and Hutter
Hanwen Liang, Shifeng Zhang, Jiacheng Sun, Xingqiu He, Weiran Huang, Kechen Zhuang, and Zhenguo Li. Darts+: Improved differentiable architecture search with early stopping. arXiv preprint arXiv:1909.06035, 2019.
Ming Lin, Pichao Wang, Zhenhong Sun, Hesen Chen, Xiuyu Sun, Qi Qian, Hao Li, and Rong Jin. Zen-nas: A zero-shot nas for high-performance image recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 347–356, 2021.
Marius Lindauer and Frank Hutter. Best practices for scientific research on neural architecture search. JMLR, 2020.
Marius Lindauer, Katharina Eggensperger, Matthias Feurer, André Biedenkapp, Difan Deng, Carolin Benjamins, Tim Ruhkopf, René Sass, and Frank Hutter. Smac3: A versatile bayesian optimization package for hyperparameter optimization. Journal of Machine Learning Research, 2022.
Chenxi Liu, Barret Zoph, Maxim Neumann, Jonathon Shlens, Wei Hua, Li-Jia Li, Li Fei-Fei, Alan Yuille, Jonathan Huang, and Kevin Murphy. Progressive neural architecture search. In Proceedings of the European Conference on Computer Vision (ECCV), pages 19–34, 2018a.
Chenxi Liu, Liang-Chieh Chen, Florian Schroff, Hartwig Adam, Wei Hua, Alan L. Yuille, and Li Fei-Fei. Auto-deeplab: Hierarchical neural architecture search for semantic image segmentation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019a.
Chenxi Liu, Liang-Chieh Chen, Florian Schroff, Hartwig Adam, Wei Hua, Alan L Yuille, and Li Fei-Fei. Auto-deeplab: Hierarchical neural architecture search for semantic image segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019b.
Hanxiao Liu, Karen Simonyan, Oriol Vinyals, Chrisantha Fernando, and Koray Kavukcuoglu. Hierarchical representations for efficient architecture search. In Proceedings of the International Conference on Learning Representations (ICLR), 2018b.
Hanxiao Liu, Karen Simonyan, and Yiming Yang. Darts: Differentiable architecture search. In Proceedings of the International Conference on Learning Representations (ICLR), 2019c.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach, 2019d.
Yuqiao Liu, Yanan Sun, Bing Xue, Mengjie Zhang, Gary G Yen, and Kay Chen Tan. A survey on evolutionary neural architecture search. IEEE Transactions on Neural Networks and Learning Systems, 2021a.
Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10012–10022, 2021b.
52 Neural Architecture Search: Insights from 1000 Papers
Mohammad Loni, Sima Sinaei, Ali Zoljodi, Masoud Daneshtalab, and Mikael Sjödin. Deepmaker: A multi-objective optimization framework for deep neural networks in embedded systems. Microprocessors and Microsystems, 73:102989, 2020.
Zhichao Lu, Ian Whalen, Vishnu Boddeti, Yashesh Dhebar, Kalyanmoy Deb, Erik Goodman, and Wolfgang Banzhaf. Nsga-net: Neural architecture search using multi-objective genetic algorithm. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), 2019.
Zhichao Lu, Kalyanmoy Deb, Erik Goodman, Wolfgang Banzhaf, and Vishnu Naresh Boddeti. Nsganetv2: Evolutionary multi-objective surrogate-assisted neural architecture search. In Computer Vision – ECCV 2020, pages 35–51, Cham, 2020. Springer International Publishing.
Jovita Lukasik, David Friede, Arber Zela, Frank Hutter, and Margret Keuper. Smooth variational graph embeddings for efficient neural architecture search. In International Joint Conference on Neural Networks (IJCNN), 2021.
Jovita Lukasik, Steffen Jung, and Margret Keuper. Learning where to look – generative nas is surprisingly efficient. In The European Conference on Computer Vision (ECCV), 2022.
Jelena Luketina, Mathias Berglund, Klaus Greff, and Tapani Raiko. Scalable gradient-based tuning of continuous regularization hyperparameters. In Proceedings of the International Conference on Machine Learning (ICML), pages 2952–2960, 2016.
Renqian Luo, Xu Tan, Rui Wang, Tao Qin, Enhong Chen, and Tie-Yan Liu. Semi-supervised neural architecture search. In Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 2020.
Sebastian Lutz, Konstantinos Amplianitis, and Aljoscha Smolic. Alphagan: Generative adversarial networks for natural image matting. In The British Machine Vision Conference (BMVC), 2018.
Lizheng Ma, Jiaxu Cui, and Bo Yang. Deep neural architecture search with deep graph bayesian optimization. In 2019 IEEE/WIC/ACM International Conference on Web Intelligence (WI), pages 500–507. IEEE, 2019.
Matthew Mackay, Paul Vicol, Jonathan Lorraine, David Duvenaud, and Roger Grosse. Self-tuning networks: Bilevel optimization of hyperparameters using structured best-response functions. In Proceedings of the International Conference on Learning Representations (ICLR), 2019.
Neeratyoy Mallik and Noor Awad. Dehb: Evolutionary hyperband for scalable, robust and efficient hyperparameter optimization. In The International Joint Conference on Artificial Intelligence (IJCAI), 2021.
Abhinav Mehrotra, Alberto Gil C. P. Ramos, Sourav Bhattacharya, Łukasz Dudziak, Ravichander Vipperla, Thomas Chau, Mohamed S Abdelfattah, Samin Ishtiaq, and Nicholas Donald Lane. Nas-bench-asr: Reproducible neural architecture search for speech recognition. In Proceedings of the International Conference on Learning Representations (ICLR), 2021.
Yash Mehta, Colin White, Arber Zela, Arjun Krishnakumar, Guri Zabergja, Shakiba Moradian, Mahmoud Safari, Kaicheng Yu, and Frank Hutter. Nas-bench-suite: Nas evaluation is (now) surprisingly easy. In Proceedings of the International Conference on Learning Representations (ICLR), 2022.
Joe Mellor, Jack Turner, Amos Storkey, and Elliot J Crowley. Neural architecture search without training. In Proceedings of the International Conference on Machine Learning (ICML), pages 7588–7598. PMLR, 2021.
H Mendoza, A Klein, M Feurer, J Springenberg, and F Hutter. Towards automatically-tuned neural networks. In ICML 2016 AutoML Workshop, 2016.
Luke Metz, Ben Poole, David Pfau, and Jascha Sohl-Dickstein. Unrolled generative adversarial networks. In Proceedings of the International Conference on Learning Representations (ICLR), 2017.
Microsoft. Neural Network Intelligence, 2021. URL https://github.com/microsoft/nni.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 2013.
Geoffrey F Miller, Peter M Todd, and Shailesh U Hegde. Designing neural networks using genetic algorithms. In ICGA, volume 89, pages 379–384, 1989.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, Feb 2015.
Jonas Močkus. On bayesian methods for seeking the extremum. In Optimization Techniques IFIP Technical Conference, pages 400–404. Springer, 1975.
J. Pablo Muñoz, Nikolay Lyalyushkin, Yash Akhauri, Anastasia Senina, Alexander Kozlov, and Nilesh Jain. Enabling NAS with automated super-network generation. AAAI 1st International Workshop on Practical Deep Learning in the Wild, 2022.
Byunggook Na, Jisoo Mok, Hyeokjun Choe, and Sungroh Yoon. Accelerating neural architecture search via proxy data. The International Joint Conference on Artificial Intelligence (IJCAI), 2021.
Ashwin Raaghav Narayanan, Arber Zela, Tonmoy Saikia, Thomas Brox, and Frank Hutter. Multi-headed neural ensemble search. In Workshop on Uncertainty and Robustness in Deep Learning (UDL@ICML'21), 2021.
Aviv Navon, Aviv Shamsian, Gal Chechik, and Ethan Fetaya. Learning the pareto front with hypernetworks. In Proceedings of the International Conference on Learning Representations (ICLR), 2021.
Niv Nayman, Asaf Noy, Tal Ridnik, Itamar Friedman, Rong Jin, and Lihi Zelnik. Xnas: Neural architecture search with expert advice. Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 32, 2019.
Renato Negrinho and Geoff Gordon. Deeparchitect: Automatically designing and training deep architectures. stat, 1050:28, 2017.
Vladimir Nekrasov, Hao Chen, Chunhua Shen, and Ian Reid. Fast neural architecture search of compact semantic segmentation models via auxiliary cells. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
Vu Nguyen, Tam Le, Makoto Yamada, and Michael A Osborne. Optimal transport kernels for sequential and parallel neural architecture search. In Proceedings of the International Conference on Machine Learning (ICML), pages 8084–8095. PMLR, 2021.
Alex Nichol, Joshua Achiam, and John Schulman. On first-order meta-learning algorithms. arXiv preprint, 2018.
Xuefei Ning, Yin Zheng, Tianchen Zhao, Yu Wang, and Huazhong Yang. A generic graph-based neural architecture encoding scheme for predictor-based nas. In European Conference on Computer Vision, pages 189–204. Springer, 2020.
Xuefei Ning, Changcheng Tang, Wenshuo Li, Zixuan Zhou, Shuang Liang, Huazhong Yang, and Yu Wang. Evaluating efficient performance estimators of neural architectures. Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 34, 2021.
Matheus Nunes and Gisele L Pappa. Neural architecture search in graph neural networks. In Brazilian Conference on Intelligent Systems, pages 302–317. Springer, 2020.
R. Olson, N. Bartley, R. Urbanowicz, and J. Moore. Evaluation of a Tree-based Pipeline Optimization Tool for Automating Data Science. In T. Friedrich, editor, Proceedings of the Genetic and Evolutionary Computation Conference (GECCO'16), pages 485–492. ACM, 2016.
T Den Ottelander, Arkadiy Dushatskiy, Marco Virgolin, and Peter AN Bosman. Local search is a remarkably strong baseline for neural architecture search. In International Conference on Evolutionary Multi-Criterion Optimization, 2021.
Daiyi Peng, Xuanyi Dong, Esteban Real, Mingxing Tan, Yifeng Lu, Gabriel Bender, Hanxiao Liu, Adam Kraft, Chen Liang, and Quoc Le. Pyglove: Symbolic programming for automated machine learning. In Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 2020.
Hieu Pham, Melody Guan, Barret Zoph, Quoc Le, and Jeff Dean. Efficient neural architecture search via parameters sharing. In Proceedings of the International Conference on Machine Learning (ICML), 2018.
Aloïs Pourchot, Alexis Ducarouge, and Olivier Sigaud. To share or not to share: A comprehensive appraisal of weight-sharing. arXiv preprint arXiv:2002.04289, 2020.
Vishak Prasad, Colin White, Paarth Jain, Sibasis Nayak, Rishabh Iyer, and Ganesh Ramakrishnan. Speeding up NAS with adaptive subset selection. arXiv preprint arXiv:2211.01454, 2022.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, and Piotr Dollar. Designing network design spaces. In The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140), 2020.
Inioluwa Deborah Raji, Emily M Bender, Amandalynne Paullada, Emily Denton, and Alex Hanna. Ai and the everything in the whole wide world benchmark. Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), Datasets and Benchmarks Track, 2021.
Aditya Rawal, Joel Lehman, Felipe Petroski Such, Jeff Clune, and Kenneth O. Stanley. Synthetic petri dish: A novel surrogate model for rapid architecture search, 2020.
Esteban Real, Sherry Moore, Andrew Selle, Saurabh Saxena, Yutaka Leon Suematsu, Jie Tan, Quoc V. Le, and Alexey Kurakin. Large-scale evolution of image classifiers.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the International Conference on Machine Learning (ICML), 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Esteban Real, Alok Aggarwal, Yanping Huang, and Quoc V Le.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Regularized evolution for image classifier architecture search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Esteban Real, Chen Liang, David So, and Quoc Le.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Automl-zero: Evolving machine learning algorithms from scratch.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the International Conference on Machine Learning (ICML), pages 8007–8019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' PMLR, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Pengzhen Ren, Yun Xiao, Xiaojun Chang, Po-Yao Huang, Zhihui Li, Xiaojiang Chen, and Xin Wang.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' A comprehensive survey of neural architecture search: Challenges and solutions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' arXiv preprint arXiv:2006.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content='02903, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' 56 Neural Architecture Search: Insights from 1000 Papers Nicholas Roberts, Mikhail Khodak, Tri Dao, Liam Li, Christopher R´e, and Ameet Tal- walkar.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Rethinking neural operations for diverse tasks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Olaf Ronneberger, Philipp Fischer, and Thomas Brox.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' U-net: Convolutional networks for biomedical image segmentation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Nassir Navab, Joachim Hornegger, William M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Wells, and Alejandro F.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Frangi, editors, Medical Image Computing and Computer-Assisted In- tervention – MICCAI 2015, 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Binxin Ru, Clare Lyle, Lisa Schut, Mark van der Wilk, and Yarin Gal.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Revisiting the train loss: an efficient performance estimator for neural architecture search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' stat, 1050:8, 2020a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Binxin Ru, Xingchen Wan, Xiaowen Dong, and Michael Osborne.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Neural architecture search using bayesian optimisation with weisfeiler-lehman kernel.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the International Conference on Learning Representations (ICLR), 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Robin Ru, Pedro Esperan¸ca, and Fabio Maria Carlucci.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Neural architecture generator optimization.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 33, 2020b.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Michael Ruchte, Arber Zela, Julien Siems, Josif Grabocka, and Frank Hutter.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Naslib: a modular and flexible neural architecture search library, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Tonmoy Saikia, Yassine Marrakchi, Arber Zela, Frank Hutter, and Thomas Brox.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Autodisp- net: Improving disparity estimation with automl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In The IEEE International Conference on Computer Vision (ICCV), October 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Improved techniques for training gans.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 29, 2016.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Mobilenetv2: Inverted residuals and linear bottlenecks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the IEEE con- ference on computer vision and pattern recognition, pages 4510–4520, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Santanu Santra, Jun-Wei Hsieh, and Chi-Fang Lin.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Gradient descent effects on differential neural architecture search: A survey.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' IEEE Access, 9:89602–89618, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Shreyas Saxena and Jakob Verbeek.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Convolutional neural fabrics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Jurgen Schmidhuber.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Evolutionary principles in self-referential learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' on learning how to learn: The meta-meta-meta.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content='-hook.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Master’s thesis, Technische Universitaet Muenchen, Germany, 1987.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' J¨urgen Schmidhuber.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Learning to control fast-weight memories: An alternative to dynamic recurrent networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Neural Computation, 4(1):131–139, 1992.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' J¨urgen Schmidhuber.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' A ‘self-referential’weight matrix.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In International conference on arti- ficial neural networks, pages 446–450.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Springer, 1993.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' 57 White, Safari, Sukthanker, Ru, Elsken, Zela, Dey and Hutter Lennart Schneider, Florian Pfisterer, Martin Binder, and Bernd Bischl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Mutation is all you need.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In 8th ICML Workshop on Automated Machine Learning (AutoML), 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Christoph Schorn, Thomas Elsken, Sebastian Vogel, Armin Runge, Andre Guntoro, and Gerd Ascheid.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Automated design of error-resilient and hardware-efficient deep neural networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Springer Neural Computing and Applications, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Proximal policy optimization algorithms.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' ArXiv, abs/1707.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content='06347, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Christian Sciuto, Kaicheng Yu, Martin Jaggi, Claudiu Musat, and Mathieu Salzmann.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Eval- uating the search phase of neural architecture search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the International Conference on Learning Representations (ICLR), 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Gresa Shala, Thomas Elsken, Frank Hutter, and Josif Grabocka.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Transfer NAS with meta- learned bayesian surrogates.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Sixth Workshop on Meta-Learning at the Conference on Neural Information Processing Systems, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Albert Shaw, Daniel Hunter, Forrest Landola, and Sammy Sidhu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Squeezenas: Fast neural architecture search for faster semantic segmentation.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In The IEEE International Confer- ence on Computer Vision (ICCV) Workshops, Oct 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Junhong Shen, Mikhail Khodak, and Ameet Talwalkar.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Efficient architecture search for diverse tasks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Yu Shen, Yang Li, Jian Zheng, Wentao Zhang, Peng Yao, Jixiang Li, Sen Yang, Ji Liu, and Cui Bin.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Proxybo: Accelerating neural architecture search via bayesian optimization with zero-cost proxies.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' arXiv preprint arXiv:2110.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content='10423, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Han Shi, Renjie Pi, Hang Xu, Zhenguo Li, James Kwok, and Tong Zhang.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Bridging the gap between sample-based and one-shot neural architecture search with bonas.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Jae-hun Shim, Kyeongbo Kong, and Suk-Ju Kang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Core-set sampling for efficient neural architecture search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' arXiv preprint arXiv:2107.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content='06869, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Yao Shu, Shaofeng Cai, Zhongxiang Dai, Beng Chin Ooi, and Bryan Kian Hsiang Low.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Nasi: Label-and data-agnostic neural architecture search at initialization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the International Conference on Learning Representations (ICLR), 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Yao Shu, Yizhou Chen, Zhongxiang Dai, and Bryan Low.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Neural ensemble search via bayesian sampling.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Uncertainty in Artificial Intelligence (UAI), 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Julien Siems, Lucas Zimmer, Arber Zela, Jovita Lukasik, Margret Keuper, and Frank Hut- ter.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Nas-bench-301 and the case for surrogate benchmarks for neural architecture search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' arXiv preprint arXiv:2008.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content='09777, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' 58 Neural Architecture Search: Insights from 1000 Papers David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Mastering the game of go with deep neural networks and tree search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Nature, 529(7587):484–489, 2016.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Master- ing the game of go without human knowledge.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Nature, 550(7676):354–359, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' David So, Quoc Le, and Chen Liang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' The evolved transformer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the International Conference on Machine Learning (ICML).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' PMLR, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' David R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' So, Wojciech Ma´nke, Hanxiao Liu, Zihang Dai, Noam Shazeer, and Quoc V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Le.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Primer: Searching for efficient transformers for language modeling, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Gowthami Somepalli, Micah Goldblum, Avi Schwarzschild, C Bayan Bruss, and Tom Gold- stein.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Saint: Improved neural networks for tabular data via row attention and contrastive pre-training.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' arXiv preprint arXiv:2106.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content='01342, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Dehua Song, Chang Xu, Xu Jia, Yiyi Chen, Chunjing Xu, and Yunhe Wang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Efficient resid- ual dense block search for image super-resolution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), volume 34, pages 12007–12014, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Jost Tobias Springenberg, Aaron Klein, Stefan Falkner, and Frank Hutter.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Bayesian opti- mization with robust bayesian neural networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), pages 4134–4142, 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Niranjan Srinivas, Andreas Krause, Sham Kakade, and Matthias Seeger.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Gaussian process optimization in the bandit setting: No regret and experimental design.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the 27th International Conference on Machine Learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Omnipress, 2010.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Kenneth O Stanley and Risto Miikkulainen.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Evolving neural networks through augmenting topologies.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Evolutionary computation, 10(2):99–127, 2002.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Kenneth O Stanley, David B D’Ambrosio, and Jason Gauci.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Jin Xu, Xu Tan, Renqian Luo, Kaitao Song, Jian Li, Tao Qin, and Tie-Yan Liu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Nas- bert.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, Aug 2021a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content='1145/3447548.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content='3467262.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' URL http://dx.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content='doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' 1145/3447548.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content='3467262.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Jin Xu, Xu Tan, Kaitao Song, Renqian Luo, Yichong Leng, Tao Qin, Tie-Yan Liu, and Jian Li.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Analyzing and mitigating interference in neural architecture search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the International Conference on Machine Learning (ICML).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' PMLR, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Jingjing Xu, Liang Zhao, Junyang Lin, Rundong Gao, Xu Sun, and Hongxia Yang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Knas: green neural architecture search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In International Conference on Machine Learning, pages 11613–11625.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' PMLR, 2021b.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Yuhui Xu, Lingxi Xie, Xiaopeng Zhang, Xin Chen, Guo-Jun Qi, Qi Tian, and Hongkai Xiong.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Pc-darts: Partial channel connections for memory-efficient architecture search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the International Conference on Learning Representations (ICLR), 2019b.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Shen Yan, Yu Zheng, Wei Ao, Xiao Zeng, and Mi Zhang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Does unsupervised architecture representation learning help neural architecture search?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Shen Yan, Kaiqiang Song, Fei Liu, and Mi Zhang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Cate: Computation-aware neural archi- tecture encoding with transformers.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the International Conference on Machine Learning (ICML), 2021a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Shen Yan, Colin White, Yash Savani, and Frank Hutter.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Nas-bench-x11 and the power of learning curves.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the Annual Conference on Neural Information Process- ing Systems (NeurIPS), 2021b.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Antoine Yang, Pedro M Esperan¸ca, and Fabio M Carlucci.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Nas evaluation is frustrat- ingly hard.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the International Conference on Learning Representations (ICLR), 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Lewei Yao, Hang Xu, Wei Zhang, Xiaodan Liang, and Zhenguo Li.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Sm-nas: Structural- to-modular neural architecture search for object detection.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Quanming Yao, Mengshuo Wang, Yuqiang Chen, Wenyuan Dai, Yu-Feng Li, Wei-Wei Tu, Qiang Yang, and Yang Yu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Taking human out of learning applications: A survey on automated machine learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' arXiv preprint arXiv:1810.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content='13306, 2018.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Yichun Yin, Cheng Chen, Lifeng Shang, Xin Jiang, Xiao Chen, and Qun Liu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Autotinybert: Automatic hyper-parameter optimization for efficient pre-trained language models.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In ACL, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' 64 Neural Architecture Search: Insights from 1000 Papers Chris Ying, Aaron Klein, Esteban Real, Eric Christiansen, Kevin Murphy, and Frank Hut- ter.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Nas-bench-101: Towards reproducible neural architecture search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the International Conference on Machine Learning (ICML), 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Kaicheng Yu, Rene Ranftl, and Mathieu Salzmann.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' How to train your super-net: An analysis of training heuristics in weight-sharing nas.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' arXiv preprint arXiv:2003.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content='04276, 2020.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Tong Yu and Hong Zhu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Hyper-parameter optimization: A review of algorithms and appli- cations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' arXiv preprint arXiv:2003.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content='05689, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Sergey Zagoruyko and Nikos Komodakis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Wide residual networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In British Machine Vision Conference, 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Russ R Salakhut- dinov, and Alexander J Smola.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Deep sets.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 2017.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Sheheryar Zaidi, Arber Zela, Thomas Elsken, Chris C Holmes, Frank Hutter, and Yee Teh.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Neural ensemble search for uncertainty estimation and dataset shift.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 34:7898–7911, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Amir R Zamir, Alexander Sax, William Shen, Leonidas J Guibas, Jitendra Malik, and Silvio Savarese.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Taskonomy: Disentangling task transfer learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3712–3722, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Arber Zela, Aaron Klein, Stefan Falkner, and Frank Hutter.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Towards automated deep learning: Efficient joint neural architecture and hyperparameter search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' arXiv preprint arXiv:1807.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content='06906, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Arber Zela, Thomas Elsken, Tonmoy Saikia, Yassine Marrakchi, Thomas Brox, and Frank Hutter.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Understanding and robustifying differentiable architecture search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the International Conference on Learning Representations (ICLR), 2020a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Arber Zela, Julien Siems, and Frank Hutter.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Nas-bench-1shot1: Benchmarking and dissect- ing one-shot neural architecture search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the International Conference on Learning Representations (ICLR), 2020b.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Chris Zhang, Mengye Ren, and Raquel Urtasun.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Graph hypernetworks for neural architec- ture search.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the International Conference on Learning Representations (ICLR), 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Haokui Zhang, Ying Li, Hao Chen, and Chunhua Shen.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Memory-efficient hierarchical neural architecture search for image denoising.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 3657–3666, 2020a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Miao Zhang, Steven W Su, Shirui Pan, Xiaojun Chang, Ehsan M Abbasnejad, and Reza Haffari.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' idarts: Differentiable architecture search with stochastic implicit gradients.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In International Conference on Machine Learning, pages 12557–12566.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' PMLR, 2021a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' 65 White, Safari, Sukthanker, Ru, Elsken, Zela, Dey and Hutter Muhan Zhang, Shali Jiang, Zhicheng Cui, Roman Garnett, and Yixin Chen.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' D-vae: A vari- ational autoencoder for directed acyclic graphs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the Annual Conference on Neural Information Processing Systems (NeurIPS), 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Yuge Zhang, Zejun Lin, Junyang Jiang, Quanlu Zhang, Yujing Wang, Hui Xue, Chen Zhang, and Yaming Yang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Deeper insights into weight sharing in neural architecture search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' arXiv preprint arXiv:2001.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content='01431, 2020b.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Ziwei Zhang, Xin Wang, and Wenwu Zhu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Automated machine learning on graphs: A survey.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' IJCAI Survey Track, 2021b.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' arXiv preprint arXiv:2103.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content='00742.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Huan Zhao, Lanning Wei, and Quanming Yao.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Simplifying architecture search for graph neural network.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' arXiv preprint arXiv:2008.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content='11652, 2020a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Yiren Zhao, Duo Wang, Xitong Gao, Robert Mullins, Pietro Lio, and Mateja Jamnik.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Prob- abilistic dual network architecture search on graphs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' arXiv preprint arXiv:2003.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content='09676, 2020b.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Yiyang Zhao, Linnan Wang, Kevin Yang, Tianjun Zhang, Tian Guo, and Yuandong Tian.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Multi-objective optimization by learning space partition.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In International Conference on Learning Representations, 2021a.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Yuekai Zhao, Li Dong, Yelong Shen, Zhihua Zhang, Furu Wei, and Weizhu Chen.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Memory- efficient differentiable transformer architecture search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Findings of the Association for Computational Linguistics, 2021b.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Dongzhan Zhou, Xinchi Zhou, Wenwei Zhang, Chen Change Loy, Shuai Yi, Xuesen Zhang, and Wanli Ouyang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Econas: Finding proxies for economical neural architecture search.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 11396–11404, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Kaichen Zhou, Lanqing Hong, Shoukang Hu, Fengwei Zhou, Binxin Ru, Jiashi Feng, and Zhenguo Li.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Dha: End-to-end joint optimization of data augmentation policy, hyper- parameter and architecture.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' arXiv preprint arXiv:2109.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content='05765, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Kaixiong Zhou, Qingquan Song, Xiao Huang, and Xia Hu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Auto-gnn: Neural architecture search of graph neural networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' arXiv preprint arXiv:1909.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content='03184, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Lucas Zimmer, Marius Lindauer, and Frank Hutter.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Auto-pytorch tabular: Multi-fidelity metalearning for efficient and robust autodl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Barret Zoph and Quoc V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Le.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/adFAT4oBgHgl3EQf4h70/content/2301.08727v1.pdf'} +page_content=' Neural architecture search with reinforcement learning.' 
diff --git a/atE1T4oBgHgl3EQfxAUb/vector_store/index.faiss b/atE1T4oBgHgl3EQfxAUb/vector_store/index.faiss new file mode 100644 index 0000000000000000000000000000000000000000..7022bca9157cf3bcba0fbc4ac9617cc0c3ed52de --- /dev/null +++ b/atE1T4oBgHgl3EQfxAUb/vector_store/index.faiss @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e1be8ba1c5a4dbe787391e06d2f2a8032a41ff9859e914201e7c4142efbbc377 +size 4522029 diff --git a/b9AyT4oBgHgl3EQf-PqC/content/tmp_files/2301.00889v1.pdf.txt b/b9AyT4oBgHgl3EQf-PqC/content/tmp_files/2301.00889v1.pdf.txt new file mode 100644 index 0000000000000000000000000000000000000000..e59878d29b3559a618aec579dbc2e3bfc174472e --- /dev/null +++ b/b9AyT4oBgHgl3EQf-PqC/content/tmp_files/2301.00889v1.pdf.txt @@ -0,0 +1,1885 @@ +An empirical process framework for covariate +balance in causal inference +Efrén Cruz Cortés +Michigan Institute for Data Science +Center for the Study of Complex Systems +University of Michigan
+encc@umich.edu +Kevin Josey +Department of Biostatistics +Harvard T.H. Chan School of Public Health +kjosey@hsph.harvard.edu +Fan Yang +Department of Biostatistics and Informatics +Colorado School of Public Health +fan.3.yang@cuanschutz.edu +Debashis Ghosh +Department of Biostatistics and Informatics +Colorado School of Public Health +debashis.ghosh@cuanschutz.edu +Abstract +We propose a new perspective for the evaluation of matching procedures by considering +the complexity of the function class they belong to. Under this perspective we provide +theoretical guarantees on post-matching covariate balance through a finite sample con- +centration inequality. We apply this framework to coarsened exact matching as well as +matching using the propensity score and suggest how to apply it to other algorithms. +Simulation studies are used to evaluate the procedures. +keywords: Causal effects, empirical distribution function, entropy metric, superpopulation, tail +inequality, Vapnik-Chervonenkis dimension. +1 +Introduction +Causal inference is a central goal for outcomes and policy research, particularly in the medical field. +Among the many topics in this broad field of study are methods for evaluating treatment effects +with non-randomized data. There is an abundance of observational data in nearly every discipline of +science. However, bias induced by confounding is inherent in observational studies. In this context, +the researcher must account for every potential confounder in some way before they can establish +causality. While randomization remains the gold-standard for inference, as there is no confounding +by definition, randomizing individuals into treatment groups is often cost prohibitive and sometimes +unethical for certain study designs. 
Under the potential outcomes framework (Neyman, 1923; Rubin, 1974), Rosenbaum and Rubin (1983) were able to describe how the propensity score plays a key role in causal effect estimation and inference with observational data. The propensity score is defined as the probability of receiving a treatment given a set of measured covariates. Under the strong ignorability assumption, the propensity score removes bias attributable to confounding due to its property as a balancing score (Rosenbaum and Rubin, 1983). With this result in mind, numerous methods for causal effect estimation were subsequently developed around the propensity score, with covariate balance serving as the primary objective (e.g., Imai and Ratkovic (2014); Zubizarreta (2015); Chan et al. (2016)). However, the results presented by Rosenbaum and Rubin (1983) about the propensity score are derived in an asymptotic setting. This means that estimates of the propensity score may not adequately balance the covariate distribution in finite samples. Therefore, many methods proceed by iterating between fitting a model for the propensity score and evaluating balance diagnostics on the propensity-score-adjusted covariates before estimating the treatment effect of interest. Some methods for evaluating balance diagnostics have been proposed by Ho et al. (2007) and Sekhon (2008). The propensity score literature has mostly diverged into two overlapping yet distinct domains: one that uses the propensity score to derive balancing weights (Hainmueller, 2012; Imai and Ratkovic, 2014; Chan et al., 2016) and the other that uses a balancing score, such as the propensity score, to construct a matched cohort.

arXiv:2301.00889v1 [math.ST] 2 Jan 2023

Recently, a multivariate matching approach using coarsened values of the observed covariates was developed by Iacus et al. (2011). They refer to their algorithm as coarsened exact matching.
One of the primary aims of their method was to eliminate the iterative step of re-matching participants until an acceptable amount of balance is achieved. Coarsened exact matching is quite simple in nature and proceeds using the following high-level heuristic:

1. For each confounding variable, coarsen it into a certain number of categories;

2. Create strata based on the possible combinations of the coarsened values;

3. Compute a causal effect by comparing the outcomes of the treatment groups within the strata and adjusting for the stratum effect appropriately.

The theoretical justification provided by Iacus et al. (2011) for coarsened exact matching is a concept they term monotonic imbalance. They show that bounding the distance between confounders to be small leads to matching procedures that are more flexible than procedures based on the equal percent bias reduction theory developed by Rubin and collaborators (Rubin, 1976; Rubin and Thomas, 1992; Rubin et al., 2006). One of the main advantages of coarsened exact matching is that it becomes amenable to large-scale database querying approaches to performing causal inference; see Salimi and Suciu (2016) as well as Wang et al. (2017).

However, fewer technical results exist for matching estimators than for other approaches, such as inverse probability weighting estimators. Abadie and Imbens (2006) studied the large-sample asymptotics of matching estimators and found that, in general, matching-based estimators of the average causal effect do not have the usual n^{1/2} convergence. The intuition is that the matching algorithm introduces a bias into causal effect estimation that does not vanish asymptotically. This bias term also increases with the number of confounders. Bias-corrected estimators have been proposed by Abadie and Imbens (2011).
Abadie and Imbens (2016) performed a theoretical study of the asymptotic behavior of average causal effect estimators that match using the estimated propensity score.

Conceptually, achieving covariate balance is a multivariate concept. If we let L(Z | T = 0) and L(Z | T = 1) denote the probability laws for the confounders conditional on treatment status then, ideally, as in the case of perfect randomization, these distributions are equal in some sense. We refer to this sense of equality as covariate balance.

Most covariate balance methods do not take the joint distribution of confounders into account but rather seek to match moments of the marginal distributions of the confounders. For example, Imai and Ratkovic (2014) proposed matching the first and second moments of the covariates in their algorithm. Practically, one-dimensional diagnostics such as mean comparisons of confounders between treatment groups or Kolmogorov-Smirnov statistics are used to evaluate balance. Wang and Zubizarreta (2019) have argued that, due to the inherent complexity in attempting to achieve multivariate balance, one should instead strive to achieve approximate balance between confounders.

In this paper, we propose a new theoretical approach to evaluating and understanding covariate balance. We introduce a distance metric to assess how close two multivariate distributions are to each other and define covariate balance as having zero distance. This metric is defined in terms of the function family the matching procedure belongs to. Subsequent assessment of balance relies on understanding the behavior of the function classes in question. We demonstrate the following in the current paper:

1. The use of function classes fits naturally with the use of probability metrics (Zolotarev, 1984) for comparing probability laws and, in this instance, multivariate distributions for confounders conditional on treatment.

2.
Results from empirical process theory (Van Der Vaart and Wellner, 1996; Kosorok, 2007) can subsequently be used to study the behavior of function classes and to make probabilistic statements on the rates of convergence of matching procedures under ideal balance.

3. Ideal balance provides a new theoretical out-of-sample justification for the methodology of Iacus et al. (2011) and can be used for the evaluation of other algorithmic strategies.

Based on the framework, one can view the techniques in this paper as being akin to developing a scalable strategy for achieving covariate balance that has relatively low complexity from the viewpoint described in Section 3.

2 Background and Preliminaries

2.1 Data Structures and Causal Estimands

Let the data be represented as (Y_i, T_i, Z_i), i = 1, \ldots, n, a random sample from the triple (Y, T, Z), where Y denotes the response of interest, T denotes the treatment group, and Z is a p-dimensional vector of covariates. We assume that T takes values in \{0, 1\}.

We now briefly review the potential outcomes framework (Rubin, 1974; Holland, 1986). Let \{Y(0), Y(1)\} denote the potential outcomes for all n subjects, and let the observed response be related to the potential outcomes by

Y = (1 - T)Y(0) + TY(1).

In the potential outcomes framework, causal effects are defined as within-individual contrasts based on the potential outcomes. One popularly used estimand is the average causal effect, defined as

ACE = \frac{1}{n} \sum_{i=1}^{n} (Y_i(1) - Y_i(0)).

Many assumptions are needed for performing valid causal inference. These include the consistency assumption, the treatment positivity assumption, and the strongly ignorable treatment assignment assumption (Rosenbaum and Rubin, 1983), defined as

T \perp \{Y(0), Y(1)\} \mid Z.    (2.1)

Assumption (2.1) means that treatment assignment is conditionally independent of the set of potential outcomes given the covariates.
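The definitions above can be illustrated with a minimal numeric sketch. The constant unit-level effect of 2 and all implementation details are illustrative assumptions, not part of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# potential outcomes with an assumed constant unit-level effect of 2
Y0 = rng.normal(size=n)
Y1 = Y0 + 2.0

# observed response: Y = (1 - T) Y(0) + T Y(1)
T = rng.integers(0, 2, size=n)
Y = (1 - T) * Y0 + T * Y1

# the average causal effect is a contrast of potential outcomes;
# under randomization the observed group-mean difference approximates it
ace = np.mean(Y1 - Y0)
```

Only one of Y(0) and Y(1) is ever observed per unit, which is why the assumptions below (consistency, positivity, ignorability) are needed to learn the ACE from (Y, T, Z) alone.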
Treatment positivity refers to 1 > P(T = 1 | Z) > 0 for all values of Z. Thus, the intuition is that any individual can potentially receive either treatment. Finally, the consistency assumption ensures that the observed outcome and the potential outcome under the observed treatment coincide.

As described recently by Imbens and Rubin (2015), causal inference proceeds by modelling the assignment mechanism using observed covariates. A quantity that naturally arises from this modelling is the propensity score (Rosenbaum and Rubin, 1983), the probability of receiving treatment given confounders. The propensity score is defined as

e(Z) = P(T = 1 | Z).

Given the treatment ignorability assumption in (2.1), it also follows by Theorem 3 of Rosenbaum and Rubin (1983) that treatment is strongly ignorable given the propensity score, i.e.,

T \perp \{Y(0), Y(1)\} \mid e(Z).

Based on these assumptions and definitions, we can formulate causal inference using the following approach: (a) define an appropriate causal estimand; (b) formulate a propensity score model; (c) check for covariate balance; (d) if (c) holds, estimate the causal estimand by conditioning on the propensity scores. We note that steps (b) and (c) tend to be iterative in practice. While the results in this paper pertain to propensity-matched analyses, they apply to more general matching strategies as well.

2.2 Previous results on covariate balance

In terms of covariate balance, a major class of theoretical results comes from work on equal percent bias reduction procedures (Rubin and Thomas, 1992, 1996). Equal percent bias reduction means that a certain type of covariate matching will reduce bias in all dimensions of Z by the same amount.

Define a matching method to be affinely invariant if the matching procedure is invariant to affine transformations of the covariates. If Z given T is assumed to have a so-called elliptically symmetric distribution, then Theorem 3.1 and Corollaries 3.1
and 3.2 of Rubin and Thomas (1992) apply, so that any affinely invariant matching method will be equal percent bias reducing. Examples of elliptically symmetric distributions include the multivariate normal and t distributions. While elliptical symmetry of the confounders given treatment group is a restrictive assumption, this was relaxed in more recent work by Rubin et al. (2006). There, they assumed that the conditional distribution of Z given T is a discriminant mixture of elliptically symmetric distributions. Rubin et al. (2006) prove that a generalization of equal percent bias reducing holds for this setup as well.

Thus, for equal percent bias reducing methods, we have a guarantee that attempting to increase balance in one variable will not lead to distortions in balance for other variables. However, the assumptions needed for equal percent bias reducing to hold seem restrictive in practice. Iacus et al. (2011) took another approach by focusing on in-sample covariate discrepancies and requiring that the maximum discrepancy in sample means between treated and control subjects be bounded above by a constant. They generalize this to arbitrary functions of the data, which they term imbalance bounding, and define monotonic imbalance bounding matching methods to be those in which the discrepancies between a monotonic function applied to a variable are bounded above by a confounder-specific term. Thus, one can be more stringent in the balance of one variable without impacting the maximal imbalance across all confounders.

There are many important implications of requiring the monotonic imbalance bounding property. First, many methods of confounder adjustment, such as nearest-neighbor or caliper matching as defined in Cochran and Rubin (1973), are not monotonic imbalance bounding because they fix the number of treated and control observations within strata, while monotonic imbalance bounding methods imply variable numbers of observations.
By contrast, if the caliper matching procedure were to allow for different calipers for each confounder, then it would be monotonic imbalance bounding.

Iacus et al. (2011) also show that a key goal in causal effect estimation is to reduce model dependence (Ho et al., 2007), meaning that there should not be extrapolation of potential outcomes to regions in the covariate space where there are no observations. Under some assumptions on the model for potential outcomes, they show that for monotonic imbalance bounding methods, the model dependence is upper bounded by terms involving an imbalance parameter. In addition, the estimation error for average causal effects using monotonic imbalance bounding matching methods can also be upper bounded by terms involving this parameter.

As a concrete example of a new monotonic imbalance bounding method, Iacus et al. (2011) propose a coarsened exact matching algorithm for creating strata. It proceeds as follows:

1. For each variable Z_j (j = 1, \ldots, p), coarsen it into a function C_j(Z_j) which takes on fewer values than the unique values of Z_j;

2. Perform exact matching between treated and control observations using the vector (C_1(Z_1), C_2(Z_2), \ldots, C_p(Z_p)). This effectively creates strata S_1, \ldots, S_J based on the unique combinations of (C_1(Z_1), C_2(Z_2), \ldots, C_p(Z_p));

3. Discard strata in which there are only observations with T = 0. For strata with only observations from the T = 1 population, either extrapolate the potential outcome Y(0) using the available controls, or discard them by restricting attention to the causal effect on the treated units for which the effect can be identified without further model-based assumptions. For strata with both treated and control observations, compare the outcomes between the two populations.

Iacus et al. (2011) have developed very easy-to-use software packages for implementing coarsened exact matching in R and Stata.
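The three steps of coarsened exact matching can be sketched as follows. This is a minimal illustration, not the cem package's implementation: equal-width bins stand in for the user-chosen coarsenings C_j, and strata are weighted by their number of treated units:

```python
import numpy as np

def coarsen(Z, n_bins=4):
    """Step 1: coarsen each column of Z into equal-width bins
    (one simple choice of the coarsening functions C_j)."""
    out = np.empty_like(Z, dtype=int)
    for j in range(Z.shape[1]):
        edges = np.linspace(Z[:, j].min(), Z[:, j].max(), n_bins + 1)
        out[:, j] = np.digitize(Z[:, j], edges[1:-1])  # labels 0..n_bins-1
    return out

def cem_effect(Y, T, Z, n_bins=4):
    """Steps 2-3: stratify on the coarsened vector, discard strata lacking
    either treatment group, and combine within-stratum mean differences,
    weighting each stratum by its treated count."""
    strata = [tuple(row) for row in coarsen(Z, n_bins)]
    effects, weights = [], []
    for s in set(strata):
        idx = np.array([k for k, key in enumerate(strata) if key == s])
        t, c = idx[T[idx] == 1], idx[T[idx] == 0]
        if len(t) == 0 or len(c) == 0:
            continue  # discard strata without both groups
        effects.append(Y[t].mean() - Y[c].mean())
        weights.append(len(t))
    return np.average(effects, weights=weights)
```

On randomized data with a constant treatment effect, the stratified estimate recovers the effect up to sampling noise; in observational data it removes only the confounding captured by the coarsened covariates.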
They show that the coarsened exact matching approach satisfies the monotonic imbalance bounding property with respect to a variety of functionals of interest. In addition, they provide a very intuitive explanation of what coarsened exact matching attempts to mimic. While classical propensity score approaches attempt to mimic a randomized study, analyses using coarsened exact matching mimic randomized block designs, where the blocks are by definition predictive of the potential outcomes. It is well known that in this situation randomized block designs yield more efficient estimators (e.g., Box, Hunter and Hunter, 1978).

The other approach that has become of recent interest has been to incorporate covariate balance as part of the causal effect estimation process. For example, Imai and Ratkovic (2014) propose using generalized methods of moments for causal effect estimation in which covariate balance is treated as a constraint in the procedure. Chan et al. (2016) propose the use of calibration estimators for causal effect estimation in which covariate balance constraints lead to a constrained Lagrangian dual optimization problem. For these approaches, the authors are able to develop consistency and asymptotic normality results for the causal effect estimators.

As described in more detail in Section 3.1, we will be using an integral probability metric to assess covariate balance between the two populations. In Kallus (2020) a similar metric is used. They define such a metric as the target error to be minimized for obtaining optimal weighting coefficients when estimating the sample average treatment effect on the treated. While our approaches are complementary, there are several notable differences. First, in Kallus (2020), they use their metric to find weights that correspond to known matching methods. The functions involved in their metric represent the expected relationship between potential outcomes and covariates.
In our case, we take any matching procedure and, given the measure of match, bound it by the probability metric involving functions representing the matching procedure itself, and provide probability bounds on how good the matching is. In addition, in Kallus (2020), they assume a fixed population and therefore no randomness in covariate values, while our concern indeed focuses on the sample distribution of these covariates. The difference between these two approaches is further explained in Section 2.3.

2.3 Modes of inference and covariate balance

In looking at the various proposals for accommodating covariate balance, it is useful to reconsider the ways in which one can perform causal inference. Imbens and Rubin (2015) have a nice overview of the distinction between finite-population and superpopulation modes of causal inference. The finite-population mode of causal inference treats the sampled units as the population of interest. The stochastic nature of the experiment is due solely to the treatment mechanism, so that randomness occurs only with respect to the treatment assignments. If one adopts the finite-sample point of view for causal inference, then one can use a randomization-based approach to performing inference for causal effects.

By contrast, the superpopulation mode of inference considers two sources of variability. The first is due to the randomness in the treatment assignments, and the second is due to the fact that the sampling units are a random sample from a superpopulation. Thus, this approach posits a superpopulation from which the sampling units come.

Revisiting the previous work from Section 2.2, the equal percent bias reduction theory and the work of Iacus et al. (2011) posit results about covariate balance assuming a finite-population mode for causal inference.
Thus, covariate balance results for these methods involve subsampling and matching from the sampling units, and the balance occurs with respect to the matched sample. The concept of balance we introduce in the next section can accommodate both modes of inference.

3 Main Results

3.1 Ideal Balance

In this section, we wish to study covariate balance from the viewpoint of comparing the distributions L(Z | T = 0) and L(Z | T = 1). To do so, we must determine how this comparison is done. We begin by defining probability pseudometrics.

Definition 3.1 (Pseudometric). Let A be the set of probability measures defined on a shared measurable space. A function m : A \times A \to [0, \infty) is a pseudometric on A if, for all \mu, \nu, \lambda \in A, the following conditions are satisfied:

1. m(\mu, \mu) = 0.
2. m(\mu, \nu) = m(\nu, \mu).
3. m(\mu, \nu) \le m(\mu, \lambda) + m(\lambda, \nu).

Note that these properties almost make m a metric on A, except that we do not assume that two elements at distance zero are the same. For the purposes of this paper, we will abuse terminology and refer to pseudometrics as metrics.

The class of metrics we will work with in this article is given by

\gamma_F(\mu, \nu) = \sup_{f \in F} \left| \int f \, d\mu - \int f \, d\nu \right|,    (3.1)

where F is a class of functions. In (3.1), \gamma_F(\mu, \nu) is referred to by Zolotarev (1984) as an example of a probability metric. In our notation, we drop the dependency of \gamma_F on F and write it as \gamma. We now define ideal balance in terms of (3.1).

Definition 3.2 (Ideal Balance). Let \mu and \nu be distributions on the same probability space and m a pseudometric; then we say \mu and \nu satisfy ideal balance with respect to m if m(\mu, \nu) = 0.

When \mu and \nu are the conditional distributions of the covariates given the treatment group, as in Section 2, ideal balance is a restriction on the population. If these are instead the empirical distributions of the data, ideal balance is a sample restriction.
Matching methods, in a sense, intend to achieve ideal balance on the matched data for some m.

Note that at this stage we have only dealt with population distributional laws and have not described how to estimate or compute these quantities with real data. In practice, we would not expect ideal balance to hold in observational studies. However, it does serve as a useful benchmark through which we can study the behavior of various functional constraints. Here, the function spaces F in (3.1) play the role of the constraints; more complex function spaces correspond to more constraints on the joint distributions of Z | T = 1 and Z | T = 0.

3.2 A Concentration Inequality Result

Let F be a function space and \| \cdot \| a norm. The covering number N(\epsilon, F, \| \cdot \|) is the minimum number of \| \cdot \|-balls of radius \epsilon needed to cover F, where a ball centered at f \in F is the set \{g : \|f - g\| \le \epsilon\}. Intuitively, one can think of the covering number as a measure of the complexity of the function class F. For a measure \mu, the L_r(\mu)-norm, for r \ge 1, is defined by \|f\|_{L_r(\mu)}^r = \int |f|^r \, d\mu. Throughout the paper, we will assume F is uniformly bounded. Note that if \mu is any probability measure, then under uniform boundedness we can endow F with the norm L_r(\mu) without dropping any of its elements. Unless otherwise specified, we assume the range of the functions in F is [0, 1]. Finally, for a function class F, an envelope function of F is any function h such that the inequality

|f(x)| \le |h(x)|

is satisfied for all f \in F and all x.

Let \{Z_i\}_{i=1}^n be a sample where each Z_i has distribution Q. We denote the empirical distribution by Q_n. The F-indexed empirical process G_n^Q is defined as the map taking any f \in F to

G_n^Q(f) = \sqrt{n} \left( \int f \, dQ_n - \int f \, dQ \right) = \frac{1}{\sqrt{n}} \sum_{i=1}^{n} \left( f(Z_i) - \int f \, dQ \right).

Theorem 3.3.
Let Q^0_{n_0} and Q^1_{n_1} be two empirical distributions of observations sampled from Q^0 and Q^1, respectively, and assume ideal balance holds for Q^0 and Q^1 with respect to \gamma. Let M be the collection of probability measures. If there exist constants C and K such that F satisfies

\sup_{\mu \in M} N(\epsilon, F, \| \cdot \|_{L_r(\mu)}) \le \left( \frac{K}{\epsilon} \right)^C,

for every 0 < \epsilon < C, then

Pr\{\gamma(Q^0_{n_0}, Q^1_{n_1}) > \delta\} \le \left( \frac{D\delta}{2\sqrt{C}} \right)^C \left( n_0^{C/2} \exp(-n_0 \delta^2 / 2) + n_1^{C/2} \exp(-n_1 \delta^2 / 2) \right),    (3.2)

where D is a constant depending on K only.

The proofs of Theorem 3.3 and subsequent results are found in the supplementary material. Throughout the paper, we will write B_n(\delta, D, C) for the bound in Theorem 3.3, where the subscript n reminds us of the dependence on the sample size.

Remark 3.4. We note that the bound in (3.2) is nonasymptotic and will hold for any sample size.

Remark 3.5. In this framework, the function classes play an important role. Theorem 3.3 gives a bound in terms of the entropy number of the function class in question. In particular, low-complexity function classes are favored by this approach. A key technical point is ensuring that the covering number condition in the theorem is satisfied. To do so, we will primarily use results from Vapnik-Chervonenkis theory (Chervonenkis and Vapnik, 1971) to determine appropriate covering numbers.

In most cases the function classes of interest are not real-valued but vector-valued. The following straightforward results can be used to deal with these cases.

Lemma 3.6. Let \{F_i\}_{i=1}^d be a collection of real-valued function spaces and let (P^i, Q^i) satisfy ideal balance under \gamma_{F_i} for each 1 \le i \le d. Let (P_i, Q_i) denote their respective empirical distributions, with implicit sample size dependence. Then

Pr\left\{ \sum_{i=1}^d \gamma_{F_i}(P_i, Q_i) > \delta \right\} \le \sum_{i=1}^d B(\delta/d, D_i, C_i).

Now, consider the collection \{F_i\}_{i=1}^d, where each F_i is a real-valued function space. Define F = \{f = (f_1, \ldots, f_d)^T \mid f_i \in F_i \text{ for all } i\}.
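The right-hand side of the tail bound (3.2) is easy to evaluate numerically. In this rough sketch, D and C are placeholder values: in the theorem they are determined by the covering number of the function class, not chosen by the user:

```python
import math

def tail_bound(delta, n0, n1, D=1.0, C=2.0):
    """Evaluate (D*delta / (2*sqrt(C)))^C *
       [ n0^{C/2} exp(-n0 delta^2 / 2) + n1^{C/2} exp(-n1 delta^2 / 2) ],
    the right-hand side of the Theorem 3.3 bound.
    D and C are illustrative placeholders for the theorem's constants."""
    lead = (D * delta / (2.0 * math.sqrt(C))) ** C
    tail = (n0 ** (C / 2)) * math.exp(-n0 * delta ** 2 / 2) \
         + (n1 ** (C / 2)) * math.exp(-n1 * delta ** 2 / 2)
    return lead * tail
```

For a fixed threshold delta, the exponential terms dominate the polynomial factors, so the bound decays rapidly as both sample sizes grow, which is the nonasymptotic guarantee noted in Remark 3.4.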
Let \pi_\ell be the \ell-th coordinate projection, that is, for a finite-dimensional vector x = (x_1, \ldots, x_d), \pi_\ell(x) = x_\ell. Finally, define F_\pi = \{\pi_\ell \circ f \mid f \in F, 1 \le \ell \le d\}. Note that the elements of F_\pi are real-valued. The following lemma tells us we can either assume \mu and \nu satisfy ideal balance with respect to each of the \gamma_{F_i}, or that they satisfy ideal balance with respect to \gamma_{F_\pi}.

Lemma 3.7. Let F, \{F_i\}_{i=1}^d, and F_\pi be as above, and let \mu and \nu denote two probability measures. Then the following are equivalent:

1. \mu and \nu satisfy ideal balance with respect to \gamma_{F_\pi};
2. \mu and \nu satisfy ideal balance with respect to each \gamma_{F_i}, 1 \le i \le d;
3. \max_i \gamma_{F_i}(\nu, \mu) = 0.

The following corollary will be very useful.

Corollary 3.8. Let F and F_\pi be as above, with F_i = F^* for all i. Assume F^* has polynomial covering number. Let \{X^0_j\}_{j=1}^{n_0} \sim Q^0 and \{X^1_j\}_{j=1}^{n_1} \sim Q^1, where Q^0 and Q^1 satisfy ideal balance with respect to \gamma_{F_\pi}. Fix f^* \in F. Then

Pr\left\{ \left\| \frac{1}{n_0} \sum_{j=1}^{n_0} f^*(X^0_j) - \frac{1}{n_1} \sum_{j=1}^{n_1} f^*(X^1_j) \right\|_{\ell_p} > \delta \right\} \le d \, B(\delta / d^{1/p}, D^*, C^*),

for finite p \ge 1, and

Pr\left\{ \left\| \frac{1}{n_0} \sum_{j=1}^{n_0} f^*(X^0_j) - \frac{1}{n_1} \sum_{j=1}^{n_1} f^*(X^1_j) \right\|_{\ell_\infty} > \delta \right\} \le d \, B(\delta, D^*, C^*),

where D^* and C^* depend only on F^*.

Definition 3.9 (Vapnik-Chervonenkis Dimension). The Vapnik-Chervonenkis dimension of a function class F on an ambient set X is the cardinality of the largest subset of X shattered by F. A function class F shatters a set S \subset X if for each possible 0-1 labeling of the elements of S there is at least one function f \in F that realizes that labeling.

A key result we will use is an application of Theorem 2.6.7 of Van Der Vaart and Wellner (1996), which implies that if a function class G has finite Vapnik-Chervonenkis dimension v, then

\sup_{\mu} N(\epsilon, G, L_2(\mu)) \le \left( \frac{K}{\epsilon} \right)^{C^*},

where C^* = 2v - 2.

4 Examples

4.1 Balance on coarsened function classes

Consider coarsened exact matching as described in Iacus et al. (2011).
Let Z^0 = \{Z^0_i\}_{i=1}^{n_0} and Z^1 = \{Z^1_j\}_{j=1}^{n_1} be the control and treatment samples, respectively. In coarsened exact matching we create a partition of the sample space, match samples that fall in the same element of the partition, and discard samples in subsets without samples from the opposite group. We are interested in the quantity

\Delta = \frac{1}{m_0} \sum_{i \in M_0} w^0_i Z^0_i - \frac{1}{m_1} \sum_{j \in M_1} w^1_j Z^1_j,

where m_\ell is the number of matched samples for the \ell-th group, M_\ell is its index set, and \{w^0_i, w^1_j\}_{i \in M_0, j \in M_1} are weights.

In the supplementary material we describe how to express this matching procedure as a function f of the variables Z^0_i and Z^1_j. This allows us to express \Delta in terms of f. We further specify the function space F for which

\|\Delta\| \le \gamma_F(Q^0_{n_0}, Q^1_{n_1})

holds for an appropriate norm. Using the properties of F and the bound above, we can derive our results of interest:

Pr(|\Delta_k| \ge \delta) \le B(\delta, D, C^*),

for a constant C^*, where \Delta_k is the k-th component of \Delta. Similarly,

Pr(\|\Delta\|_{\ell_p} \ge \delta) \le d \, B(\delta / d^{1/p}, D, C^*)

and

Pr(\|\Delta\|_{\ell_\infty} \ge \delta) \le d \, B(\delta, D, C^*).

4.2 Covariate balance on the linear propensity score

As discussed in Section 3, there has been much work on developing matching results based on linear discriminant analysis. That is, we assume that P(Z | T = \ell) follows N(\mu_\ell, \Sigma). Under this model, the metric for consideration is the logit of the propensity score (see Stuart (2010)). In the supplementary material we show that the distance |logit(e(Z)) - logit(e(Z'))| can be expressed in terms of the linear discriminant analysis hyperplane vector. Indeed, if p is the dimension of the covariates, we can construct a function space F derived from hyperplanes, with Vapnik-Chervonenkis dimension p + 1, such that

\Delta = \left| \frac{1}{m_0} \sum_{i \in M_0} logit(e(Z_i)) - \frac{1}{m_1} \sum_{j \in M_1} logit(e(Z_j)) \right| \le \gamma_F(Q^0_{n_0}, Q^1_{n_1}),

allowing us, using Theorem 3.3, to determine the bound of interest:

Pr\{\Delta > \delta\} \le B(\delta, D, 2p).
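Under the linear discriminant analysis model, logit e(Z) is linear in Z with slope \Sigma^{-1}(\mu_1 - \mu_0), so the Section 4.2 diagnostic \Delta can be computed from plug-in estimates. The sketch below is our own minimal illustration of that computation (applied to the full samples rather than to a particular matched subset):

```python
import numpy as np

def lda_logit_scores(Z, T, prior_t=0.5):
    """Plug-in logit propensity score under the LDA model
    P(Z | T = l) = N(mu_l, Sigma): logit e(Z) = beta'Z + c,
    with beta = Sigma^{-1}(mu1 - mu0) and Sigma pooled."""
    Z0, Z1 = Z[T == 0], Z[T == 1]
    mu0, mu1 = Z0.mean(axis=0), Z1.mean(axis=0)
    # pooled estimate of the shared covariance Sigma
    S = ((len(Z0) - 1) * np.cov(Z0, rowvar=False)
         + (len(Z1) - 1) * np.cov(Z1, rowvar=False)) / (len(Z) - 2)
    beta = np.linalg.solve(S, mu1 - mu0)
    c = -0.5 * (mu1 + mu0) @ beta + np.log(prior_t / (1 - prior_t))
    return Z @ beta + c

def logit_balance(Z, T):
    """The Section 4.2 diagnostic: absolute difference of group
    means on the logit propensity score scale."""
    s = lda_logit_scores(Z, T)
    return abs(s[T == 1].mean() - s[T == 0].mean())
```

On randomized covariates this diagnostic is near zero (approximate ideal balance), and it grows when the treated-group covariate means are shifted.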
4.3 Covariate balance using kernels

Many authors (Hazlett, 2016; Wong and Chan, 2018; Zhu et al., 2018) have advocated the use of kernel methods for matching and evaluating covariate balance. This corresponds to assuming that F in (3.1) is a reproducing kernel Hilbert space. Further details about these function spaces can be found in the supplementary material.

To apply Theorem 3.3 in the kernel setting, we note that there exists a version of the linear discriminant analysis of Section 4.2 that can be extended to the reproducing kernel Hilbert space setting (Baudat and Anouar, 2000). Let H be a reproducing kernel Hilbert space and \| \cdot \|_H the norm associated with it. A natural metric to consider for a kernelized matching procedure is

\Delta_H = \left\| \frac{1}{m_0} \sum_{i \in M_0} f(Z_i) - \frac{1}{m_1} \sum_{j \in M_1} f(Z_j) \right\|_H,

which represents a functional generalization of \Delta from Section 4.2, where f \in H is an appropriate function chosen by the user. Then \Delta_H \le \gamma_F(Q^0_{n_0}, Q^1_{n_1}), and we can use the previous results with a few adjustments. We show in the supplementary material that

P(\Delta_H > \delta) \le B(\delta, D, C^*),

where C^* depends on the smoothness properties of H.

5 Practical implementation

So far, we have given theoretical results that describe how algorithms using various function classes behave under the ideal balance assumption. As noted earlier, the ideal balance definition is strict but permits theoretical characterization of various algorithms. The question then naturally arises as to how to use the theoretical results of the previous sections in practice.

Note that one can view the metric in equation (3.1) as a multivariate balance metric, which differentiates it from many other balance metrics in the literature. Zhu et al. (2018) used (3.1), where F is a reproducing kernel Hilbert space, as a covariate balance diagnostic.
There, they found that in certain situations the diagnostic was more sensitive in detecting covariate imbalances than univariate diagnostics as well as diagnostics based on the prognostic score (Hansen, 2008).

Consider the problem of estimating the average causal effect among the treated. In practice, it is unlikely that ideal balance will hold for the treatment and control populations. That is to say, \gamma_F(Q^0, Q^1) \ne 0 unless treatment is randomized. Therefore, we would not be able to use Theorem 3.3 in an observational study. However, a slight modification can be made for which the analysis remains largely the same.

Let w \in W \subset R^{n_0} be a weight vector and define

Q^0_w = \frac{1}{\sum_{i : T_i = 0} w_i} \sum_{i : T_i = 0} w_i \delta_{X_i}.

The majority of methods in causal inference have as a goal to find appropriate weights w for which Q^0_w converges to Q^* for some distribution Q^* that indeed satisfies ideal balance with Q^1, that is, for which \gamma_F(Q^*, Q^1) = 0. In order for this modification to be feasible, we just need to modify our proof of Theorem 3.3 and include the convergence rate of Q^0_w to Q^*, which may change depending on the problem. Having done so, we continue in a parallel manner.

Let f^* \in F represent a matching procedure with balance diagnostic

\Delta = \left| \int f \, dQ^0_w - \int f \, dQ^1_{n_1} \right|;

then, by the definition of \gamma_F,

\Delta \le \gamma_F(Q^0_w, Q^1_{n_1}).

Therefore, if we can find weights for which Q^0_w converges to Q^* and \gamma_F(Q^*, Q^1) = 0, then we can bound the probability that \Delta exceeds some threshold \delta.

There are many methods for finding w \in W, the most straightforward being the inverse probability of treatment weights,

w_i = T_i + \frac{e(Z_i)(1 - T_i)}{1 - e(Z_i)}.

Even heavily prescribed matching algorithms found throughout the causal inference literature produce some weights w \in W, as described by Abadie and Imbens (2006). In one-to-one matching with replacement, let J(i) = \{j_1(i), j_2(i), \ldots
\} be the set of indices of units that are matched with unit i = 1, 2, \ldots, n. If there are no ties, then J(i) = j(i). With ties present, which occur frequently, especially with exact matching (see coarsened exact matching), J(i) might contain multiple matched indices. The matching process then produces weights for every unit by solving

w_i = \sum_{\{l : T_l = 1\}} \frac{I[i \in J(l)]}{\#J(l)} \quad \text{for all } i \in \{i : T_i = 0\},

where \#J(i) denotes the cardinality of J(i).

6 Simulation Studies

We perform a simulation study to evaluate the distribution of the distances reported in Section 4. We also examine their downstream consequences for estimating average treatment effects on the treated. There are two data generating mechanisms that we consider. In addition, we vary the sample size and the variance of the responses for a total of eight scenarios. We replicate each of these scenarios, described below, over 1000 iterations. We report the mean and Monte Carlo standard errors of the three distances (\Delta) examined in Section 4 (Table 1), along with the kernel density estimates for one representative scenario (Figure 1). We also evaluate the downstream effects of these \Delta statistics on the average treatment effect using the one-to-one matching methods described by Abadie and Imbens (2006), as implemented in the Matching package (Sekhon, 2008) (Tables 2 and 6).

For i = 1, 2, \ldots, n, let Z_{i1} \sim N(1, 4), Z_{i2} \sim Bin(1, 0.3), Z_{i3} \sim N(0, 1), and Z_{i4} \sim Bin(1, 0.5), where T_i denotes the binary treatment assignment. The conditional means of the outcomes for the treated, \mu_1(Z_i), and the controls, \mu_0(Z_i), are constructed as

\mu_0(Z_i) = 10 - 3Z_{i1} - Z_{i2} + Z_{i3} + 3Z_{i4} \quad \text{and} \quad \mu_1(Z_i) = \mu_0(Z_i) + 5 + 3Z_{i1} - Z_{i2} + Z_{i3} - 3Z_{i4}.    (6.1)

We sample T_i from a Bin(1, 0.5) distribution. For i = 1, 2, \ldots, n, we sample the counterfactual responses Y_i(1) \sim N[\mu_1(Z_i), \sigma^2] and Y_i(0) \sim N[\mu_0(Z_i), \sigma^2]. The observed outcome is Y_i = T_i Y_i(1) + (1 - T_i) Y_i(0).
We will refer to these conditions with the label "baseline". For the error variance, we set $\sigma^2 \in \{5, 10\}$.

For the scenario labeled "sparse", we include an additional set of covariates that ultimately do not affect the outcome. The outcomes are determined by the potential outcome models in (6.1), yet the methods we consider also account for the noise covariates $Z_{i5} \sim N(-1, 4)$, $Z_{i6} \sim \mathrm{Bin}(1, 0.7)$, $Z_{i7} \sim N(0, 1)$, and $Z_{i8} \sim \mathrm{Bin}(1, 0.5)$.

As mentioned before, we test the three examples described in Section 4 in their ability to produce efficient, unbiased estimates of the average treatment effect on the treated. Linear discriminant analysis sets $f$ to be the logit transformation of the fitted posterior probability that each unit receives treatment. The support vector machine example uses the distance of each point from the resulting separating hyperplane, assuming a linear kernel. Coarsened exact matching is performed similarly to what is described in Iacus et al. (2011) and is implemented with the cem R package.

Table 1 shows the results of our simulation experiment. Since balance is already achieved through randomization in this simulation, we also report the unmatched, crude estimate of the average causal effect for reference. Here the value $\Delta$ is the maximum absolute sample mean difference for the unweighted covariates.

The values of $\Delta$ are not necessarily directly comparable across methods in this example, but they do represent the distributions whose tail probabilities we are bounding in the theorem. The simulation serves to characterize some of the densities of these statistics so that we might better understand which values of $\delta$ are acceptable for the different balance methods in Section 4.
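The unweighted diagnostic just described (Method A in the tables that follow), together with the weighted diagnostic $\Delta = |\int f\,dQ^0_w - \int f\,dQ^1_{n_1}|$ from the previous section using the inverse-probability-of-treatment weights, can be sketched in a few lines of NumPy. The function names are ours.

```python
import numpy as np

def max_abs_mean_diff(Z, T):
    # Method A: maximum absolute difference in unweighted covariate
    # sample means between treated (T = 1) and control (T = 0) units.
    Z, T = np.asarray(Z, float), np.asarray(T)
    return np.max(np.abs(Z[T == 1].mean(axis=0) - Z[T == 0].mean(axis=0)))

def weighted_delta(f_vals, T, e):
    # Delta = | int f dQ0_w - int f dQ1_{n1} | with the ATT weights
    # w_i = T_i + e(Z_i)(1 - T_i)/(1 - e(Z_i)): treated units get weight
    # one, controls get the odds e/(1 - e) of their propensity score.
    f_vals, T, e = (np.asarray(a, float) for a in (f_vals, T, e))
    w = e[T == 0] / (1 - e[T == 0])
    q0 = np.sum(w * f_vals[T == 0]) / np.sum(w)   # weighted control mean
    q1 = f_vals[T == 1].mean()                    # treated sample mean
    return abs(q0 - q1)
```

Here `f_vals` holds the values $f(X_i)$ of whichever balancing function is being diagnosed (a single covariate, a fitted logit score, or an SVM decision value).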
We see that the values of $\Delta$ after coarsened exact matching were the most heavily concentrated, followed closely by the values generated by linear discriminant analysis. The balance diagnostics from a support vector machine and from an unweighted comparison yielded considerably more dispersed values.

n     σ²   Scenario   θ     A            B            C            D
1000  5    baseline   6.2   0.11 (0.07)  0.03 (0.02)  0.02 (0.01)  0.09 (0.04)
1000  5    sparse     6.2   0.15 (0.07)  0.01 (0.01)  0.03 (0.02)  0.13 (0.05)
1000  10   baseline   6.2   0.12 (0.07)  0.03 (0.02)  0.02 (0.01)  0.09 (0.05)
1000  10   sparse     6.2   0.15 (0.07)  0.01 (0.01)  0.03 (0.02)  0.13 (0.05)
2000  5    baseline   6.2   0.08 (0.05)  0.02 (0.01)  0.01 (0.01)  0.06 (0.03)
2000  5    sparse     6.2   0.11 (0.05)  0.01 (0.01)  0.02 (0.01)  0.09 (0.04)
2000  10   baseline   6.2   0.08 (0.05)  0.02 (0.01)  0.01 (0.01)  0.06 (0.03)
2000  10   sparse     6.2   0.11 (0.05)  0.01 (0.01)  0.02 (0.01)  0.09 (0.04)

Table 1: Average and Monte Carlo standard error of $\Delta$ found in the experiment. In this table, Method A is the unweighted estimate, Method B refers to coarsened exact matching, Method C to linear discriminant analysis, and Method D to support vector machines. Since both A and B create a vector-valued $\Delta$, we report the maximum.

One point of direct comparison between the different $\Delta$ estimates is the downstream effect of the various balancing methods on estimating the average treatment effect. This portion of the simulation study shows how the concentration of the distribution of $\Delta$ may have little to do with the actual quality of the average treatment effect estimates, the ultimate object of interest in causal inference. Although the concentration of the distribution of $\Delta$ under coarsened exact matching was narrower than the densities of $\Delta$ under linear discriminant analysis and support vector machines, the estimated average treatment effect is also the most biased.
The Monte Carlo standard errors also appear greater than those of the other two balance methods. Linear discriminant analysis also conferred a narrow concentration of $\Delta$ statistics, yet produced the most efficient estimates of the average treatment effect apart from the unweighted estimate, which had the smallest Monte Carlo standard errors. This result is interesting because the unweighted diagnostics had the most dispersed values of $\Delta$. This leads us to believe that the scale of the $\Delta$ statistics must be carefully considered while evaluating balance in order to determine which method is most suitable for evaluating treatment effects.

n     σ²   Scenario   θ     A            B            C            D
1000  5    baseline   6.2   6.20 (0.33)  6.24 (0.33)  6.20 (0.42)  6.20 (0.36)
1000  5    sparse     6.2   6.20 (0.34)  6.29 (1.24)  6.21 (0.45)  6.20 (0.39)
1000  10   baseline   6.2   6.20 (0.37)  6.22 (0.40)  6.20 (0.47)  6.20 (0.42)
1000  10   sparse     6.2   6.19 (0.35)  6.31 (1.46)  6.20 (0.46)  6.22 (0.42)
2000  5    baseline   6.2   6.19 (0.24)  6.21 (0.24)  6.20 (0.29)  6.20 (0.25)
2000  5    sparse     6.2   6.20 (0.23)  6.34 (0.71)  6.21 (0.29)  6.21 (0.26)
2000  10   baseline   6.2   6.21 (0.25)  6.21 (0.26)  6.19 (0.32)  6.21 (0.28)
2000  10   sparse     6.2   6.21 (0.25)  6.38 (0.79)  6.21 (0.31)  6.21 (0.27)

Table 2: Summary of simulation estimates and Monte Carlo standard errors. The simulation scenarios corresponding to "baseline" and "sparse" are described in further detail in Section 6. Here, $\theta$ refers to the population average treatment effect among the treated. In this table, Method A is the unweighted estimate, Method B refers to coarsened exact matching, Method C is linear discriminant analysis, and Method D is support vector machines.

Figure 1: Kernel densities of the $\Delta$ balancing statistics for the baseline scenario with $n = 1000$ and $\sigma^2 = 10$.
The solid line is the distribution from the unweighted estimates, the dashed line is the distribution for coarsened exact matching, the dotted line is the distribution for the linear propensity score, and the dotted-dashed line is for the support vector machine example.

n     σ²   Scenario   θ     A      B      C      D
1000  5    baseline   6.2   0.952  0.937  0.941  0.929
1000  5    sparse     6.2   0.944  0.955  0.934  0.917
1000  10   baseline   6.2   0.941  0.918  0.935  0.912
1000  10   sparse     6.2   0.955  0.950  0.951  0.931
2000  5    baseline   6.2   0.931  0.945  0.937  0.923
2000  5    sparse     6.2   0.956  0.945  0.939  0.918
2000  10   baseline   6.2   0.959  0.936  0.926  0.928
2000  10   sparse     6.2   0.953  0.946  0.948  0.935

Table 3: Summary of coverage probabilities from the simulation experiment. The simulation scenarios corresponding to "baseline" and "sparse" are described in further detail in Section 6. Here, $\theta$ refers to the population average treatment effect among the treated. In this table, Method A is the unweighted estimate, Method B refers to coarsened exact matching, Method C to linear discriminant analysis, and Method D to support vector machines.

Acknowledgments

The authors would like to acknowledge funding support from the following sources: the National Institutes of Health, the National Science Foundation, the Veterans Administration, and the Grohne-Stepp Endowment from the University of Colorado Cancer Center.

Appendix

Proof of Theorem 3.3

We will use $P$ and $Q$ instead of $Q^0$ and $Q^1$ to ease the symbolic burden on the reader.

Proof.
By the definition of $\gamma$:
$$\begin{aligned}
\gamma(P_{n_0}, Q_{n_1}) &= \sup_{f \in F} \left| \int f\,dP_{n_0} - \int f\,dQ_{n_1} \right| \\
&= \sup_{f \in F} \left| \int f\,dP_{n_0} \pm \int f\,dP \pm \int f\,dQ - \int f\,dQ_{n_1} \right| \\
&\leq \sup_{f \in F} \left| \int f\,dP_{n_0} - \int f\,dP - \int f\,dQ_{n_1} + \int f\,dQ \right| + \sup_{f \in F} \left| \int f\,dP - \int f\,dQ \right| \\
&= \sup_{f \in F} \left| \int f\,dP_{n_0} - \int f\,dP - \int f\,dQ_{n_1} + \int f\,dQ \right|,
\end{aligned}$$
since $\gamma(P, Q) = 0$. Using elementary probability arguments, we have
$$\begin{aligned}
\Pr\{\gamma(P_{n_0}, Q_{n_1}) > \delta\} &= \Pr\left\{ \sup_{f \in F} \left| \int f\,dP_{n_0} - \int f\,dP - \int f\,dQ_{n_1} + \int f\,dQ \right| > \delta \right\} \\
&= \Pr\left\{ \sup_{f \in F} \left| \frac{1}{\sqrt{n_0}} G^P_{n_0}(f) - \frac{1}{\sqrt{n_1}} G^Q_{n_1}(f) \right| > \delta \right\} \\
&\leq \Pr\left\{ \sup_{f \in F} |G^P_{n_0}(f)| > \sqrt{n_0}\,\delta/2 \right\} + \Pr\left\{ \sup_{f \in F} |G^Q_{n_1}(f)| > \sqrt{n_1}\,\delta/2 \right\},
\end{aligned}$$
where $G^P_{n_0}(f)$ and $G^Q_{n_1}(f)$ represent the $F$-indexed empirical processes of $P$ and $Q$, respectively. Applying Theorem 2.14.9 of Van der Vaart and Wellner (1996), we can bound each of the terms as follows:
$$\Pr\left\{ \sup_{f \in F} |G^P_{n_0}(f)| > \sqrt{n_0}\,\delta/2 \right\} < \left( \frac{D\sqrt{n_0}\,\delta}{2\sqrt{C}} \right)^{C} \exp(-n_0\delta^2/2),$$
$$\Pr\left\{ \sup_{f \in F} |G^Q_{n_1}(f)| > \sqrt{n_1}\,\delta/2 \right\} < \left( \frac{D\sqrt{n_1}\,\delta}{2\sqrt{C}} \right)^{C} \exp(-n_1\delta^2/2),$$
where $D$ is a constant depending only on $K$. Plugging these two bounds into (6.2) concludes the proof.

Proof of Lemma 3.6

Proof. Define $\gamma_i = \gamma_{F_i}(P_i, Q_i)$. Then:
$$\begin{aligned}
\Pr\Big\{ \sum_i \gamma_i > \delta \Big\} &= 1 - \Pr\Big\{ \sum_i \gamma_i < \delta \Big\} \\
&\leq 1 - \Pr(\gamma_i < \delta/d \;\; \forall i) \\
&= \Pr(\exists\, i \text{ such that } \gamma_i > \delta/d) \\
&\leq \sum_i \Pr(\gamma_i > \delta/d) \\
&\leq \sum_i B(\delta/d, D_i, C_i),
\end{aligned}$$
where we have used the union bound in the second inequality.

Proof of Lemma 3.7

Proof. Assume $\gamma_{F_\ell}(\mu, \nu) = 0$ for all $\ell$. Then
$$\begin{aligned}
\gamma_{F^\pi}(\mu, \nu) &= \sup_{f^\pi \in F^\pi} \left| \int f^\pi\,d\mu - \int f^\pi\,d\nu \right| \\
&= \max_\ell \sup_{f \in F} \left| \int \pi_\ell \circ f\,d\mu - \int \pi_\ell \circ f\,d\nu \right| \\
&= \max_\ell \sup_{f \in F} \left| \int f_\ell\,d\mu - \int f_\ell\,d\nu \right| \\
&= \max_\ell \sup_{f_\ell \in F_\ell} \left| \int f_\ell\,d\mu - \int f_\ell\,d\nu \right| \\
&= \max_\ell \gamma_{F_\ell}(\mu, \nu) = 0.
\end{aligned}$$
Conversely, assuming $\gamma_{F^\pi}(\mu, \nu) = 0$ yields
$$\begin{aligned}
\gamma_{F_\ell}(\mu, \nu) &= \sup_{f_\ell \in F_\ell} \left| \int f_\ell\,d\mu - \int f_\ell\,d\nu \right| \\
&= \sup_{f \in F} \left| \int \pi_\ell \circ f\,d\mu - \int \pi_\ell \circ f\,d\nu \right| \\
&\leq \max_\ell \sup_{f \in F} \left| \int \pi_\ell \circ f\,d\mu - \int \pi_\ell \circ f\,d\nu \right| \\
&= \gamma_{F^\pi}(\mu, \nu) = 0.
\end{aligned}$$
This proves the first two equivalences.
The third one is a byproduct of the proof.

Proof of Corollary 3.8

Proof. To avoid cumbersome notation, let $v = \frac{1}{n_0}\sum_{j=1}^{n_0} f^*(X^0_j) - \frac{1}{n_1}\sum_{j=1}^{n_1} f^*(X^1_j)$, and note that $v_\ell = \frac{1}{n_0}\sum_{j=1}^{n_0} f^*_\ell(X^0_j) - \frac{1}{n_1}\sum_{j=1}^{n_1} f^*_\ell(X^1_j)$. Then:
$$\begin{aligned}
\Pr\{\|v\|_{\ell_p} > \delta\} &= \Pr\{\|v\|^p_{\ell_p} > \delta^p\} = \Pr\Big\{ \sum_\ell |v_\ell|^p > \delta^p \Big\} \\
&\leq \Pr\Big\{ \sum_\ell \gamma_{F_\ell}(Q^0_{n_0}, Q^1_{n_1})^p > \delta^p \Big\} \\
&\leq \sum_\ell \Pr\{ \gamma_{F_\ell}(Q^0_{n_0}, Q^1_{n_1})^p > \delta^p/d \} \\
&= \sum_\ell \Pr\{ \gamma_{F_\ell}(Q^0_{n_0}, Q^1_{n_1}) > \delta/d^{1/p} \} \\
&\leq \sum_\ell B(\delta/d^{1/p}, D^*, C^*) = d\,B(\delta/d^{1/p}, D^*, C^*),
\end{aligned}$$
where the second and third inequalities follow from a slight variation of Lemma 3.6 and an application of Lemma 3.7. For the $\ell_\infty$ case we have:
$$\Pr\{\|v\|_{\ell_\infty} > \delta\} \leq \Pr\Big\{ \max_\ell |\gamma_\ell| > \delta \Big\} \leq \sum_\ell B(\delta, D^*, C^*),$$
concluding the proof.

Balance for coarsening functions

We will show that the coarsened exact matching procedure belongs to a class of functions with tractable Vapnik-Chervonenkis dimension. Consider the set $\mathcal{S}$ of partitions with a fixed number of elements $R$. For a given partition $S \in \mathcal{S}$ with $S = \{s_1, \ldots, s_R\}$, define $f^{k\alpha}_S$ to be
$$f^{k\alpha}_S(x) = \sum_{i=1}^{R} k_i \alpha_i \chi_{s_i}(x),$$
where $k_i \leq k$ for $k$ a constant, $\chi_{s_i}$ is the indicator function of $s_i$, and $\alpha := (\alpha_1, \ldots, \alpha_R)$ is a binary vector, that is, $\alpha_i \in \{0, 1\}$ for each $i$. In words, if $x$ is found in $s_i$, $f$ will return a scaled version of $x$ if $\alpha_i$ is 1, and zero otherwise.

Now let $F := \{f^{k\alpha}_S\}_{S \in \mathcal{S},\, \alpha \in A,\, k \leq \kappa}$, where $A$ is the set of all binary vectors of size $R$ and $\kappa \in \mathbb{R}$. The coarsened exact matching procedure belongs to this class of functions, since in that case $\alpha_i$ indicates whether there are at least two members of different groups in stratum $s_i$. For any sample point $x$, the weights are usually chosen in the following manner: if $x$ is a treated unit, $w^1_i = 1$; otherwise, $w^0_i = (m^s_1/m_1)/(m^s_0/m_0)$, where $s$ is the stratum $x$ belongs to. Letting $k_i = w^\ell_i n_\ell / m_\ell$ appropriately weighs the matched samples.
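The coarsened-exact-matching weights just described admit a short sketch, assuming the coarsened stratum label of each unit has already been computed; the function name and the zero-weight handling of unmatched strata are our reading of the procedure.

```python
from collections import Counter

def cem_weights(strata, T):
    # Treated units get weight 1; a control in stratum s gets
    # (m1_s / m1) / (m0_s / m0), where m*_s are per-stratum matched
    # counts and m0, m1 total matched counts. Units in strata without
    # both groups (alpha_s = 0) are dropped with weight 0.
    m1s = Counter(s for s, t in zip(strata, T) if t == 1)
    m0s = Counter(s for s, t in zip(strata, T) if t == 0)
    matched = {s for s in m1s if s in m0s}
    m1 = sum(m1s[s] for s in matched)
    m0 = sum(m0s[s] for s in matched)
    weights = []
    for s, t in zip(strata, T):
        if s not in matched:
            weights.append(0.0)
        elif t == 1:
            weights.append(1.0)
        else:
            weights.append((m1s[s] / m1) / (m0s[s] / m0))
    return weights
```

For example, with strata `['a', 'a', 'b', 'b', 'c']` and treatment `[1, 0, 1, 0, 1]`, stratum `c` contains no control and its treated unit is dropped, while every matched unit here receives weight 1.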
We just need to add the mild assumption that the ratio of sample size to matched size per stratum $s$ does not grow faster than $\sqrt{\kappa}$, that is, $n_\ell / m^s_\ell \leq \sqrt{\kappa}$ for all $s \in S$, because in that case $w^0_i \leq m_0/m^s_0 \leq n_0/m^s_0 \leq \sqrt{\kappa}$ and $n_\ell/m_\ell \leq \sqrt{\kappa}\, m^s_\ell/m_\ell \leq \sqrt{\kappa}$, so $k_i \leq \kappa$. Finally, notice that any similar function with a smaller partition size can be expressed by a function in $F$, so we can consider variable partition sizes as long as they do not exceed a reasonable bound $R$.

For any set of points of size $R$ there is a partition $S$ containing each point in a different element, and therefore an $\alpha$ that can assign each point arbitrarily to either 0 or 1, so $F$ shatters such a set. However, if we add an extra point, then, since the number of partition elements is constrained, it would have to share a partition element with a previous point, and hence share its assignment under $f^{k\alpha}_S$. So the Vapnik-Chervonenkis dimension of $F$ is $R$.

Finally, let $g(Z^\ell) = Q^\ell_{n_\ell}$, where $Q^\ell_{n_\ell}$ is the empirical distribution of the sample $Z^\ell$ for group $\ell$. Let $k^*$ be chosen as above and let $(S^*, \alpha^*)$ be the particular partition and binary vector used for coarsened exact matching. Then, for the $\ell$th component we get:
$$\begin{aligned}
\left| \frac{1}{m_0} \sum_{i \in M_0} w^0_i Z^0_{i,\ell} - \frac{1}{m_1} \sum_{j \in M_1} w^1_j Z^1_{j,\ell} \right|
&= \left| \frac{1}{n_0} \sum_{i=1}^{n_0} f^{k^*\alpha^*}_{S^*,\ell}(Z^0_i) - \frac{1}{n_1} \sum_{j=1}^{n_1} f^{k^*\alpha^*}_{S^*,\ell}(Z^1_j) \right| \\
&\leq \sup_{f_\ell \in F^*} \left| \frac{1}{n_0} \sum_{i=1}^{n_0} f_\ell(Z^0_i) - \frac{1}{n_1} \sum_{j=1}^{n_1} f_\ell(Z^1_j) \right| \\
&= \gamma_{F^*}(Q^0_{n_0}, Q^1_{n_1}) = \gamma_{F^*}(g(Z^0), g(Z^1)).
\end{aligned}$$
Thus, the discrepancy among the matched samples per dimension is bounded by the $\gamma_{F^*}$ distance of the unmatched samples. Finally, the function $h(x) := \kappa x$ is an envelope function of $F$ and has norm $\|h\|_{L_2(\mu)} < \infty$ as long as we assume a compact domain, which is reasonable for most coarsened exact matching cases. Then, by Theorem 2.6.7 of Van der Vaart and Wellner (1996):
$$\sup_\mu N(\epsilon, F, L_2(\mu)) \leq \left( \frac{K}{\epsilon} \right)^{C^*},$$
for some constant $K$, where $C^* = 2(R - 1)$.
This leads us to our final result. Assume that ideal balance on the population probabilities holds for $\gamma_{F^\pi}$; then, for the $\ell$th component we have:
$$\Pr\left\{ \left| \frac{1}{m_0} \sum_{i \in M_0} w^0_i Z^0_{i,\ell} - \frac{1}{m_1} \sum_{j \in M_1} w^1_j Z^1_{j,\ell} \right| > \delta \right\} \leq B(\delta, D, C^*).$$
If we are interested in the $\ell_p$ norm of the full vector instead, then, by Corollary 3.8,
$$\Pr\left\{ \left\| \frac{1}{m_0} \sum_{i \in M_0} w^0_i Z^0_i - \frac{1}{m_1} \sum_{j \in M_1} w^1_j Z^1_j \right\|_{\ell_p} > \delta \right\} \leq d\,B(\delta/d^{1/p}, D, C^*)$$
for finite $p \geq 1$, while
$$\Pr\left\{ \left\| \frac{1}{m_0} \sum_{i \in M_0} w^0_i Z^0_i - \frac{1}{m_1} \sum_{j \in M_1} w^1_j Z^1_j \right\|_{\ell_\infty} > \delta \right\} \leq d\,B(\delta, D, C^*).$$

Balance using propensity scores

Recall that $e(Z) = P(T = 1 \mid Z)$, and that we are assuming $Z \mid T = \ell \sim N(\mu_\ell, \Sigma)$. Let $p_\ell$ be the probability density function of $N(\mu_\ell, \Sigma)$, that is, the Gaussian density; then by the density version of Bayes' theorem we have
$$p(T = 1 \mid Z = z) = \frac{p_1 P(T = 1)}{p_1 P(T = 1) + p_0 P(T = 0)}.$$
Therefore, we can express the logit of $e(Z)$ as
$$\mathrm{logit}(e(Z)) = \log\left( \frac{e(Z)}{1 - e(Z)} \right) = \log\left( \frac{p_1 P(T = 1)}{p_0 P(T = 0)} \right).$$
Now define $L_k := \mathrm{logit}(e(Z_k))$; then the matching procedure is based on the difference $|L_i - L_j|$. Given the above computation, and after a few straightforward steps, we get
$$|L_i - L_j| = \left| (\mu_1 - \mu_0)^T \Sigma^{-1} (Z_i - Z_j) \right| = |f^*(Z_i) - f^*(Z_j)|,$$
where $f^*(x) = w^T x$ for $w \in \mathbb{R}^p$. Notice that the vector $w$ is the same as the one used for linear discriminant analysis, so, adding an offset parameter, it will be useful to think of $f^*$ as a hyperplane.

Let $M^j_0$ be the control units assigned to treatment unit $j$. We make the assumption that there is a fixed number of assigned controls for each treated unit, so that $m_0 = |M^j_0|\, m_1$.
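As an aside, the identity $|L_i - L_j| = |(\mu_1 - \mu_0)^T \Sigma^{-1}(Z_i - Z_j)|$ can be checked numerically under the Gaussian model: the quadratic terms and the prior log-odds cancel in $L_i - L_j$, leaving only the linear discriminant. The parameters below are toy values of our own choosing.

```python
import numpy as np

# Toy Gaussian-model parameters (ours, for illustration only).
mu0, mu1 = np.array([0.0, 0.0]), np.array([1.0, -1.0])
Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])
w = np.linalg.solve(Sigma, mu1 - mu0)  # w = Sigma^{-1}(mu1 - mu0)

def logit_e(z, p1=0.4):
    # logit e(z) = log(p1(z) P(T=1) / (p0(z) P(T=0))); the Gaussian
    # normalizing constants cancel in the density ratio, so only the
    # quadratic forms and the prior log-odds remain.
    Sinv = np.linalg.inv(Sigma)
    quad = lambda m: -0.5 * (z - m) @ Sinv @ (z - m)
    return quad(mu1) - quad(mu0) + np.log(p1 / (1.0 - p1))

rng = np.random.default_rng(0)
zi, zj = rng.standard_normal(2), rng.standard_normal(2)
lhs = abs(logit_e(zi) - logit_e(zj))   # |L_i - L_j|
rhs = abs(w @ (zi - zj))               # |w^T (z_i - z_j)|
```

The two quantities agree to machine precision for any pair of points and any prior probability, since the prior enters both $L_i$ and $L_j$ identically.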
Then
$$\begin{aligned}
\Delta &:= \left| \frac{1}{m_1} \sum_{j \in M_1} \mathrm{logit}(e_j) - \frac{1}{m_0} \sum_{i \in M_0} \mathrm{logit}(e_i) \right| \\
&= \left| \frac{1}{m_1} \sum_{j \in M_1} L_j - \sum_{j \in M_1} \frac{1}{m_0} \sum_{i \in M^j_0} L_i \right| \\
&= \left| \sum_{j \in M_1} \left( \frac{1}{m_1} L_j - \frac{1}{m_0} \sum_{i \in M^j_0} L_i \right) \right| \\
&= \left| \sum_{j \in M_1} \left( \frac{1}{m_1} \sum_{i \in M^j_0} \frac{L_j}{|M^j_0|} - \frac{1}{m_0} \sum_{i \in M^j_0} L_i \right) \right| \\
&= \left| \sum_{j \in M_1} \sum_{i \in M^j_0} \left( \frac{L_j}{m_1 |M^j_0|} - \frac{L_i}{m_0} \right) \right| \\
&= \left| \sum_{j \in M_1} \sum_{i \in M^j_0} \frac{1}{m_0} (L_j - L_i) \right| \\
&= \left| \sum_{j \in M_1} \sum_{i \in M^j_0} \frac{1}{m_0} \left( f^*(Z_j) - f^*(Z_i) \right) \right| \\
&= \left| \frac{1}{m_1} \sum_{j \in M_1} f^*(Z_j) - \frac{1}{m_0} \sum_{i \in M_0} f^*(Z_i) \right|.
\end{aligned}$$
That is, we can express the difference of the means of the logits in terms of the difference of the means of the discriminant functions. Let $p$ be the dimension of the covariates, and let $F$ be the collection of $p$-dimensional hyperplanes; notice that $f^* \in F$. The Vapnik-Chervonenkis dimension of $F$ is known to be $p + 1$ (Mohri et al., 2018). We would like to bound $\Delta$ in terms of $\gamma$, but we first need some adjustments to $f^*$.

The matching procedure determines a set $Z_M = \{Z_k \mid k \in M\}$ of matched samples, where $M = M_0 \cup M_1$. By the Gaussian assumption, the $Z$s are sampled from a Gaussian mixture, so the probability of two sample points being equal is zero. Hence there is an $\epsilon > 0$ such that for all $k \in M$, $Z \cap B_\epsilon(Z_k) = \{Z_k\}$; that is, each $\epsilon$-ball centered around a matched sample contains no other sample point (here $Z$ is the sample set). Let $S_\epsilon = \cup_k B_\epsilon(Z_k)$. Note that $S_\epsilon$ is a measurable set. Let $\beta_{S_\epsilon}(x) := x\, \chi_{S_\epsilon}(x)$; this function maps points to zero if unmatched and to themselves if matched. Furthermore, let $\beta_\ell(x) := \frac{m_\ell}{n_\ell} \chi_{M_\ell}(x) + \chi_{M^C_\ell}(x)$ for $\ell \in \{0, 1\}$. Each $\beta_\ell$ scales elements of $M_\ell$ by the factor $\frac{m_\ell}{n_\ell}$ and leaves the rest untouched.

Notice that $f^*_M := f^* \circ \beta_1 \circ \beta_0 \circ \beta_{S_\epsilon}$ sends $Z_k$ to $\frac{m_\ell}{n_\ell} w^T Z_k$ if $k \in M_\ell$ and to 0 otherwise.
Then we can express $\Delta$ as
$$\Delta = \left| \frac{1}{m_1} \sum_{j \in M_1} f^*(Z_j) - \frac{1}{m_0} \sum_{i \in M_0} f^*(Z_i) \right| = \left| \frac{1}{n_1} \sum_{j=1}^{n_1} f^*_M(Z_j) - \frac{1}{n_0} \sum_{i=1}^{n_0} f^*_M(Z_i) \right|.$$
Now consider the set $F_M := \{f \circ \beta_1 \circ \beta_0 \circ \beta_S \mid f \in F,\, S \in \Sigma\}$, where $\Sigma$ is the set of measurable sets according to the distribution of the $Z$s. The Vapnik-Chervonenkis dimension of $F_M$ is the same as that of $F$, that is, $p + 1$. To see this, note that the standard derivation for the hyperplane case involves shattering the standard basis $B$ in $\mathbb{R}^p$. With probability one, no sample point will equal a standard basis vector, so there is an $\epsilon' > 0$ for which we can create a set $s = \cup_{x \in B} B_{\epsilon'}(x)$ such that $s \in \Sigma$ and no sample point is in $s$. Considering the functions $\{f_\nu\}$ in $F$ used to shatter $B$ and using $s$, we can use the functions $\{f_\nu \circ \beta_1 \circ \beta_0 \circ \beta_s\}$ in $F_M$ to also shatter $B$. So the Vapnik-Chervonenkis dimension is at least $p + 1$. Since the functions $\beta_1$, $\beta_0$, and $\beta_S$ are either zero or a scaled identity, we gain no additional complexity, and the dimension is no larger than $p + 1$; so it is indeed $p + 1$.

For the envelope function, we can choose $h(x) = \langle w_e, x \rangle$. The norm of $w_e$ must be large enough to preserve a Vapnik-Chervonenkis dimension of $p + 1$. Since the vectors used to ensure such a dimension have norm $p + 1$, the norm of $w_e$ must be at least $p + 1$, so we can choose any large constant $C > p + 1$. Since we are interested in vectors of the form $w = \Sigma^{-1} \Delta\mu$, we have $\|w\| \leq \|\Sigma^{-1}\|_F \|\Delta\mu\|_2$, so the user has to choose constants that bound each of these norms. We must also assume the covariates themselves are bounded; this ensures a finite norm for $h$.

Finally, we have
$$\Delta = \left| \frac{1}{n_1} \sum_{j=1}^{n_1} f^*_M(Z_j) - \frac{1}{n_0} \sum_{i=1}^{n_0} f^*_M(Z_i) \right| \leq \sup_{f \in F_M} \left| \frac{1}{n_1} \sum_{j=1}^{n_1} f(Z_j) - \frac{1}{n_0} \sum_{i=1}^{n_0} f(Z_i) \right| = \gamma_{F_M}(Q^0_{n_0}, Q^1_{n_1}).
$$
Assuming ideal balance on the population probabilities, and applying Theorem 2.6.7 of Van der Vaart and Wellner (1996) in conjunction with Theorem 3.3, yields
$$\Pr\{\Delta > \delta\} \leq B(\delta, D, 2p).$$

Covering number bound for reproducing kernel Hilbert spaces

We refer the reader to Wahba (1990), Berlinet and Thomas-Agnan (2011), and Steinwart and Christmann (2008) for nice overviews of reproducing kernel Hilbert spaces. Roughly speaking, a mapping $k : X \times X \to \mathbb{R}$ is said to be the reproducing kernel associated with the reproducing kernel Hilbert space $H$ if it satisfies the following properties: (a) $k(\cdot, x) \in H$ for any $x \in X$; (b) $f(x) = \langle f, k(\cdot, x) \rangle_H$ for all $f \in H$ and $x \in X$. Property (b) is commonly referred to as the reproducing property.

To apply Theorem 3.3 to the reproducing kernel case, we will need to bound the covering number directly, based on arguments different from Vapnik-Chervonenkis theory. Define the space
$$H^m_q(\mathbb{R}^p) = \{ f \in L_q(\mathbb{R}^p) \mid D^j f \in L_q(\mathbb{R}^p) \;\; \forall j \in \{1, \ldots, m\};\; \|f\|_q < \infty \},$$
where
$$\|f\|_q = \sum_{0 \leq |\alpha| \leq s} \|D^\alpha f\|_{L_q}$$
and $D^\alpha$ denotes partial derivatives in the sense of distributions. Then, as a consequence of Theorem 1 of Nickl and Pötscher (2007), if $m - q/p > 0$,
$$N(\epsilon, H, \|\cdot\|_q) \leq b_1 \epsilon^{-q},$$
while if $m - q/p < 0$,
$$N(\epsilon, H, \|\cdot\|_q) \leq b_2 \epsilon^{-p/m}.$$
Based on this result, Theorem 3.3 can then be applied to prove a convergence rate under ideal balance. Note that this does not cover the Gaussian kernel case: the Gaussian kernel is infinitely differentiable, so the space $H^m_q(\mathbb{R}^p)$ does not apply. For the reader interested in the Gaussian case, we refer them to the recent paper by Steinwart and Fischer (2020).

References

Abadie, A. and G. W. Imbens (2006). Large sample properties of matching estimators for average treatment effects. Econometrica 74(1), 235-267.

Abadie, A. and G. W. Imbens (2011). Bias-corrected matching estimators for average treatment effects. Journal of Business & Economic Statistics 29(1), 1-11.

Abadie, A. and G. W. Imbens (2016).
Matching on the estimated propensity score. Econometrica 84(2), 781-807.

Baudat, G. and F. Anouar (2000). Generalized discriminant analysis using a kernel approach. Neural Computation 12(10), 2385-2404.

Berlinet, A. and C. Thomas-Agnan (2011). Reproducing Kernel Hilbert Spaces in Probability and Statistics. Springer Science & Business Media.

Chan, K. C. G., S. C. P. Yam, and Z. Zhang (2016). Globally efficient non-parametric inference of average treatment effects by empirical balancing calibration weighting. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 78(3), 673-700.

Chervonenkis, A. and V. Vapnik (1971). Uniform convergence of the frequencies of occurrence of events to their probabilities (uniform convergence of frequencies of events in independent test sequences to probabilities of occurrence). Teoriia Veroiatnostei i ee Primeneniia 16, 264-279.

Hainmueller, J. (2012). Entropy balancing for causal effects: A multivariate reweighting method to produce balanced samples in observational studies. Political Analysis 20(1), 25-46.

Hansen, B. B. (2008). The prognostic analogue of the propensity score. Biometrika 95(2), 481-488.

Hazlett, C. (2016). Kernel balancing: A flexible non-parametric weighting procedure for estimating causal effects.

Ho, D. E., K. Imai, G. King, and E. A. Stuart (2007). Matching as nonparametric preprocessing for reducing model dependence in parametric causal inference. Political Analysis 15(3), 199-236.

Holland, P. W. (1986). Statistics and causal inference. Journal of the American Statistical Association 81(396), 945-960.

Iacus, S. M., G. King, and G. Porro (2011). Multivariate matching methods that are monotonic imbalance bounding. Journal of the American Statistical Association 106(493), 345-361.

Imai, K. and M. Ratkovic (2014). Covariate balancing propensity score. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 76(1), 243-263.

Imbens, G. W.
and D. B. Rubin (2015). Causal Inference in Statistics, Social, and Biomedical Sciences. Cambridge University Press.

Kallus, N. (2020). Generalized optimal matching methods for causal inference. Journal of Machine Learning Research 21(62), 1-54.

Kosorok, M. R. (2007). Introduction to Empirical Processes and Semiparametric Inference. Springer Science & Business Media.

Mohri, M., A. Rostamizadeh, and A. Talwalkar (2018). Foundations of Machine Learning. MIT Press.

Neyman, J. (1923). Sur les applications de la théorie des probabilités aux expériences agricoles: Essai des principes. Roczniki Nauk Rolniczych 10, 1-51.

Nickl, R. and B. M. Pötscher (2007). Bracketing metric entropy rates and empirical central limit theorems for function classes of Besov- and Sobolev-type. Journal of Theoretical Probability 20(2), 177-199.

Rosenbaum, P. R. and D. B. Rubin (1983). The central role of the propensity score in observational studies for causal effects. Biometrika 70(1), 41-55.

Rubin, D. B. (1974). Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology 66(5), 688.

Rubin, D. B. (1976). Multivariate matching methods that are equal percent bias reducing, I: Some examples. Biometrics, 109-120.

Rubin, D. B., E. A. Stuart, et al. (2006). Affinely invariant matching methods with discriminant mixtures of proportional ellipsoidally symmetric distributions. The Annals of Statistics 34(4), 1814-1826.

Rubin, D. B. and N. Thomas (1992). Affinely invariant matching methods with ellipsoidal distributions. The Annals of Statistics, 1079-1093.

Salimi, B. and D. Suciu (2016). ZaliQL: A SQL-based framework for drawing causal inference from big data. arXiv preprint arXiv:1609.03540.

Sekhon, J. S. (2008). Multivariate and propensity score matching software with automated balance optimization: The Matching package for R. Journal of Statistical Software, forthcoming.

Steinwart, I. and A. Christmann (2008).
Support Vector Machines. Springer Science & Business Media.

Steinwart, I. and S. Fischer (2020). A closer look at covering number bounds for Gaussian kernels. Journal of Complexity, 101513.

Stuart, E. A. (2010). Matching methods for causal inference: A review and a look forward. Statistical Science 25(1), 1.

Van der Vaart, A. W. and J. A. Wellner (1996). Weak convergence. In Weak Convergence and Empirical Processes, pp. 16-28. Springer.

Wahba, G. (1990). Spline Models for Observational Data. Society for Industrial and Applied Mathematics.

Wang, T., M. Morucci, M. U. Awan, Y. Liu, S. Roy, C. Rudin, and A. Volfovsky (2017). FLAME: A fast large-scale almost matching exactly approach to causal inference.

Wang, Y. and J. R. Zubizarreta (2019). Minimal dispersion approximately balancing weights: Asymptotic properties and practical considerations. Biometrika.

Wong, R. K. and K. C. G. Chan (2018). Kernel-based covariate functional balancing for observational studies. Biometrika 105(1), 199-213.

Zhu, Y., J. S. Savage, and D. Ghosh (2018). A kernel-based metric for balance assessment. Journal of Causal Inference 6(2).

Zolotarev, V. M. (1984). Probability metrics. Theory of Probability & Its Applications 28(2), 278-302.

Zubizarreta, J. R. (2015). Stable weights that balance covariates for estimation with incomplete outcome data. Journal of the American Statistical Association 110(511), 910-922.
+22 + diff --git a/b9AyT4oBgHgl3EQf-PqC/content/tmp_files/load_file.txt b/b9AyT4oBgHgl3EQf-PqC/content/tmp_files/load_file.txt new file mode 100644 index 0000000000000000000000000000000000000000..ce207cee0722e6b2ec9ff6d8ace03b44b8f093af --- /dev/null +++ b/b9AyT4oBgHgl3EQf-PqC/content/tmp_files/load_file.txt @@ -0,0 +1,1108 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf,len=1107 +page_content='An empirical process framework for covariate balance in causal inference Efr´en Cruz Cort´es Michigan Institute for Data Science Center for the Study of Complex Systems University of Michigan encc@umich.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'} +page_content='edu Kevin Josey Department of Biostatistics Harvard T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'} +page_content='H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'} +page_content=' Chan School of Public Health kjosey@hsph.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'} +page_content='harvard.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'} +page_content='edu Fan Yang Department of Biostatistics and Informatics Colorado School of Public Health fan.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'} +page_content='3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'} +page_content='yang@cuanschutz.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'} +page_content='edu Debashis Ghosh Department of Biostatistics and Informatics Colorado School of Public Health debashis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'} +page_content='ghosh@cuanschutz.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'} +page_content='edu Abstract We propose a new perspective for the evaluation of matching procedures by considering the complexity of the function class they belong to.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'} +page_content=' Under this perspective we provide theoretical guarantees on post-matching covariate balance through a finite sample con- centration inequality.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'} +page_content=' We apply this framework to coarsened exact matching as well as matching using the propensity score and suggest how to apply it to other algorithms.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'} +page_content=' Simulation studies are used to evaluate the procedures.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'} +page_content=' keywords: Causal effects, empirical distribution function, entropy metric, superpopulation, tail inequality, Vapnik-Chervonenkis dimension.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'} +page_content=' 1 Introduction Causal inference is a central goal for outcomes and policy research, particularly in the medical field.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'} +page_content=' Among the many topics in this broad field of study are methods for evaluating treatment effects with non-randomized data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'} +page_content=' There is an abundance of observational data in nearly every discipline of science.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'} +page_content=' However, bias induced by confounding is inherent in observational studies.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'} +page_content=' In this context, the researcher must account for every potential confounder in some way before they can establish causality.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'} +page_content=' While randomization remains the gold-standard for inference, as there is no confounding by definition, randomizing individuals into treatment groups is often cost prohibitive and sometimes unethical for certain study designs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'} +page_content=' Under the potential outcomes framework (Neyman, 1923;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'} +page_content=' Rubin, 1974), Rosenbaum and Rubin (1983) were able to describe how the propensity score plays a key role in causal effect estimation and inference with observational data.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'} +page_content=' The propensity score is defined as the probability of receiving a treatment given a set of measured covariates.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'} +page_content=' Under strong ignorabiligy assumption, the propensity score removes bias attributable to confounding due to its property as a balancing score (Rosenbaum and Rubin, 1983).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'} +page_content=' With this result in mind, numerous methods for causal effect estimation were 1 arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'} +page_content='00889v1 [math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'} +page_content='ST] 2 Jan 2023 subsequently developed around the propensity score, with covariate balance serving as the primary objective (e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'} +page_content=', Imai and Ratkovic (2014);' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'} +page_content=' Zubizarreta (2015);' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'} +page_content=' Chan et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'} +page_content=' (2016)).' 
However, the results presented by Rosenbaum and Rubin (1983) about the propensity score are derived in an asymptotic setting. This means that estimates of the propensity score may not adequately balance the covariate distribution in finite-sample settings. Therefore, many analyses proceed by iterating between fitting a model for the propensity score and evaluating balance diagnostics on the propensity-score-adjusted covariates before estimating the treatment effect of interest. Some methods for evaluating balance diagnostics have been proposed by Ho et al. (2007) and Sekhon (2008). The propensity score literature has mostly diverged into two overlapping yet distinct domains: one that uses the propensity score to derive balancing weights (Hainmueller, 2012; Imai and Ratkovic, 2014; Chan et al., 2016) and the other that uses a balancing score, such as the propensity score, to construct a matched cohort. Recently, a multivariate matching approach using coarsened values of the observed covariates was developed by Iacus et al. (2011). They refer to their algorithm as coarsened exact matching. One of the primary aims of their method was to eliminate the iterative step of re-matching participants until an acceptable amount of balance is achieved. Coarsened exact matching is quite simple in nature and proceeds using the following high-level heuristic:

1. For each confounding variable, coarsen it into a certain number of categories;

2. Create strata based on the possible combinations of the coarsened values;

3. Compute a causal effect by comparing the outcomes of the treatment groups within the strata and adjusting for the stratum effect appropriately.

The theoretical justification provided by Iacus et al. (2011) for coarsened exact matching is a concept they term monotonic imbalance. They show that bounding the distance between confounders to be small leads to matching procedures that are more flexible than procedures based on the equal percent bias reduction theory developed by Rubin and collaborators (Rubin, 1976; Rubin and Thomas, 1992; Rubin et al., 2006).
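The three-step heuristic above can be sketched in a few lines of pure Python. The fixed bin width used for coarsening and the stratum-size-weighted difference-in-means estimator are illustrative assumptions, not part of the original algorithm specification.

```python
from collections import defaultdict

def coarsen(z, width=1.0):
    """Step 1: map each continuous confounder to a coarse bin index."""
    return tuple(int(v // width) for v in z)

def cem_effect(samples, width=1.0):
    """Steps 2-3: form strata from the coarsened values, then average
    within-stratum mean differences, weighted by stratum size.

    samples: list of (y, t, z) with t in {0, 1} and z a tuple of confounders.
    """
    strata = defaultdict(lambda: {0: [], 1: []})
    for y, t, z in samples:
        strata[coarsen(z, width)][t].append(y)   # step 2: exact match on bins
    total, effect = 0, 0.0
    for groups in strata.values():
        if groups[0] and groups[1]:              # keep strata with both groups
            n = len(groups[0]) + len(groups[1])
            diff = (sum(groups[1]) / len(groups[1])
                    - sum(groups[0]) / len(groups[0]))
            total += n
            effect += n * diff                   # step 3: stratum-size weights
    return effect / total if total else None

# toy example: treatment adds 2 to the outcome within every stratum
data = [(z + 2 * t, t, (z,)) for z in [0.2, 0.4, 1.1, 1.3] for t in (0, 1)]
print(cem_effect(data))  # close to 2.0
```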
One of the main advantages of coarsened exact matching is that it becomes amenable to large-scale database querying approaches to performing causal inference: see Salimi and Suciu (2016) as well as Wang et al. (2017). However, fewer technical results exist for matching estimators than for other approaches, such as inverse probability weighting estimators. Abadie and Imbens (2006) have studied the large-sample asymptotics of matching estimators and found that, in general, matching-based estimators of the average causal effect do not have the usual n^{1/2} convergence. The intuition is that the matching algorithm introduces a bias into causal effect estimation that does not vanish asymptotically. This bias term also increases with the number of confounders. Bias-corrected estimators have been proposed by Abadie and Imbens (2011).
Abadie and Imbens (2016) performed a theoretical study of the asymptotic behavior of average causal effect estimators that match on the estimated propensity score. Conceptually, covariate balance is a multivariate concept. If we let L(Z | T = 0) and L(Z | T = 1) denote the probability laws for the confounders conditional on treatment status, then, ideally, as in the case of perfect randomization, these distributions are equal in some sense. We refer to this sense of equality as covariate balance. Most covariate balance methods do not take the joint distribution of the confounders into account but rather seek to match moments of the marginal distributions of the confounders. For example, Imai and Ratkovic (2014) proposed matching the first and second moments of the covariates in their algorithm. Practically, one-dimensional diagnostics such as mean comparisons of confounders between treatment groups or Kolmogorov–Smirnov statistics are used to evaluate balance.
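The one-dimensional diagnostics mentioned here are straightforward to compute. A pure-Python sketch of a two-sample mean comparison and Kolmogorov–Smirnov statistic for a single confounder follows; the sample values are purely illustrative.

```python
def mean_difference(xs, ys):
    """Simple mean comparison of a confounder between treatment groups."""
    return sum(xs) / len(xs) - sum(ys) / len(ys)

def ks_statistic(xs, ys):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap between
    the two empirical CDFs, evaluated at the pooled sample points."""
    xs, ys = sorted(xs), sorted(ys)
    def ecdf(sorted_vals, t):
        return sum(v <= t for v in sorted_vals) / len(sorted_vals)
    return max(abs(ecdf(xs, t) - ecdf(ys, t)) for t in xs + ys)

treated = [0.1, 0.5, 0.9, 1.4]
control = [0.2, 0.6, 1.0, 1.2]
print(mean_difference(treated, control))  # close to -0.025
print(ks_statistic(treated, control))     # 0.25
```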
Wang and Zubizarreta (2019) have argued that, due to the inherent complexity in attempting to achieve multivariate balance, one should instead strive to achieve approximate balance between confounders. In this paper, we propose a new theoretical approach to evaluating and understanding covariate balance. We introduce a distance metric to assess how close two multivariate distributions are from each other and define covariate balance as having zero distance. This metric is defined in terms of the function family the matching procedure belongs to. Subsequent assessment of balance relies on understanding the behavior of the function classes in question. We demonstrate the following in the current paper:

1. The use of function classes fits naturally with the use of probability metrics (Zolotarev, 1984) for comparing probability laws and, in this instance, multivariate distributions for confounders conditional on treatment.

2. Results from empirical process theory (Van Der Vaart and Wellner, 1996; Kosorok, 2007) can subsequently be used to study the behavior of function classes and to make probabilistic statements on the rates of convergence of matching procedures under ideal balance.

3. Ideal balance provides a new theoretical out-of-sample justification for the methodology of Iacus et al. (2011) and can be used for the evaluation of other algorithmic strategies.

Based on the framework, one can view the techniques in this paper as being akin to developing a scalable strategy for achieving covariate balance that has relatively low complexity from the viewpoint described in Section 3.

2 Background and Preliminaries

2.1 Data Structures and Causal Estimands

Let the data be represented as (Y_i, T_i, Z_i), i = 1, . . . , n, a random sample from the triple (Y, T, Z), where Y denotes the response of interest, T denotes the treatment group, and Z is a p-dimensional vector of covariates. We assume that T takes values in {0, 1}. We now briefly review the potential outcomes framework (Rubin, 1974; Holland, 1986). Let {Y(0), Y(1)} denote the potential outcomes for all n subjects, and let the observed response be related to the potential outcomes by Y = (1 − T)Y(0) + TY(1). In the potential outcomes framework, causal effects are defined as within-individual contrasts based on the potential outcomes.
One popularly used estimand is the average causal effect, defined as
\[
\mathrm{ACE} = \frac{1}{n} \sum_{i=1}^{n} \bigl( Y_i(1) - Y_i(0) \bigr).
\]
Many assumptions are needed for performing valid causal inference. These include the consistency assumption, the treatment positivity assumption, and the strongly ignorable treatment assumption (Rosenbaum and Rubin, 1983), defined as
\[
T \perp \{Y(0), Y(1)\} \mid Z. \tag{2.1}
\]
Assumption (2.1) means that treatment assignment is conditionally independent of the set of potential outcomes given the covariates. Treatment positivity refers to 1 > P(T = 1 | Z) > 0 for all values of Z. Thus, the intuition is that any individual can potentially receive either treatment.
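The identity Y = (1 − T)Y(0) + TY(1) and the ACE estimand can be illustrated numerically. The data-generating choices below (a constant treatment shift of 2, hypothetical variable names y0, y1, t, y) are for illustration only; in practice only one potential outcome per unit is ever observed.

```python
import random

random.seed(0)

n = 1000
# hypothetical potential outcomes: treatment shifts every unit's outcome by 2
y0 = [random.gauss(0, 1) for _ in range(n)]
y1 = [v + 2.0 for v in y0]
t = [random.randint(0, 1) for _ in range(n)]

# observed response: Y = (1 - T) * Y(0) + T * Y(1)
y = [(1 - ti) * y0i + ti * y1i for ti, y0i, y1i in zip(t, y0, y1)]

# the ACE averages the within-individual contrasts Y_i(1) - Y_i(0),
# which are never jointly observed for any single real unit
ace = sum(b - a for a, b in zip(y0, y1)) / n
print(ace)  # close to 2.0 by construction
```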
Finally, the consistency assumption ensures that the observed outcome and the potential outcome under the observed treatment coincide. As described recently by Imbens and Rubin (2015), causal inference proceeds by modelling the assignment mechanism using observed covariates. A quantity that naturally arises from this modelling is the propensity score (Rosenbaum and Rubin, 1983), the probability of receiving treatment given confounders. The propensity score is defined as e(Z) = P(T = 1 | Z). Given the treatment ignorability assumption in (2.1), it also follows by Theorem 3 of Rosenbaum and Rubin (1983) that treatment is strongly ignorable given the propensity score, i.e., T ⊥ {Y(0), Y(1)} | e(Z). Based on these assumptions and definitions, we can formulate causal inference using the following approach: (a) define an appropriate causal estimand; (b) formulate a propensity score model; (c) check for covariate balance; (d) if (c) holds, estimate the causal estimand by conditioning on the propensity scores. We note that steps (b) and (c) tend to be iterative in practice. While the results in this paper pertain to propensity-matched analyses, they apply to more general matching strategies as well.

2.2 Previous results on covariate balance

In terms of covariate balance, a major class of theoretical results comes from work on equal percent bias reduction procedures (Rubin and Thomas, 1992, 1996).
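For a single discrete confounder, steps (b)–(d) above can be sketched without any modelling machinery: estimate e(Z) by the empirical treatment frequency within each covariate level, then condition on it. The data-generating numbers here (treatment probabilities 0.3 and 0.7, a true effect of 1) are purely illustrative assumptions.

```python
import random
from collections import defaultdict

random.seed(1)

# illustrative data: Z in {0, 1}, treatment more likely when Z = 1,
# outcome depends on Z (confounding) plus a true treatment effect of 1
data = []
for _ in range(20000):
    z = random.randint(0, 1)
    t = 1 if random.random() < (0.7 if z else 0.3) else 0
    y = 2.0 * z + 1.0 * t + random.gauss(0, 0.1)
    data.append((y, t, z))

# step (b): empirical propensity model e(z) = P(T = 1 | Z = z)
counts = defaultdict(lambda: [0, 0])          # z -> [n, n_treated]
for _, t, z in data:
    counts[z][0] += 1
    counts[z][1] += t
e = {z: n1 / n for z, (n, n1) in counts.items()}

# step (d): condition on the propensity score -- within each level of
# e(Z), compare treated and control means, then weight by stratum size
strata = defaultdict(lambda: {0: [], 1: []})
for y, t, z in data:
    strata[e[z]][t].append(y)
est = sum(
    (len(g[0]) + len(g[1]))
    * (sum(g[1]) / len(g[1]) - sum(g[0]) / len(g[0]))
    for g in strata.values()
) / len(data)
print(round(est, 2))  # close to the true effect of 1.0
```

The naive difference in group means would be biased upward here, because Z pushes both the treatment probability and the outcome in the same direction; conditioning on e(Z) removes that confounding.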
Equal percent bias reduction means that a certain type of covariate matching will reduce bias in all dimensions of Z by the same amount. Define a matching method to be affinely invariant if the matching procedure is invariant to affine transformations of the covariates. If Z given T is assumed to have a so-called elliptically symmetric distribution, then Theorem 3.1 and Corollaries 3.1 and 3.2 of Rubin and Thomas (1992) apply, so that any affinely invariant matching method will be equal percent bias reducing. Examples of elliptically symmetric distributions include the multivariate normal and t distributions.
While elliptical symmetry of the confounders given treatment group is a restrictive assumption, this was relaxed in more recent work by Rubin et al. (2006). There, they assumed that the conditional distribution of Z given T is a discriminant mixture of elliptically symmetric distributions. Rubin et al. (2006) prove that a generalization of equal percent bias reducing holds for this setup as well. Thus, for equal percent bias reducing methods, we have a guarantee that attempting to increase balance in one variable will not lead to distortions in balance for other variables. However, the assumptions needed for equal percent bias reducing to hold seem restrictive in practice. Iacus et al. (2011) took another approach by focusing on in-sample covariate discrepancies and requiring that the maximum discrepancy in sample means between treated and control subjects be bounded above by a constant. They generalize this to arbitrary functions of the data, which they term imbalance bounding, and define monotonic imbalance bounding matching methods to be those in which the discrepancy between a monotonic function applied to a variable is bounded above by a confounder-specific term. Thus, one can be more stringent about the balance in one variable without impacting the maximal imbalance across all confounders. There are many important implications of requiring the monotonic imbalance bounding property. First, many methods of confounder adjustment, such as nearest-neighbor or caliper matching as defined in Cochran and Rubin (1973), are not monotonic imbalance bounding because they fix the number of treated and control observations within strata, while monotonic imbalance bounding methods imply variable numbers of observations. By contrast, if the caliper matching procedure were to allow for different calipers for each confounder, then this would be monotonic imbalance bounding.
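The idea that each confounder's discrepancy is controlled by its own tolerance, the per-variable analogue of allowing a different caliper per confounder, can be illustrated with a small check on sample means. The function name, tolerances, and data below are hypothetical.

```python
def mean(xs):
    return sum(xs) / len(xs)

def imbalance_bounded(treated, control, eps):
    """Check that, for every confounder j, the discrepancy in sample
    means between groups is within its own tolerance eps[j].

    treated, control: lists of covariate vectors (tuples of equal length).
    eps: per-confounder tolerances.
    Returns (all bounds hold?, per-confounder gaps).
    """
    p = len(eps)
    gaps = [abs(mean([z[j] for z in treated]) - mean([z[j] for z in control]))
            for j in range(p)]
    return all(gap <= e for gap, e in zip(gaps, eps)), gaps

treated = [(1.0, 10.0), (1.2, 12.0)]
control = [(0.9, 11.0), (1.1, 13.0)]
ok, gaps = imbalance_bounded(treated, control, eps=(0.2, 1.5))
print(ok, gaps)  # True, with gaps close to [0.1, 1.0]
```

Tightening the first tolerance (say, eps=(0.05, 1.5)) would flag only the first confounder, without affecting how the second is judged.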
Iacus et al. (2011) also show that a key goal in causal effect estimation is to reduce model dependence (Ho et al., 2007), meaning that there should not be extrapolation of potential outcomes to regions in the covariate space where there are no observations. Under some assumptions on the model for potential outcomes, they show that for monotonic imbalance bounding methods, the model dependence is upper bounded by terms involving an imbalance parameter. In addition, the estimation error for average causal effects using monotonic imbalance bounding matching methods can also be upper bounded by terms involving this parameter. As a concrete example of a new monotonic imbalance bounding method, Iacus et al. (2011) propose a coarsened exact matching algorithm for creating strata. It proceeds as follows:

1. For each variable Z_j (j = 1, . . . , p), coarsen it into a function C_j(Z_j) which takes on fewer values than the unique values of Z_j;

2. Perform exact matching between treated and control observations using the vector (C_1(Z_1), C_2(Z_2), . . . , C_p(Z_p)). This effectively creates strata S_1, . . . , S_J based on the unique combinations of (C_1(Z_1), C_2(Z_2), . . . , C_p(Z_p)).

3. Discard strata in which there are only observations with T = 0. For strata with only observations from the T = 1 population, extrapolate the potential outcome Y(0) using the available controls, or discard them by restricting the causal effect of interest to the treated units for which the causal effect can be identified without further modelling-based assumptions. For strata with both treated and control observations, compare the outcome between the two populations.

Iacus et al. (2011) have developed very easy-to-use software packages for implementing coarsened exact matching in R and Stata. They show that the coarsened exact matching approach satisfies the monotonic imbalance bounding property with respect to a variety of functionals of interest. In addition, they provide a very intuitive explanation for what coarsened exact matching attempts to mimic. While classical propensity score approaches attempt to mimic a randomized study, analyses using coarsened exact matching will mimic randomized block designs, where the blocks are by definition predictive of the potential outcomes. It is well known that in this situation, randomized block designs will yield more efficient estimators (e.g., Box, Hunter and Hunter, 1978). The other approach that has become of recent interest has been to incorporate covariate balance as part of the causal effect estimation process.
For example, Imai and Ratkovic (2014) propose using generalized methods of moments for causal effect estimation in which covariate balance is treated as a constraint in the procedure. Chan et al. (2016) propose the use of calibration estimators for causal effect estimation in which covariate balance constraints lead to a constrained Lagrangian dual optimization problem. For these approaches, the authors are able to develop consistency and asymptotic normality results for the causal effect estimators.
As described in more detail in Section 3.1, we will use an integral probability metric to assess covariate balance between the two populations. A similar metric is used in Kallus (2020), where it is defined as the target error to be minimized when obtaining optimal weighting coefficients for estimating the sample average treatment effect on the treated.
While our approaches are complementary, there are several notable differences. First, in Kallus (2020) the metric is used to find weights that correspond to known matching methods, and the functions involved in the metric represent the expected relationship between potential outcomes and covariates. In our case, we take any matching procedure and, given the measure of match, bound it by a probability metric involving functions that represent the matching procedure itself, providing probability bounds on how good the matching is. In addition, Kallus (2020) assumes a fixed population and therefore no randomness in the covariate values, whereas our concern focuses precisely on the sampling distribution of these covariates. The difference between these two approaches is further explained in Section 2.3.

2.3 Modes of inference and covariate balance

In looking at the various proposals for accommodating covariate balance, it is useful to reconsider the ways in which one can perform causal inference. Imbens and Rubin (2015) give a nice overview of the distinction between the finite-population and superpopulation modes of causal inference. The finite-population mode of causal inference treats the sampled units as the population of interest. The stochastic nature of the experiment is due solely to the treatment mechanism, so that randomness occurs only with respect to the treatment assignments. If one adopts this finite-sample point of view, then one can use a randomization-based approach to performing inference for causal effects. By contrast, the superpopulation mode of inference considers two sources of variability: the first is due to the randomness in the treatment assignments, and the second is due to the fact that the sampling units are a random sample from a superpopulation.
Thus, this approach posits a superpopulation from which the sampling units come. Revisiting the previous work from Section 2.2, the equal percent bias reduction theory and the work of Iacus et al. (2011) posit results about covariate balance assuming a finite-population mode of causal inference. Covariate balance results for these methods therefore involve subsampling and matching from the sampling units, and the balance occurs with respect to the matched sample. The concept of balance we introduce in the next section can accommodate both modes of inference.

3 Main Results

3.1 Ideal Balance

In this section, we wish to study covariate balance from the viewpoint of comparing the distributions L(Z | T = 0) and L(Z | T = 1). To do so, we must determine how this comparison is done.
We do this by first defining probability pseudometrics.

Definition 3.1 (Pseudometric). Let A be the set of probability measures defined on a shared measurable space. A function m : A × A → [0, ∞) is a pseudometric on A if, for all µ, ν, λ ∈ A, the following conditions are satisfied:
1. m(µ, µ) = 0.
2. m(µ, ν) = m(ν, µ).
3. m(µ, ν) ≤ m(µ, λ) + m(λ, ν).

Note these properties almost make m a metric on A, but notably we do not assume that if the distance between two elements is zero, then the two elements are the same.
For the purpose of this paper, we will abuse terminology and refer to pseudometrics as metrics. The class of metrics we will work with in this article is given by

    γF(µ, ν) = sup_{f ∈ F} | ∫ f dµ − ∫ f dν |,    (3.1)

where F is a class of functions. The quantity γF(µ, ν) in (3.1) is referred to by Zolotarev (1984) as an example of a probability metric. In our notation, we drop the dependency of γF on F and write it as γ. We now define ideal balance based on (3.1).

Definition 3.2 (Ideal Balance).
Let µ and ν be distributions on the same probability space and m a pseudometric; then we say µ and ν satisfy ideal balance with respect to m if m(µ, ν) = 0.

When µ and ν are the conditional distributions of the covariates given the treatment group, as in Section 2, ideal balance is a restriction on the population. If they are instead the empirical distributions of the data, ideal balance is a sample restriction. Matching methods, in a sense, intend to achieve ideal balance on the matched data for some m. Note that at this stage we have only dealt with population distributional laws and have not described how to estimate or compute these quantities with real data. In practice, we would not expect ideal balance to hold in observational studies. However, it does serve as a useful benchmark through which we can study the behavior of various functional constraints. Here, the function spaces F in (3.1) play the role of the constraints; more complex function spaces correspond to more constraints on the joint distributions of Z | T = 1 and Z | T = 0.

3.2 A Concentration Inequality Result

Let F be a function space and ∥·∥ a norm. The covering number N(ϵ, F, ∥·∥) is the minimum number of ∥·∥-balls of radius ϵ needed to cover F, where a ball centered around f ∈ F is the set {g | ∥f − g∥ ≤ ϵ}. Intuitively, one can think of the covering number as a measure of the complexity of the function class F. For a measure µ, the Lr(µ)-norm, for r ≥ 1, is defined by ∥f∥^r_{Lr(µ)} = ∫ |f|^r dµ. Throughout the paper, we will assume F is uniformly bounded.
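The metric γ defined in (3.1) can be approximated on data by replacing µ and ν with empirical distributions and taking the supremum over a finite collection of functions. The specific bounded, [0, 1]-valued functions below are an illustrative assumption:

```python
import numpy as np

def empirical_ipm(x, y, functions):
    """gamma_F of two empirical distributions:
    sup_{f in F} | mean f(x) - mean f(y) |."""
    return max(abs(np.mean(f(x)) - np.mean(f(y))) for f in functions)

# A small illustrative class of [0, 1]-valued functions.
F = [lambda z: 1 / (1 + np.exp(-z)),
     lambda z: (np.tanh(z) + 1) / 2,
     lambda z: np.clip(z, 0.0, 1.0)]

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, size=5_000)
y = rng.normal(0.0, 1.0, size=5_000)   # same law: gamma should be near 0
z = rng.normal(1.0, 1.0, size=5_000)   # shifted law: gamma should be larger

gap_null = empirical_ipm(x, y, F)
gap_shift = empirical_ipm(x, z, F)
```

A richer class F tightens the notion of balance but, as Theorem 3.3 below makes precise, also inflates the complexity term controlling the concentration of the empirical metric.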
Note that if µ is any probability measure, then under uniform boundedness we can endow F with the norm Lr(µ) without dropping any of its elements. Unless otherwise specified, we assume the range of the functions in F is [0, 1]. Finally, for a function class F, an envelope function of F is defined as any function h such that for all f in F the inequality |f(x)| ≤ |h(x)| is satisfied for any x.

Let {Zi}_{i=1}^n be a sample where each Zi has distribution Q. We denote the empirical distribution by Qn. The F-indexed empirical process G_n^Q is defined as the map taking any f ∈ F to

    G_n^Q(f) = √n ( ∫ f dQn − ∫ f dQ ) = (1/√n) Σ_{i=1}^n ( f(Zi) − ∫ f dQ ).

Theorem 3.3.
Let Q0_{n0} and Q1_{n1} be two empirical distributions of observations sampled from Q0 and Q1, respectively, and assume ideal balance holds for Q0 and Q1 with respect to γ. Let M be the collection of probability measures. If there exist constants C and K such that F satisfies

    sup_{µ ∈ M} N(ϵ, F, ∥·∥_{Lr(µ)}) ≤ (K/ϵ)^C  for every 0 < ϵ < C,

then

    Pr{γ(Q0_{n0}, Q1_{n1}) > δ} ≤ (Dδ / (2√C))^C [ n0^{C/2} exp(−n0 δ²/2) + n1^{C/2} exp(−n1 δ²/2) ],    (3.2)

where D is a constant depending on K only.

The proofs of Theorem 3.3 and subsequent results are found in the supplementary material. Throughout the paper, we will use Bn(δ, D, C) for the bound in Theorem 3.3, where the subscript n reminds us of the dependence on the sample size.

Remark 3.4. We note that the bound in (3.2) is nonasymptotic and will hold for any sample size.

Remark 3.5. In this framework, the function classes play an important role. Theorem 3.3 gives a bound in terms of the entropy number of the function class in question. In particular, low-complexity function classes are favored using this approach. A key technical point is ensuring that the covering number condition in the theorem is satisfied.
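Since the right-hand side of (3.2) is fully explicit, it can be evaluated numerically for given constants. The constants D and C are generally unknown in practice, so the default values below are placeholders for illustration:

```python
import numpy as np

def concentration_bound(delta, n0, n1, D=1.0, C=2.0):
    """B_n(delta, D, C): the right-hand side of (3.2),

    (D*delta / (2*sqrt(C)))**C * (n0**(C/2) * exp(-n0*delta**2 / 2)
                                  + n1**(C/2) * exp(-n1*delta**2 / 2)).

    D and C are illustrative placeholder constants.
    """
    lead = (D * delta / (2.0 * np.sqrt(C))) ** C
    tail = (n0 ** (C / 2) * np.exp(-n0 * delta ** 2 / 2)
            + n1 ** (C / 2) * np.exp(-n1 * delta ** 2 / 2))
    return lead * tail

# The bound holds for any sample size (Remark 3.4) and shrinks as n grows.
b_small = concentration_bound(0.5, n0=100, n1=100)
b_large = concentration_bound(0.5, n0=10_000, n1=10_000)
```

For fixed δ, the polynomial factors n^{C/2} are dominated by the Gaussian tails, so the bound decays rapidly in n0 and n1.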
To do so, we will primarily use results from Vapnik-Chervonenkis theory (Chervonenkis and Vapnik, 1971) to determine appropriate covering numbers. In most cases the function classes of interest are not real-valued but vector-valued. The following straightforward results can be used to deal with these cases.

Lemma 3.6. Let {Fi}_{i=1}^d be a collection of real-valued function spaces and let (P^i, Q^i) satisfy ideal balance under γ_{Fi} for each 1 ≤ i ≤ d. Let (Pi, Qi) denote their respective empirical distributions, with implicit sample size dependence. Then

    Pr{ Σ_{i=1}^d γ_{Fi}(Pi, Qi) > δ } ≤ Σ_{i=1}^d B(δ/d, Di, Ci).

Now, consider the collection {Fi}_{i=1}^d, where each Fi is a real-valued function space.
Define F = {f = (f1, . . . , fd)^T | fi ∈ Fi for all i}. Let πℓ be the ℓth coordinate projection; that is, for a finite-dimensional vector x = (x1, . . . , xd), πℓ(x) = xℓ. Finally, define Fπ = {πℓ ◦ f | f ∈ F, 1 ≤ ℓ ≤ d}. Note the elements of Fπ are real-valued. The following lemma tells us we can either assume µ and ν satisfy ideal balance with respect to each of the γ_{Fi}, or that they satisfy ideal balance with respect to γ_{Fπ}.
Lemma 3.7. Let F, {Fi}_{i=1}^d, and Fπ be as above, and let µ and ν denote two probability measures. Then the following are equivalent:
1. µ and ν satisfy ideal balance with respect to γ_{Fπ};
2. µ and ν satisfy ideal balance with respect to each γ_{Fi}, 1 ≤ i ≤ d;
3. max_i γ_{Fi}(ν, µ) = 0.

The following corollary will be very useful.

Corollary 3.8. Let F and Fπ be as above, and Fi = F* for all i.
Assume F* has polynomial covering number. Let {X0_j}_{j=1}^{n0} ∼ Q0 and {X1_j}_{j=1}^{n1} ∼ Q1, where Q0 and Q1 satisfy ideal balance with respect to γ_{Fπ}. Fix f* ∈ F. Then

    Pr{ ∥ (1/n0) Σ_{j=1}^{n0} f*(X0_j) − (1/n1) Σ_{j=1}^{n1} f*(X1_j) ∥_{ℓp} > δ } ≤ d B(δ/d^{1/p}, D*, C*)

for finite p ≥ 1, and

    Pr{ ∥ (1/n0) Σ_{j=1}^{n0} f*(X0_j) − (1/n1) Σ_{j=1}^{n1} f*(X1_j) ∥_{ℓ∞} > δ } ≤ d B(δ, D*, C*),

where D* and C* depend only on F*.

Definition 3.9 (Vapnik-Chervonenkis Dimension). The Vapnik-Chervonenkis dimension of a function class F on an ambient set X is the cardinality of the largest subset shattered by F. A function class F shatters a set S ⊆ X if for each possible 0-1 labeling of the elements of S there is at least one function f ∈ F that realizes such labeling.

A key result we will use is an application of Theorem 2.6.7 of Van Der Vaart and Wellner (1996), which implies that if a function class G has finite Vapnik-Chervonenkis dimension v, then

    sup_µ N(ϵ, G, L2(µ)) ≤ (K/ϵ)^{C*},

where C* = 2v − 2.

4 Examples

4.1 Balance on coarsened function classes

Consider coarsened exact matching as described in Iacus et al. (2011). Let Z0 = {Z0_i}_{i=1}^{n0} and Z1 = {Z1_j}_{j=1}^{n1} be the control and treatment samples, respectively. In coarsened exact matching we create a partition of the sample space, match samples which are found in the same element of the partition, and discard samples in subsets without samples from the opposite group. We are interested in the quantity

    ∆ = (1/m0) Σ_{i ∈ M0} w0_i Z0_i − (1/m1) Σ_{j ∈ M1} w1_j Z1_j,

where mℓ is the number of matched samples for the ℓth group, Mℓ is its index set, and {w0_i, w1_j}_{i ∈ M0, j ∈ M1} are weights.
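Given a matched sample, the quantity ∆ is a weighted difference of group means over the matched units. A minimal sketch, with uniform weights as an illustrative default:

```python
import numpy as np

def matched_mean_difference(Z0, Z1, M0, M1, w0=None, w1=None):
    """Delta = (1/m0) sum_{i in M0} w0_i Z0_i
             - (1/m1) sum_{j in M1} w1_j Z1_j.

    Z0, Z1 : (n0, p) and (n1, p) control / treatment covariates;
    M0, M1 : index arrays of matched units;
    w0, w1 : optional weights (uniform when omitted, an illustrative choice).
    """
    Z0, Z1 = np.asarray(Z0, float), np.asarray(Z1, float)
    w0 = np.ones(len(M0)) if w0 is None else np.asarray(w0, float)
    w1 = np.ones(len(M1)) if w1 is None else np.asarray(w1, float)
    m0, m1 = len(M0), len(M1)
    return ((w0[:, None] * Z0[M0]).sum(axis=0) / m0
            - (w1[:, None] * Z1[M1]).sum(axis=0) / m1)

rng = np.random.default_rng(3)
Z0 = rng.normal(0.0, 1.0, size=(100, 2))
# Treated units constructed as near-exact matches of the first 50 controls.
Z1 = Z0[:50] + rng.normal(0.0, 0.01, size=(50, 2))
delta = matched_mean_difference(Z0, Z1, M0=np.arange(50), M1=np.arange(50))
```

The closer the matching, the smaller each component of ∆, which is exactly the quantity the probability bounds below control.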
In the supplementary material we describe how to express this matching procedure as a function f on the variables Z0_i and Z1_j. This allows us to express ∆ in terms of f. We further specify the function space F for which ∥∆∥ ≤ γF(Q0_{n0}, Q1_{n1}) holds for an appropriate norm. Using the properties of F together with the bound above, we can derive our results of interest:

    Pr(|∆k| ≥ δ) ≤ B(δ, D, C*),

for a constant C*, where ∆k is the kth component of ∆. Similarly, Pr(∥∆∥_{ℓp} ≥ δ) ≤ d B(δ/d^{1/p}, D, C*) and Pr(∥∆∥_{ℓ∞} ≥ δ) ≤ d B(δ, D, C*).

4.2 Covariate balance on the linear propensity score

As discussed in Section 3, there has been a lot of work on developing matching results based on linear discriminant analysis. That is, we assume that P(Z | T = ℓ) follows N(µℓ, Σ).
Under this model, the metric for consideration is the logit of the propensity score (see Stuart (2010)). In the supplementary material we show that the distance |logit(e(Z)) − logit(e(Z′))| can be expressed in terms of the linear discriminant analysis hyperplane vector. Indeed, if p is the dimension of the covariates, we can create a function space F derived from hyperplanes and with Vapnik-Chervonenkis dimension p + 1 such that

∆ = | (1/m0) Σ_{i∈M0} logit(e(Z_i)) − (1/m1) Σ_{j∈M1} logit(e(Z_j)) | ≤ γF(Q0_{n0}, Q1_{n1}),

allowing us, using Theorem 3.3, to determine the bound of interest: Pr{∆ > δ} ≤ B(δ, D, 2p).

4.3 Covariate balance using kernels

Many authors (Hazlett, 2016; Wong and Chan, 2018; Zhu et al., 2018) have advocated for the use of kernel methods for matching and evaluating covariate balance. This corresponds to assuming that F in (3.1) represents a reproducing kernel Hilbert space. Further details about these function spaces can be found in the supplementary material. To apply Theorem 3.3 to the kernel setting, we note that there exists a version of the linear discriminant analysis from Section 4.2 that can be extended to the reproducing kernel Hilbert space setting (Baudat and Anouar, 2000). Let H be a reproducing kernel Hilbert space and ∥·∥_H the norm associated to it; then a natural metric to consider for a kernelized matching procedure would be

∆_H = ∥ (1/m0) Σ_{i∈M0} f(Z_i) − (1/m1) Σ_{j∈M1} f(Z_j) ∥_H,

which represents a functional generalization of ∆ from Section 4.2, and where f ∈ H is an appropriate function chosen by the user. Then ∆_H ≤ γF(Q0_{n0}, Q1_{n1}), and we can use the previous results with a few adjustments. We show in the supplementary material that P(∆_H > δ) ≤ B(δ, D, C*), where C* depends on the smoothness properties of H.

5 Practical implementation

So far, we have given theoretical results that describe how algorithms under various function classes behave under the ideal balance assumption. As noted earlier, the ideal balance definition is strict but permits theoretical characterization of various algorithms. The question then naturally arises as to how to use the theoretical results from the previous sections in practice. Note that one can view the metric in equation (3.1) as a multivariate balance metric, which differentiates it from many other balance metrics in the literature. Zhu et al. (2018) used (3.1), where F is a reproducing kernel Hilbert space, as a covariate balance diagnostic. There, they found that in certain situations the diagnostic was more sensitive in finding covariate imbalances relative to univariate diagnostics as well as those based on the prognostic score (Hansen, 2008). Consider the problem of estimating the average causal effect among the treated. In practice, it is unlikely that ideal balance will hold for the treatment and control populations. That is to say, γF(Q0, Q1) ≠ 0, unless treatment is randomized. Therefore, we would not be able to use Theorem 3.3 in an observational study.
However, a slight modification can be made for which the analysis remains largely the same. Let w ∈ W ⊂ R^{n0} be a weight vector and define

Q0_w = (1 / Σ_{i:Ti=0} w_i) Σ_{i:Ti=0} w_i δ_{X_i}.

The majority of methods in causal inference have as a goal to find appropriate weights w for which Q0_w converges to Q* for some distribution Q* that indeed satisfies ideal balance with Q1. That is, for which γF(Q*, Q1) = 0. In order for this modification to be feasible, we just need to modify our proof of Theorem 3.3 and include the convergence rates of Q0_w to Q*, which may change depending on the problem. Having done so, we continue in a parallel manner. Let f* ∈ F represent a matching procedure with balance diagnostic

∆ = | ∫ f dQ0_w − ∫ f dQ1_{n1} |;

then, by the definition of γF, ∆ ≤ γF(Q0_w, Q1_{n1}).
Therefore, if we can find weights for which Q0_w converges to Q* and γF(Q*, Q1) = 0, then we can bound the probability that ∆ exceeds some threshold δ. There are many methods for finding w ∈ W, the most straightforward being the inverse probability of treatment weights,

w_i = T_i + e(Z_i)(1 − T_i) / (1 − e(Z_i)).

Even heavily prescribed matching algorithms that are found throughout the causal inference literature find some weights w ∈ W, as described by Abadie and Imbens (2006). In one-to-one matching with replacement, let J(i) = {j1(i), j2(i), ...} be the set of indices of units that are matched with the unit i = 1, 2, ..., n.
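These inverse-probability-of-treatment weights can be sketched directly; the propensity scores below are hypothetical stand-ins rather than fitted values:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
e = rng.uniform(0.1, 0.9, size=n)         # assumed known propensity scores e(Z_i)
T = rng.binomial(1, e)                    # treatment assignment

# w_i = T_i + e(Z_i)(1 - T_i) / (1 - e(Z_i)): treated units get weight 1,
# control units get the odds of treatment, e / (1 - e).
w = T + e * (1 - T) / (1 - e)
print(w[T == 1].min(), w[T == 0].max())
```

This is the weighting appropriate for the effect among the treated: treated units are left as-is, while controls are upweighted toward the treated covariate distribution.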
If there are no ties, then J(i) = j(i). With ties present, which occur frequently, especially with exact matching (see coarsened exact matching), J(i) might contain multiple matched indices. The matching process will allow us to produce weights for every unit by solving

w_i = Σ_{l:Tl=1} I[i ∈ J(l)] / #J(l) for all i ∈ {i : Ti = 0},

where #J(i) denotes the cardinality of J(i).

6 Simulation Studies

We perform a simulation study to evaluate the distribution of the distances reported in Section 4. We also examine their downstream consequences for estimating average treatment effects on the treated. There are two data generating mechanisms that we consider. In addition, we vary the sample size and the variance of the responses for a total of eight scenarios. We replicate each of these scenarios, described below, over 1000 iterations.
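Returning to the matched-set weights w_i = Σ_{l:Tl=1} I[i ∈ J(l)]/#J(l) above: a short sketch with a hypothetical one-dimensional covariate and nearest-neighbour matching with replacement, where tied matches share a treated unit's weight equally:

```python
import numpy as np

rng = np.random.default_rng(3)
x0 = rng.normal(0.0, 1.0, size=30)    # control covariate values (hypothetical)
x1 = rng.normal(0.3, 1.0, size=10)    # treated covariate values (hypothetical)

w = np.zeros(len(x0))
for xl in x1:
    d = np.abs(x0 - xl)
    J = np.flatnonzero(d == d.min())  # matched control indices J(l), ties allowed
    w[J] += 1.0 / len(J)              # each treated unit spreads weight 1 over #J(l) matches
print(w.sum())
```

Each treated unit distributes total weight one over its matched controls, so the control weights sum to the number of treated units.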
We report the mean and Monte Carlo standard errors of the three distances (∆) examined in Section 4 (Table 1), along with the kernel density estimates for one representative scenario (Figure 1). We also evaluate the downstream effects of these ∆ statistics on the average treatment effect using the one-to-one matching methods described by Abadie and Imbens (2006), implemented in the Matching package (Sekhon, 2008) (Tables 2 and 6). For i = 1, 2, ..., n, let Zi1 ∼ N(1, 4), Zi2 ∼ Bin(1, 0.3), Zi3 ∼ N(0, 1), and Zi4 ∼ Bin(1, 0.5), where Ti denotes the binary treatment assignment. The conditional means of the outcomes for the treated, µ1(Zi), and the controls, µ0(Zi), are constructed as

µ0(Zi) = 10 − 3Zi1 − Zi2 + Zi3 + 3Zi4 and µ1(Zi) = µ0(Zi) + 5 + 3Zi1 − Zi2 + Zi3 − 3Zi4.   (6.1)

We sample Ti from a Bin(1, 0.5) distribution. For i = 1, 2, ..., n, we sample the counterfactual responses Yi(1) ∼ N[µ1(Zi), σ2] and Yi(0) ∼ N[µ0(Zi), σ2]. The observed outcome is Yi = TiYi(1) + (1 − Ti)Yi(0). We will refer to these conditions with the label "baseline". For the error variance, we set σ2 ∈ {5, 10}. For the scenario labeled "sparse", we include an additional set of covariates that ultimately do not affect the outcome.
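One replicate of the baseline data-generating mechanism in (6.1) can be sketched as follows; the seed and implementation details are incidental choices:

```python
import numpy as np

rng = np.random.default_rng(4)
n, sigma2 = 1000, 5.0

# Covariates and randomized treatment, as in the baseline scenario.
Z1 = rng.normal(1, 2, size=n)          # N(1, 4): mean 1, variance 4 (sd 2)
Z2 = rng.binomial(1, 0.3, size=n)
Z3 = rng.normal(0, 1, size=n)
Z4 = rng.binomial(1, 0.5, size=n)
T = rng.binomial(1, 0.5, size=n)

# Conditional means from (6.1).
mu0 = 10 - 3 * Z1 - Z2 + Z3 + 3 * Z4
mu1 = mu0 + 5 + 3 * Z1 - Z2 + Z3 - 3 * Z4

# Counterfactual responses and the observed outcome.
Y0 = rng.normal(mu0, np.sqrt(sigma2))
Y1 = rng.normal(mu1, np.sqrt(sigma2))
Y = T * Y1 + (1 - T) * Y0
print(Y.mean())
```

Note that E[µ1(Z) − µ0(Z)] = 5 + 3(1) − 0.3 + 0 − 3(0.5) = 6.2, which matches the target θ = 6.2 reported in the tables.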
The outcomes are determined by the potential outcome models in (6.1), yet the methods we consider also account for the noise covariates Zi5 ∼ N(−1, 4), Zi6 ∼ Bin(1, 0.7), Zi7 ∼ N(0, 1), and Zi8 ∼ Bin(1, 0.5). As mentioned before, we test the three examples described in Section 4 in their ability to produce efficient, unbiased estimates of the average treatment effect on the treated. Linear discriminant analysis sets f to be the logit transformation of the fitted posterior probability that each unit receives treatment. The support vector machine example uses the distance of each point from the resulting separating hyperplane, assuming a linear kernel. Coarsened exact matching is performed similarly to what is described in Iacus et al. (2011) and is implemented with the cem R package.

Table 1 shows the results of our simulation experiment. Since balance is already achieved through randomization in this simulation, we also report the unmatched, crude estimate of the average causal effect for reference. Here the value ∆ is the maximum absolute sample mean difference for the unweighted covariates. The values ∆ are not necessarily directly comparable in this example. They do represent the distributions whose tail probabilities we are bounding in the theorem. The simulation serves to characterize some of the densities of these statistics so that we might better understand which values of δ are acceptable for the different balance methods in Section 4. We see that the values for ∆ after coarsened exact matching were the most heavily concentrated, followed closely by the values generated by linear discriminant analysis. The balance diagnostics from a support vector machine and from an unweighted comparison yielded considerably more dispersed values.

n    | σ2 | Scenario | θ   | A           | B           | C           | D
1000 | 5  | baseline | 6.2 | 0.11 (0.07) | 0.03 (0.02) | 0.02 (0.01) | 0.09 (0.04)
1000 | 5  | sparse   | 6.2 | 0.15 (0.07) | 0.01 (0.01) | 0.03 (0.02) | 0.13 (0.05)
1000 | 10 | baseline | 6.2 | 0.12 (0.07) | 0.03 (0.02) | 0.02 (0.01) | 0.09 (0.05)
1000 | 10 | sparse   | 6.2 | 0.15 (0.07) | 0.01 (0.01) | 0.03 (0.02) | 0.13 (0.05)
2000 | 5  | baseline | 6.2 | 0.08 (0.05) | 0.02 (0.01) | 0.01 (0.01) | 0.06 (0.03)
2000 | 5  | sparse   | 6.2 | 0.11 (0.05) | 0.01 (0.01) | 0.02 (0.01) | 0.09 (0.04)
2000 | 10 | baseline | 6.2 | 0.08 (0.05) | 0.02 (0.01) | 0.01 (0.01) | 0.06 (0.03)
2000 | 10 | sparse   | 6.2 | 0.11 (0.05) | 0.01 (0.01) | 0.02 (0.01) | 0.09 (0.04)

Table 1: Average and Monte Carlo standard error of ∆ found in the experiment. In this table, Method A is the unweighted estimate, Method B refers to coarsened exact matching, Method C to linear discriminant analysis, and Method D to support vector machines. Since both A and B create a vector-valued ∆, we report the maximum.

One point of direct comparison that we may take between the different ∆ estimates is the downstream effect of the various balancing methods on estimating the average treatment effect.
This portion of the simulation study shows how the concentration of the distribution of ∆ may have little to do with the actual quality of the average treatment effect estimates, the ultimate result for causal inference. Although the distribution of ∆ under coarsened exact matching was the narrowest among the densities found for ∆ under linear discriminant analysis and support vector machines, its estimated average treatment effect is also the most biased. Its Monte Carlo standard errors also seem to be greater than those of the other two balancing methods. Linear discriminant analysis also conferred a narrow concentration of ∆ statistics yet produced the most efficient estimates of the average treatment effect, other than the unweighted estimate, which had the smallest Monte Carlo standard errors. This result is interesting because the unweighted diagnostics had the most dispersed values of ∆. This leads us to believe that the scale of the ∆ statistics must be carefully considered while evaluating balance, in order to determine which method is most suitable for evaluating treatment effects.
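The point above, that a tightly concentrated balance statistic need not yield an accurate treatment-effect estimate, can be illustrated with a toy simulation. The data-generating process, the standardized-mean-difference diagnostic, and all names below are illustrative assumptions, not the paper's simulation design:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n=1000, sigma2=5.0, theta=6.2):
    """Illustrative DGP (not the paper's): one confounder x, constant effect theta."""
    x = rng.normal(size=n)
    p = 1.0 / (1.0 + np.exp(-x))          # propensity increases with x
    t = rng.binomial(1, p)
    y = theta * t + 2.0 * x + rng.normal(scale=np.sqrt(sigma2), size=n)
    return x, t, y

def smd(x, t):
    """Standardized mean difference: a simple stand-in for a balance diagnostic."""
    x1, x0 = x[t == 1], x[t == 0]
    pooled = np.sqrt((x1.var(ddof=1) + x0.var(ddof=1)) / 2.0)
    return abs(x1.mean() - x0.mean()) / pooled

x, t, y = simulate()
# Naive difference in means: biased for theta because x confounds t and y.
att_unweighted = y[t == 1].mean() - y[t == 0].mean()
print(f"balance SMD: {smd(x, t):.3f}, naive ATT estimate: {att_unweighted:.2f}")
```

With this design the covariate imbalance (large SMD) translates directly into an upward-biased estimate of θ = 6.2, so a balance diagnostic and the downstream estimate can be compared side by side.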
```
n     σ²  Scenario  θ    A            B            C            D
1000  5   baseline  6.2  6.20 (0.33)  6.24 (0.33)  6.20 (0.42)  6.20 (0.36)
1000  5   sparse    6.2  6.20 (0.34)  6.29 (1.24)  6.21 (0.45)  6.20 (0.39)
1000  10  baseline  6.2  6.20 (0.37)  6.22 (0.40)  6.20 (0.47)  6.20 (0.42)
1000  10  sparse    6.2  6.19 (0.35)  6.31 (1.46)  6.20 (0.46)  6.22 (0.42)
2000  5   baseline  6.2  6.19 (0.24)  6.21 (0.24)  6.20 (0.29)  6.20 (0.25)
2000  5   sparse    6.2  6.20 (0.23)  6.34 (0.71)  6.21 (0.29)  6.21 (0.26)
2000  10  baseline  6.2  6.21 (0.25)  6.21 (0.26)  6.19 (0.32)  6.21 (0.28)
2000  10  sparse    6.2  6.21 (0.25)  6.38 (0.79)  6.21 (0.31)  6.21 (0.27)
```

Table 2: Summary of simulation estimates and Monte Carlo standard errors. The simulation scenarios corresponding to "baseline" and "sparse" are described in further detail in Section 6. Here, θ refers to the population average treatment effect among the treated. In this table, Method A is the unweighted estimate, Method B refers to coarsened exact matching, Method C is linear discriminant analysis, and Method D is support vector machines.

Figure 1: Kernel densities of the ∆ balancing statistics for the baseline scenario with n = 1000 and σ² = 10.
The solid line is the distribution from the unweighted estimates, the dashed line is the distribution for coarsened exact matching, the dotted line is the distribution for the linear propensity score, and the dotted-dashed line for the support vector machine examples.

[Figure 1 plot omitted: "Kernel Densities of Delta from a Monte-Carlo Simulation"; x-axis Delta (0.0–0.5), y-axis Density.]

```
n     σ²  Scenario  θ    A      B      C      D
1000  5   baseline  6.2  0.952  0.937  0.941  0.929
1000  5   sparse    6.2  0.944  0.955  0.934  0.917
1000  10  baseline  6.2  0.941  0.918  0.935  0.912
1000  10  sparse    6.2  0.955  0.950  0.951  0.931
2000  5   baseline  6.2  0.931  0.945  0.937  0.923
2000  5   sparse    6.2  0.956  0.945  0.939  0.918
2000  10  baseline  6.2  0.959  0.936  0.926  0.928
2000  10  sparse    6.2  0.953  0.946  0.948  0.935
```

Table 3: Summary of coverage probabilities from the simulation experiment. The simulation scenarios corresponding to "baseline", "interaction", "positivity", and "sparse" are described in further detail in Section 6. Here, θ refers to the population average treatment effect among the treated. In this table, Method A is the unweighted estimate, Method B refers to coarsened exact matching, Method C to linear discriminant analysis, and Method D to support vector machines.

Acknowledgments

The authors would like to acknowledge funding support from the following sources: the National Institutes of Health, the National Science Foundation, the Veterans Administration and the Grohne-Stepp Endowment from the University of Colorado Cancer Center.

Appendix

Proof of Theorem 3.3

We will use P and Q instead of Q0 and Q1 to ease the symbolic burden on the reader.

Proof. By definition of γ:
\[
\begin{aligned}
\gamma(P_{n_0}, Q_{n_1}) &= \sup_{f\in\mathcal{F}} \left| \int f\,dP_{n_0} - \int f\,dQ_{n_1} \right| \\
&= \sup_{f\in\mathcal{F}} \left| \int f\,dP_{n_0} \pm \int f\,dP \pm \int f\,dQ - \int f\,dQ_{n_1} \right| \\
&\le \sup_{f\in\mathcal{F}} \left| \int f\,dP_{n_0} - \int f\,dP - \int f\,dQ_{n_1} + \int f\,dQ \right|
   + \sup_{f\in\mathcal{F}} \left| \int f\,dP - \int f\,dQ \right| \\
&= \sup_{f\in\mathcal{F}} \left| \int f\,dP_{n_0} - \int f\,dP - \int f\,dQ_{n_1} + \int f\,dQ \right|,
\end{aligned}
\]
since γ(P, Q) = 0.
Using elementary probability arguments, we have
\[
\begin{aligned}
\Pr\{\gamma(P_{n_0}, Q_{n_1}) > \delta\}
&= \Pr\left\{ \sup_{f\in\mathcal{F}} \left| \int f\,dP_{n_0} - \int f\,dP - \int f\,dQ_{n_1} + \int f\,dQ \right| > \delta \right\} \\
&= \Pr\left\{ \sup_{f\in\mathcal{F}} \left| \frac{1}{\sqrt{n_0}} \mathbb{G}^P_{n_0}(f) - \frac{1}{\sqrt{n_1}} \mathbb{G}^Q_{n_1}(f) \right| > \delta \right\} \\
&\le \Pr\left\{ \sup_{f\in\mathcal{F}} |\mathbb{G}^P_{n_0}(f)| > \sqrt{n_0}\,\delta/2 \right\}
   + \Pr\left\{ \sup_{f\in\mathcal{F}} |\mathbb{G}^Q_{n_1}(f)| > \sqrt{n_1}\,\delta/2 \right\},
\end{aligned}
\]
where \(\mathbb{G}^P_{n_0}(f)\) and \(\mathbb{G}^Q_{n_1}(f)\) represent the \(\mathcal{F}\)-indexed empirical processes of P and Q, respectively. Applying Theorem 2.14.9 in Van der Vaart and Wellner (1996), we can bound each of the terms as follows:
\[
\Pr\left\{ \sup_{f\in\mathcal{F}} |\mathbb{G}^P_{n_0}(f)| > \sqrt{n_0}\,\delta/2 \right\}
< \left( \frac{D\sqrt{n_0}\,\delta}{2\sqrt{C}} \right)^{C} \exp(-n_0\delta^2/2)
\]
\[
\Pr\left\{ \sup_{f\in\mathcal{F}} |\mathbb{G}^Q_{n_1}(f)| > \sqrt{n_1}\,\delta/2 \right\}
< \left( \frac{D\sqrt{n_1}\,\delta}{2\sqrt{C}} \right)^{C} \exp(-n_1\delta^2/2),
\]
where D is a constant depending only on K. Plugging these two bounds into (6.2) concludes the proof.

Proof of Lemma 3.6

Proof. Define \(\gamma_i = \gamma_{\mathcal{F}_i}(P_i, Q_i)\).
Then:
\[
\Pr\left\{ \sum_i \gamma_i > \delta \right\}
= 1 - \Pr\left\{ \sum_i \gamma_i < \delta \right\}
\le 1 - \Pr(\gamma_i < \delta/d \ \forall i)
= \Pr(\exists\, i : \gamma_i > \delta/d)
\le \sum_i \Pr(\gamma_i > \delta/d)
\le \sum_i B(\delta/d, D_i, C_i),
\]
where we have used the union bound in the second inequality.

Proof of Lemma 3.7

Proof. Assume \(\gamma_{\mathcal{F}_i}(\mu, \nu) = 0\) for all i. Then
\[
\begin{aligned}
\gamma_{\mathcal{F}_\pi}(\mu, \nu)
&= \sup_{f^\pi\in\mathcal{F}^\pi} \left| \int f^\pi\,d\mu - \int f^\pi\,d\nu \right|
= \max_\ell \sup_{f\in\mathcal{F}} \left| \int \pi_\ell \circ f\,d\mu - \int \pi_\ell \circ f\,d\nu \right| \\
&= \max_\ell \sup_{f\in\mathcal{F}} \left| \int f_\ell\,d\mu - \int f_\ell\,d\nu \right|
= \max_\ell \sup_{f_\ell\in\mathcal{F}_\ell} \left| \int f_\ell\,d\mu - \int f_\ell\,d\nu \right|
= \max_\ell \gamma_{\mathcal{F}_\ell}(\mu, \nu) = 0.
\end{aligned}
\]
Conversely, assuming \(\gamma_{\mathcal{F}_\pi}(\mu, \nu) = 0\) yields
\[
\gamma_{\mathcal{F}_i}(\mu, \nu)
= \sup_{f_\ell\in\mathcal{F}_\ell} \left| \int f_\ell\,d\mu - \int f_\ell\,d\nu \right|
= \sup_{f\in\mathcal{F}} \left| \int \pi_\ell \circ f\,d\mu - \int \pi_\ell \circ f\,d\nu \right|
\le \max_\ell \sup_{f\in\mathcal{F}} \left| \int \pi_\ell \circ f\,d\mu - \int \pi_\ell \circ f\,d\nu \right|
= \gamma_{\mathcal{F}_\pi}(\mu, \nu) = 0.
\]
This proves the first two equivalences. The third one is a byproduct of the proof.

Proof of Corollary 3.8

Proof. To avoid cumbersome notation, let \(v = \frac{1}{n_0}\sum_{j=1}^{n_0} f^*(X^0_j) - \frac{1}{n_1}\sum_{j=1}^{n_1} f^*(X^1_j)\) and note \(v_\ell = \frac{1}{n_0}\sum_{j=1}^{n_0} f^*_\ell(X^0_j) - \frac{1}{n_1}\sum_{j=1}^{n_1} f^*_\ell(X^1_j)\); then:
\[
\begin{aligned}
\Pr\left\{ \|v\|_{\ell_p} > \delta \right\}
&= \Pr\left\{ \|v\|^p_{\ell_p} > \delta^p \right\}
= \Pr\left\{ \sum_\ell |v_\ell|^p > \delta^p \right\}
\le \Pr\left\{ \sum_\ell \gamma_{\mathcal{F}_\ell}(Q^0_{n_0}, Q^1_{n_1})^p > \delta^p \right\} \\
&\le \sum_\ell \Pr\left\{ \gamma_{\mathcal{F}_\ell}(Q^0_{n_0}, Q^1_{n_1})^p > \delta^p/d \right\}
= \sum_\ell \Pr\left\{ \gamma_{\mathcal{F}_\ell}(Q^0_{n_0}, Q^1_{n_1}) > \delta/d^{1/p} \right\} \\
&\le \sum_\ell B(\delta/d^{1/p}, D^*, C^*)
= d\,B(\delta/d^{1/p}, D^*, C^*),
\end{aligned}
\]
where the second and third inequalities follow from a slight variation of Lemma 3.6 and an application of Lemma 3.7. For the \(\ell_\infty\) case we have:
\[
\Pr\left\{ \|v\|_{\ell_\infty} > \delta \right\}
\le \Pr\left\{ \max_\ell |\gamma_\ell| > \delta \right\}
\le \sum_\ell B(\delta, D^*, C^*),
\]
concluding the proof.

Balance for coarsening functions

We will show the coarsened exact matching procedure belongs to a class of functions with tractable Vapnik–Chervonenkis dimension. Consider the set \(\mathcal{S}\) of partitions with a fixed number of elements R. For a given partition \(S \in \mathcal{S}\), such that \(S = \{s_1, \ldots, s_R\}\), define \(f^{k\alpha}_S\) to be:
\[
f^{k\alpha}_S(x) = \sum_{i=1}^{R} k_i \alpha_i \chi_{s_i}(x),
\]
where \(k_i \le k\) for k a constant, \(\chi_{s_i}\) is the indicator function of \(s_i\), and \(\alpha := (\alpha_1, \ldots, \alpha_R)\) is a binary vector, that is, \(\alpha_i \in \{0, 1\}\) for each i. In words, if x is found in \(s_i\), f will return a scaled version of x if \(\alpha_i\) is 1 and zero otherwise. Now let \(\mathcal{F} := \{f^{k\alpha}_S\}_{S\in\mathcal{S},\,\alpha\in A,\,k\le\kappa}\), where A is the set of all binary vectors of size R and \(\kappa \in \mathbb{R}\). Hence, the coarsened exact matching procedure belongs to this class of functions, since in that case \(\alpha_i\) indicates whether there are at least two members of different groups in stratum \(s_i\). For any sample point x, the weights are usually chosen in the following manner: if x is a treated unit, \(w^1_i = 1\); otherwise, \(w^0_i = (m^s_1/m_1)/(m^s_0/m_0)\), where s is the stratum x belongs to. Letting \(k_i = w^\ell_i\, n_\ell/m_\ell\) appropriately weighs matched samples.
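The weighting rule just described can be sketched for a single coarsened covariate. The binning rule and all function names below are illustrative assumptions; real coarsened exact matching coarsens every covariate jointly:

```python
import numpy as np

def cem_weights(x, t, n_bins=5):
    """Coarsened exact matching on one covariate: bin x into strata, keep
    strata containing both groups, and weight controls in stratum s by
    (m_s1/m1)/(m_s0/m0), treated units by 1.  A minimal illustrative sketch."""
    strata = np.digitize(x, np.histogram_bin_edges(x, bins=n_bins))
    w = np.zeros(len(x))
    matched = [s for s in np.unique(strata)
               if (t[strata == s] == 1).any() and (t[strata == s] == 0).any()]
    m1 = sum(((strata == s) & (t == 1)).sum() for s in matched)  # matched treated
    m0 = sum(((strata == s) & (t == 0)).sum() for s in matched)  # matched controls
    for s in matched:
        in_s = strata == s
        ms1 = (in_s & (t == 1)).sum()
        ms0 = (in_s & (t == 0)).sum()
        w[in_s & (t == 1)] = 1.0                       # treated: w^1 = 1
        w[in_s & (t == 0)] = (ms1 / m1) / (ms0 / m0)   # controls: ratio of stratum shares
    return w

rng = np.random.default_rng(1)
x = rng.normal(size=200)
t = rng.binomial(1, 0.4, size=200)
w = cem_weights(x, t)
```

A useful sanity check on the formula: summing the control weights over matched strata gives \(\sum_s m^s_0 \cdot (m^s_1/m_1)/(m^s_0/m_0) = m_0\), so the weights preserve the total matched-control count while reshaping the stratum composition to mirror the treated group.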
We just need to add the mild assumption that the ratio of sample to matched size per stratum s does not grow faster than √κ, that is, n_ℓ/m^s_ℓ ≤ √κ for all s ∈ S, because in that case w^0_i ≤ m_0/m^s_0 ≤ n_0/m^s_0 ≤ √κ and n_ℓ/m_ℓ ≤ √κ m^s_ℓ/m_ℓ ≤ √κ, so k_i ≤ κ. Finally, notice that any similar function with a smaller partition size can be expressed by a function in F, so we can consider variable partition size as long as it does not exceed a reasonable bound R. For any set of points of size R there is a partition S containing each point in a different element, and therefore an α that can assign each point arbitrarily to either 0 or 1. So F shatters such a set. However, if we add an extra point, then, since the number of partition elements is constrained, it would have to share a partition element with a previous point, and so share its assignment under f^{kα}_S. So the Vapnik-Chervonenkis dimension of F is R. Finally, let g(Z^ℓ) = Q^ℓ_{n_ℓ}, where Q^ℓ_{n_ℓ} is the empirical distribution of the sample Z^ℓ for group ℓ.
Let k∗ be chosen as above and let (S∗, α∗) be the particular partition and binary vector used for coarsened exact matching. Then, for the ℓth component we get:

| (1/m_0) Σ_{i∈M_0} w^0_i Z^0_{i,ℓ} − (1/m_1) Σ_{j∈M_1} w^1_j Z^1_{j,ℓ} |
  = | (1/n_0) Σ_{i=1}^{n_0} f^{k∗α∗}_{S∗,ℓ}(Z^0_i) − (1/n_1) Σ_{j=1}^{n_1} f^{k∗α∗}_{S∗,ℓ}(Z^1_j) |
  ≤ sup_{f_ℓ∈F∗} | (1/n_0) Σ_{i=1}^{n_0} f_ℓ(Z^0_i) − (1/n_1) Σ_{j=1}^{n_1} f_ℓ(Z^1_j) |
  = γ_{F∗}(Q^0_{n_0}, Q^1_{n_1}) = γ_{F∗}(g(Z^0), g(Z^1)).

Thus, the discrepancy among the matched samples per dimension is bounded by the γ_{F∗} distance of the unmatched samples. Finally, the function h(x) := κx is an envelope function of F and has norm ∥h∥_{L2(µ)} < ∞ as long as we assume a compact domain, which is reasonable for most coarsened exact matching cases. Then, by Theorem 2.6.7 of Van Der Vaart and Wellner (1996):

sup_µ N(ϵ, F, L2(µ)) ≤ (K/ϵ)^{C∗},

for some constant K and where C∗ = 2(R − 1).
This leads us to our final result: assume ideal balance on the population probabilities holds for γ_{F_π}; then, for the ℓth component we have:

Pr( | (1/m_0) Σ_{i∈M_0} w^0_i Z^0_{i,ℓ} − (1/m_1) Σ_{j∈M_1} w^1_j Z^1_{j,ℓ} | > δ ) ≤ B(δ, D, C∗).

If we are interested in the ℓp norm of the full vector instead, then, by Corollary 3.8:

Pr( ∥ (1/m_0) Σ_{i∈M_0} w^0_i Z^0_i − (1/m_1) Σ_{j∈M_1} w^1_j Z^1_j ∥_{ℓp} > δ ) ≤ d B(δ/d^{1/p}, D, C∗),

for finite p ≥ 1, while

Pr( ∥ (1/m_0) Σ_{i∈M_0} w^0_i Z^0_i − (1/m_1) Σ_{j∈M_1} w^1_j Z^1_j ∥_{ℓ∞} > δ ) ≤ d B(δ, D, C∗).

Balance using propensity scores

Recall e(Z) = P(T = 1 | Z), and that we are assuming Z | T = ℓ ∼ N(µ_ℓ, Σ). Let p_ℓ be the probability density function of N(µ_ℓ, Σ), that is, the Gaussian density; then by the density version of Bayes' Theorem we have

p(T = 1 | Z = z) = p_1 P(T = 1) / ( p_1 P(T = 1) + p_0 P(T = 0) ).

Therefore, we can express the logit of e(Z) as

logit(e(Z)) = log( e(Z) / (1 − e(Z)) ) = log( p_1 P(T = 1) / ( p_0 P(T = 0) ) ).
Now define L_k := logit(e(Z_k)); then the matching procedure is based on the difference |L_i − L_j|. Given the above computation, and after a few straightforward steps, we get

|L_i − L_j| = | (µ_1 − µ_0)^T Σ^{−1} (Z_i − Z_j) | = | f∗(Z_i) − f∗(Z_j) |,

where f∗(x) = w^T x for w ∈ R^p. Notice the vector w is the same as the one used for linear discriminant analysis, so, adding an offset parameter, it will be useful to think of f∗ as a hyperplane. Let M^j_0 be the control units assigned to treatment unit j. We make the assumption that there is a fixed number of assigned controls per treatment unit, and so m_0 = |M^j_0| m_1.
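The linearity of the logit under the Gaussian assumption can be checked numerically. This is a sketch; the means, covariance, and class prior below are arbitrary example values, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
mu0, mu1 = np.array([0.0, 0.0]), np.array([1.0, -0.5])
Sigma = np.array([[2.0, 0.3], [0.3, 1.0]])
pi1 = 0.4                                    # P(T = 1), an arbitrary prior

def log_gauss(z, mu):
    # Gaussian log-density up to an additive constant shared by both classes
    d = z - mu
    return -0.5 * d @ np.linalg.solve(Sigma, d)

def logit_e(z):
    # logit of e(z) via the density form of Bayes' theorem; the shared
    # normalizing constant of the two class densities cancels
    return log_gauss(z, mu1) + np.log(pi1) - log_gauss(z, mu0) - np.log(1 - pi1)

zi, zj = rng.normal(size=2), rng.normal(size=2)
w = np.linalg.solve(Sigma, mu1 - mu0)        # w = Sigma^{-1}(mu1 - mu0)
lhs = abs(logit_e(zi) - logit_e(zj))
rhs = abs(w @ (zi - zj))
assert np.isclose(lhs, rhs)                  # |L_i - L_j| = |(mu1 - mu0)^T Sigma^{-1}(Z_i - Z_j)|
```

The quadratic terms of the two log-densities cancel because both classes share the covariance Σ, which is exactly why the logit reduces to the linear discriminant w^T z plus a constant.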
Then

∆ := | (1/m_1) Σ_{j∈M_1} logit(e_j) − (1/m_0) Σ_{i∈M_0} logit(e_i) |
   = | (1/m_1) Σ_{j∈M_1} L_j − Σ_{j∈M_1} (1/m_0) Σ_{i∈M^j_0} L_i |
   = | Σ_{j∈M_1} ( (1/m_1) L_j − (1/m_0) Σ_{i∈M^j_0} L_i ) |
   = | Σ_{j∈M_1} ( (1/m_1) Σ_{i∈M^j_0} L_j / |M^j_0| − (1/m_0) Σ_{i∈M^j_0} L_i ) |
   = | Σ_{j∈M_1} Σ_{i∈M^j_0} ( L_j / (m_1 |M^j_0|) − L_i / m_0 ) |
   = | Σ_{j∈M_1} Σ_{i∈M^j_0} (1/m_0) (L_j − L_i) |
   = | Σ_{j∈M_1} Σ_{i∈M^j_0} (1/m_0) ( f∗(Z_j) − f∗(Z_i) ) |
   = | (1/m_1) Σ_{j∈M_1} f∗(Z_j) − (1/m_0) Σ_{i∈M_0} f∗(Z_i) |.

That is, we can express the difference of means of logits in terms of the difference of means of the discriminant functions. Let p be the dimension of the covariates, and let F be the collection of p-dimensional hyperplanes; notice f∗ ∈ F. The Vapnik-Chervonenkis dimension of F is known to be p + 1 (Mohri et al., 2018). We would like to bound ∆ in terms of γ, but we first need some adjustments to f∗. The matching procedure determines a set Z_M = {Z_k | k ∈ M} of matched samples, where M = M_0 ∪ M_1.
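The chain of equalities above rests on L_k = f∗(Z_k) + const, with a constant that cancels between the two group means, so the difference of mean logits equals the difference of mean discriminants. A minimal numeric sketch, with arbitrary example parameters and sample sizes:

```python
import numpy as np

rng = np.random.default_rng(1)
mu0, mu1 = np.zeros(3), np.array([1.0, 0.5, -1.0])
Sigma = 1.5 * np.eye(3)
pi1 = 0.3
w = np.linalg.solve(Sigma, mu1 - mu0)        # discriminant direction Sigma^{-1}(mu1 - mu0)
offset = -0.5 * (mu1 + mu0) @ w + np.log(pi1 / (1 - pi1))   # constant part of the logit

Z1 = rng.normal(size=(4, 3)) + mu1           # matched treated units (m_1 = 4)
Z0 = rng.normal(size=(8, 3)) + mu0           # matched control units (m_0 = 8)

logits = lambda Z: Z @ w + offset            # L_k = logit(e(Z_k)) = f*(Z_k) + offset
fstar = lambda Z: Z @ w                      # f*(z) = w^T z

delta_logits = abs(logits(Z1).mean() - logits(Z0).mean())
delta_disc = abs(fstar(Z1).mean() - fstar(Z0).mean())
assert np.isclose(delta_logits, delta_disc)  # the constant offset cancels between group means
```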
By the Gaussian assumption the Z's are sampled from a Gaussian mixture, so the probability of two sample points being the same is zero. Hence there is an ϵ > 0 such that for all k ∈ M, Z ∩ B_ϵ(Z_k) = {Z_k}, that is, each ϵ-ball centered around a matched sample does not contain any other sample point (here Z is the sample set). Let S_ϵ = ∪_k B_ϵ(Z_k). Note S_ϵ is a measurable set. Let β_{S_ϵ}(x) := x χ_{S_ϵ}(x); this function maps points to zero if unmatched and to themselves if matched. Furthermore, let β_ℓ(x) := (n_ℓ/m_ℓ) χ_{M_ℓ}(x) + χ_{M^C_ℓ}(x), for ℓ ∈ {0, 1}. Each β_ℓ scales elements in M_ℓ by the factor n_ℓ/m_ℓ and leaves the rest untouched. Notice f∗_M := f∗ ◦ β_1 ◦ β_0 ◦ β_{S_ϵ} sends Z_k to (n_ℓ/m_ℓ) w^T Z_k if k ∈ M_ℓ and to 0 otherwise.
Then we can express ∆ as

∆ = | (1/m_1) Σ_{j∈M_1} f∗(Z_j) − (1/m_0) Σ_{i∈M_0} f∗(Z_i) |
  = | (1/n_1) Σ_{j=1}^{n_1} f∗_M(Z_j) − (1/n_0) Σ_{i=1}^{n_0} f∗_M(Z_i) |.

Now, consider the set F_M := {f ◦ β_1 ◦ β_0 ◦ β_S | f ∈ F, S ∈ Σ}, where Σ is the set of measurable sets according to the distribution of the Z's. The Vapnik-Chervonenkis dimension of F_M is the same as that of F, that is, p + 1. To see this, we notice that the standard derivation for the hyperplane case involves shattering the standard basis B in R^p. With probability one, no sample point will equal a standard basis vector, so there is an ϵ′ > 0 for which we can create a set s = ∪_{x∈B} B_{ϵ′}(x) such that s ∈ Σ and no sample point is in s. Considering the functions {f_ν} in F used to shatter B and using s, we can use the functions {f_ν ◦ β_1 ◦ β_0 ◦ β_s} in F_M to also shatter B. So the Vapnik-Chervonenkis dimension is at least p + 1.
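The full-sample rewriting of ∆ works because f∗_M zeroes out unmatched points and rescales matched ones so that averaging over all n points recovers the average over the m matched points. A minimal numeric sketch of that rescaling, with invented sample sizes:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10
Z = rng.normal(size=(n, 2))                 # all sample points of one group
matched = np.zeros(n, dtype=bool)
matched[:4] = True                          # m = 4 matched units
m = int(matched.sum())
w = np.array([0.7, -1.2])                   # arbitrary discriminant direction
f = Z @ w                                   # f*(Z_k) = w^T Z_k
f_M = np.where(matched, (n / m) * f, 0.0)   # f*_M: scale matched by n/m, zero the rest
# the full-sample mean of f*_M equals the matched-sample mean of f*
assert np.isclose(f_M.mean(), f[matched].mean())
```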
Since the functions β_1, β_0, and β_S are either zero or a scaled identity, we do not gain any complexity, and the dimension is no larger than p + 1, so it is indeed p + 1. For the envelope function, we can choose h(x) = ⟨w_e, x⟩. The norm of w_e must be large enough to keep a p + 1 Vapnik-Chervonenkis dimension. Since the vectors used to ensure such a dimension have norm p + 1, the norm of w_e must be at least p + 1. So we can choose any large constant C > p + 1. Since we are interested in vectors of the form w = Σ^{−1}∆µ, we have ∥w∥ ≤ ∥Σ^{−1}∥_F ∥∆µ∥_2, so the user has to choose constants that bound each of these norms. Also, we must assume the covariates themselves are bounded; this ensures a finite norm for h. Finally, we have

∆ = | (1/n_1) Σ_{j=1}^{n_1} f∗_M(Z_j) − (1/n_0) Σ_{i=1}^{n_0} f∗_M(Z_i) |
  ≤ sup_{f∈F_M} | (1/n_1) Σ_{j=1}^{n_1} f(Z_j) − (1/n_0) Σ_{i=1}^{n_0} f(Z_i) |
  = γ_{F_M}(Q^0_{n_0}, Q^1_{n_1}).
Assuming ideal balance on the population probabilities, and applying Theorem 2.6.7 of Van Der Vaart and Wellner (1996) in conjunction with Theorem 3.3, yields

Pr{∆ > δ} ≤ B(δ, D, 2p).

Covering number bound for Reproducing Kernel Hilbert Spaces

We refer the reader to Wahba (1990), Berlinet and Thomas-Agnan (2011), and Steinwart and Christmann (2008) for nice overviews of reproducing kernel Hilbert spaces. Roughly speaking, a mapping k : X × X → R is said to be the reproducing kernel associated to the reproducing kernel Hilbert space H if it satisfies the following properties: (a) k(·, x) ∈ H for any x ∈ X; (b) f(x) = ⟨f, k(·, x)⟩_H for all f ∈ H and x ∈ X.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'} +page_content=' Property (b) is commonly referred to as the reproducing property.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'} +page_content=' 19 To apply Theorem 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'} +page_content='3 to the reproducing kernel case, we will need to directly bound the covering number based on arguments different from Vapnik-Chervonenkis theory.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'} +page_content=' Define the space Hm q (Rp) = {f ∈ Lq(Rp) | Djf ∈ Lq(Rp) ∀j ∈ {1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'} +page_content=' , m};' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'} +page_content=' ∥f∥q < ∞}, where ∥f∥q = � 0≤|α|≤s ∥Dαf∥Lq and Dα denotes partial derivatives in the sense of distributions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/b9AyT4oBgHgl3EQf-PqC/content/2301.00889v1.pdf'} +page_content=' Then as a consequence of Theorem 1 of Nickl and P¨otscher (2007), if m − q/p > 0, then N(ϵ, H, ∥ · ∥q) ≤ b1ϵ−q, while if m − q/p < 0, N(ϵ, H, ∥ · ∥q) ≤ b2ϵ−p/m, Based on this result, Theorem 3.' 
Based on this result, Theorem 3.3 can then be applied to prove a convergence rate under Ideal Balance. Note that this does not cover the Gaussian kernel case: the Gaussian kernel is infinitely differentiable, so the space H^m_q(R^p) does not apply. For the reader interested in the Gaussian case, we refer them to the recent paper by Steinwart and Fischer (2020).

References

Abadie, A. and G. W. Imbens (2006). Large sample properties of matching estimators for average treatment effects. Econometrica 74(1), 235–267.
Abadie, A. and G. W. Imbens (2011). Bias-corrected matching estimators for average treatment effects. Journal of Business & Economic Statistics 29(1), 1–11.
Abadie, A. and G. W. Imbens (2016). Matching on the estimated propensity score. Econometrica 84(2), 781–807.
Baudat, G. and F. Anouar (2000). Generalized discriminant analysis using a kernel approach. Neural Computation 12(10), 2385–2404.
Berlinet, A. and C. Thomas-Agnan (2011). Reproducing Kernel Hilbert Spaces in Probability and Statistics. Springer Science & Business Media.
Chan, K. C. G., S. C. P. Yam, and Z. Zhang (2016). Globally efficient non-parametric inference of average treatment effects by empirical balancing calibration weighting. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 78(3), 673–700.
Chervonenkis, A. and V. Vapnik (1971). Uniform convergence of the frequencies of occurrence of events to their probabilities (uniform convergence of frequencies of events in independent tests sequence to probabilities of occurrence). Teoriia Veroiatnostei i Ee Primeneniia 16, 264–279.
Hainmueller, J. (2012). Entropy balancing for causal effects: A multivariate reweighting method to produce balanced samples in observational studies. Political Analysis 20(1), 25–46.
Hansen, B. B. (2008). The prognostic analogue of the propensity score. Biometrika 95(2), 481–488.
Hazlett, C. (2016). Kernel balancing: A flexible non-parametric weighting procedure for estimating causal effects.
Ho, D. E., K. Imai, G. King, and E. A. Stuart (2007). Matching as nonparametric preprocessing for reducing model dependence in parametric causal inference. Political Analysis 15(3), 199–236.
Holland, P. W. (1986). Statistics and causal inference. Journal of the American Statistical Association 81(396), 945–960.
Iacus, S. M., G. King, and G. Porro (2011). Multivariate matching methods that are monotonic imbalance bounding. Journal of the American Statistical Association 106(493), 345–361.
Imai, K. and M. Ratkovic (2014). Covariate balancing propensity score. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 76(1), 243–263.
Imbens, G. W. and D. B. Rubin (2015). Causal Inference in Statistics, Social, and Biomedical Sciences. Cambridge University Press.
Kallus, N. (2020). Generalized optimal matching methods for causal inference. Journal of Machine Learning Research 21(62), 1–54.
Kosorok, M. R. (2007). Introduction to Empirical Processes and Semiparametric Inference. Springer Science & Business Media.
Mohri, M., A. Rostamizadeh, and A. Talwalkar (2018). Foundations of Machine Learning. MIT Press.
Neyman, J. (1923). Sur les applications de la théorie des probabilités aux expériences agricoles: Essai des principes. Roczniki Nauk Rolniczych 10, 1–51.
Nickl, R. and B. M. Pötscher (2007). Bracketing metric entropy rates and empirical central limit theorems for function classes of Besov- and Sobolev-type. Journal of Theoretical Probability 20(2), 177–199.
Rosenbaum, P. R. and D. B. Rubin (1983). The central role of the propensity score in observational studies for causal effects. Biometrika 70(1), 41–55.
Rubin, D. B. (1974). Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology 66(5), 688.
Rubin, D. B. (1976). Multivariate matching methods that are equal percent bias reducing, I: Some examples. Biometrics, 109–120.
Rubin, D. B., E. A. Stuart, et al. (2006). Affinely invariant matching methods with discriminant mixtures of proportional ellipsoidally symmetric distributions. The Annals of Statistics 34(4), 1814–1826.
Rubin, D. B. and N. Thomas (1992). Affinely invariant matching methods with ellipsoidal distributions. The Annals of Statistics, 1079–1093.
Salimi, B. and D. Suciu (2016). ZaliQL: A SQL-based framework for drawing causal inference from big data. arXiv preprint arXiv:1609.03540.
Sekhon, J. S. (2008). Multivariate and propensity score matching software with automated balance optimization: The Matching package for R. Journal of Statistical Software, Forthcoming.
Steinwart, I. and A. Christmann (2008). Support Vector Machines. Springer Science & Business Media.
Steinwart, I. and S. Fischer (2020). A closer look at covering number bounds for Gaussian kernels. Journal of Complexity, 101513.
Stuart, E. A. (2010). Matching methods for causal inference: A review and a look forward. Statistical Science 25(1), 1.
Van Der Vaart, A. W. and J. A. Wellner (1996). Weak convergence. In Weak Convergence and Empirical Processes, pp. 16–28. Springer.
Wahba, G. (1990). Spline Models for Observational Data. Society for Industrial and Applied Mathematics.
Wang, T., M. Morucci, M. U. Awan, Y. Liu, S. Roy, C. Rudin, and A. Volfovsky (2017). FLAME: A fast large-scale almost matching exactly approach to causal inference.
Wang, Y. and J. R. Zubizarreta (2019). Minimal dispersion approximately balancing weights: Asymptotic properties and practical considerations. Biometrika.
Wong, R. K. and K. C. G. Chan (2018). Kernel-based covariate functional balancing for observational studies. Biometrika 105(1), 199–213.
Zhu, Y., J. S. Savage, and D. Ghosh (2018). A kernel-based metric for balance assessment. Journal of Causal Inference 6(2).
Zolotarev, V. M. (1984). Probability metrics. Theory of Probability & Its Applications 28(2), 278–302.
Zubizarreta, J. R. (2015). Stable weights that balance covariates for estimation with incomplete outcome data. Journal of the American Statistical Association 110(511), 910–922.
diff --git a/bdE4T4oBgHgl3EQfPQyn/content/tmp_files/2301.04972v1.pdf.txt b/bdE4T4oBgHgl3EQfPQyn/content/tmp_files/2301.04972v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..fbe496b9039f101ae25d88840a350d73c9fef218
--- /dev/null
+++ b/bdE4T4oBgHgl3EQfPQyn/content/tmp_files/2301.04972v1.pdf.txt
@@ -0,0 +1,1474 @@
Prepared for submission to JHEP
CERN-TH-2023-005

Isospin Mass Differences of the B, D and K

Matthew Rowe,¹ Roman Zwicky¹,²
¹Higgs Centre for Theoretical Physics, School of Physics and Astronomy, University of Edinburgh, Edinburgh EH9 3JZ, Scotland
²Theoretical Physics Department, CERN, Esplanade des Particules 1, Geneva CH-1211, Switzerland
E-mail: m.j.rowe@sms.ed.ac.uk, roman.zwicky@ed.ac.uk

Abstract: We compute the electromagnetic mass difference for the B-, D- and K-mesons using QCD sum rules with double dispersion relations. For the B- and D-mesons we also compute the linear quark mass correction, whereas for the K the standard soft theorems prove more powerful. The mass differences, which have not previously been computed via a double dispersion relation, are fully consistent with experiment, albeit with large uncertainties.
Contents

1 Introduction
2 Electromagnetic Mass Difference ∆m_H|_QED from QCD Sum Rules
2.1 B- and D-meson with Pseudoscalar Operators
2.1.1 Numerics
2.2 K-meson with Axial Operators
3 Linear Quark Mass Correction ∆m_H|_{m_q}
3.1 QCD Sum Rule Computation of ⟨H̄|q̄q|H̄⟩ for H = B, D
3.1.1 Numerics
3.2 SU(3)_F estimates of ⟨H̄|q̄q|H̄⟩ for H = B, D
3.3 Soft Goldstone estimate of ⟨L|q̄q|L⟩ for L = π, K
4 Final Overview and Conclusions
A Variants of Quark-Hadron Duality
A.1 Weight function ω(s) = s
A.2 Weight function ω(s) = 1/(s − η)
B Numerical Input
B.1 Decay constants f_B, f_D and f_K
C Self Energies and Condensates for ∆m_H|_QED
C.1 Perturbation theory
C.2 Condensates
D Some Classic Results
D.1 Linear quark mass dependence from the Feynman-Hellmann theorem
D.2 ∆m_π|_QED from soft theorem and Weinberg sum rules

arXiv:2301.04972v1 [hep-ph] 12 Jan 2023

1 Introduction

The mass difference of charged and neutral hadrons,

    ∆m_H = m_{H⁺} − m_{H⁰} ,   H = B, D, K, π, p ,   (1.1)

is an isospin breaking effect and has intrigued particle physicists from the very beginning. In particular the proton-neutron [1] and the π⁺-π⁰ [2] mass differences have been discussed extensively. At the microscopic level ∆m_H is driven by differences in the electric charge and the mass m_q of the hadron's light valence quark q = u, d:

    ∆m_B = ∆m_B|_QED + ∆m_B|_{m_q} .   (1.2)

The sign and the size depend on the hadron in question; QED stands for quantum electrodynamics.¹,² Recent lattice Monte Carlo simulations [3, 4] have verified this to high accuracy, for light and charm mesons, by computing both the charged and the neutral mass and effectively using (1.1).
One may take a different approach and compute the two differences in (1.2) separately by using the second order perturbation theory formula (with H = B for definiteness)³

    δm_B|_QED = −iα/(2m_B(2π)³) ∫ d⁴q T^(B)_μν(q) Δ^μν(q) + O(α²) ,   (1.3)

with

    ∆m_B|_QED ≡ δm_{B⁺}|_QED − δm_{B⁰}|_QED ,   (1.4)

known in the current algebra era [7, 8]. Above, Δ^μν(q) = (1/q²)(−g^μν + (1 − ξ) q^μq^ν/q²) is the photon propagator, α = e²/(4π) the fine structure constant and T^(B)_μν(q) is the (uncontracted) forward Compton scattering tensor,

    T^(B)_μν(q) = i ∫ d⁴x e^{−iq·x} ⟨B|T j_μ(x) j_ν(0)|B⟩ ,   (1.5)

with j_α = Σ_q Q_q q̄γ_α q the electromagnetic current.

¹Strictly speaking the separation (1.2) is not well-defined, as it requires fixing a (quark mass) renormalisation scheme, e.g. [3]. In turn this is a reason for being interested in the problem as, especially light, quark masses cannot be determined to high precision without folding in QED. This shows for example in the D-meson results in comparison between [3] and [4]. For our purposes ∆m_B|_{m_q} is as defined from (1.7).
²Effects due to the weak force are of O(Λ²_QCD/m²_W) with respect to QED and are thus negligible. Similar effects are relevant in the context of neutral meson mixing, e.g. [5, 6].
³Note that in the literature the notation ∆m²_B ≡ 2m_B ∆m_B is also frequently used.

In 1963, Cottingham [9] improved this formula by parameterising it in terms of form factors and relating it to structure functions. That is, by deforming the contour q₀ → iq₀ and writing a dispersion representation, assessing the number of subtraction terms of the form factors, thus allowing him to write the contribution as an integral over Q² = −q² ≥ 0 and ν = p·q/m_B in the physical region. This opened the gate for many phenomenological studies saturating the dispersion relation by a few terms beyond the elastic one and using high energy constraints.
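As an aside, one can verify symbolically that the gauge parameter ξ of the photon propagator drops out of (1.3) once the electromagnetic current is conserved. This is a sketch, not part of the paper's computation: the transverse toy tensor T below is an assumption standing in for the full T^(B)_μν of (1.5); any tensor obeying q^μ T_μν = 0 would do.

```python
import sympy as sp

# Mostly-minus metric and a generic off-shell photon momentum q.
xi, q0, q1, q2, q3 = sp.symbols('xi q0 q1 q2 q3')
g = sp.diag(1, -1, -1, -1)          # metric (equal to its own inverse)
q = sp.Matrix([q0, q1, q2, q3])     # contravariant components q^mu
ql = g * q                          # covariant components q_mu
q_sq = (q.T * g * q)[0]             # q^2

# Photon propagator Delta^{mu nu}(q) in a general xi-gauge, as in (1.3):
Delta = sp.Matrix(4, 4, lambda m, n: (-g[m, n] + (1 - xi) * q[m] * q[n] / q_sq) / q_sq)
# Toy transverse tensor T_{mu nu} = q^2 g_{mu nu} - q_mu q_nu (q^mu T_{mu nu} = 0),
# an assumption mimicking current conservation of j_mu:
T = sp.Matrix(4, 4, lambda m, n: q_sq * g[m, n] - ql[m] * ql[n])

# T_{mu nu} Delta^{mu nu}: the xi-dependent q^mu q^nu part annihilates T.
contraction = sp.simplify(sum(T[m, n] * Delta[m, n] for m in range(4) for n in range(4)))
```

The contraction reduces to a ξ-independent constant, which is the statement that (1.3) is gauge-parameter independent for conserved currents.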
This is a formidable task, as one requires knowledge of a correlation function over the entire energy range, akin to the situation of the vacuum polarisation for the anomalous magnetic moment. Some examples are: for K, π [10, 11] using chiral perturbation theory (and large N_c), for B and D [12, 13] using heavy quark theory (and large N_c), for the proton-neutron [14] with updated fits to the structure functions, and an approach to B, D, K and π using vector meson dominance [15]. Another interesting point, not unrelated, is that (1.3) requires renormalisation [16] and it was argued that it is justified to cut off the Q²-integral. Debates about subtraction terms are ongoing, cf. [14] and the response [17].

Here we do not follow this phenomenological approach but evaluate (1.5) directly in Minkowski space using double dispersion relation sum rules and thus determine the mass differences from a unified framework (i.e. same hadronic input).⁴ To the best of our knowledge this has not been done previously with sum rules, presumably due to the subtleties of non gauge-invariant interpolating currents [19, 20]. For example, in leptonic decays this requires the introduction of a non-local interpolating operator (or an auxiliary scalar field carrying the charge to infinity) for gauge invariance and reproduction of all infrared sensitive logs [20]. However, in the case at hand this is not necessary, as verified by explicit computation, since ∆m_B is an infrared safe quantity.

An efficient and transparent way to implement the first order quark mass corrections is to make use of the Feynman-Hellmann theorem, which gives

    m²_B|_{m_q} = Σ_q m_q ⟨B|q̄q|B⟩ ,   (1.6)

as rederived in App. D.1. For the difference (1.1) this gives

    ∆m_B|_{m_q} = (m_u − m_d)/(2m_B) ⟨B|q̄q|B⟩ + O((m_u − m_d)²) .   (1.7)

The matrix element ⟨B|q̄q|B⟩ can be evaluated in the isospin degenerate limit q = u = d since we work to leading order (LO).
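As a rough numerical orientation (a sketch, not part of the paper's derivation), the size of the quark-mass effects governed by (1.7) and by the soft theorem can be previewed; every input below is an assumption, anticipating the matrix element of (3.11), PDG-like quark masses at the quoted scales and a standard value of the condensate.

```python
# All numbers are assumptions quoted for illustration only.
m_B = 5.280                        # GeV
dm_ud = -3.3e-3                    # GeV, m_u - m_d run to mu = 1 GeV (assumed)
me_B = 5.99                        # GeV, <B|qbar q|B>, central value of (3.11)

delta_mB_mq_MeV = dm_ud / (2 * m_B) * me_B * 1e3        # eq. (1.7) at LO

# Soft-theorem side: B0 = -2<qbar q>/f_pi^2 and the LO (GMOR) meson masses.
f_pi = 0.131                       # GeV
qq = -0.269**3                     # GeV^3, condensate at mu = 2 GeV (assumed)
B0 = -2 * qq / f_pi**2             # ~ 2.26 GeV
m_u, m_d, m_s = 2.16e-3, 4.67e-3, 93.4e-3   # GeV, MS-bar at 2 GeV (assumed)
m_pi, m_K = 0.13498, 0.49368                # GeV, physical masses

m_pi_LO = (B0 * (m_u + m_d)) ** 0.5         # LO accuracy: ~0.124 vs 0.135 GeV
m_K0_LO = (B0 * (m_d + m_s)) ** 0.5         # ~0.47 vs 0.498 GeV

# Kaon splitting from the quark masses (the LO chiral formula of Sec. 3.3):
delta_mK_mq_MeV = (m_u - m_d) / (m_u + m_d) * m_pi**2 / (2 * m_K) * 1e3
```

With these assumed inputs one lands near ∆m_B|_{m_q} ≈ −1.9 MeV and ∆m_K|_{m_q} ≈ −6.8 MeV, close to the values obtained later, which is the point of the exercise.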
For the B- and the D-meson we compute this matrix element, whereas for the Kaon and the pion a soft theorem, ⟨π|q̄q|π⟩ = −(2/f²_π)⟨0|q̄q|0⟩ + O(m²_π/m²_ρ) with f_π ≈ 131 MeV, due to their pseudo-Goldstone nature, proves more effective.

In principle one could compute all the ∆m_B|_{m_q}-effects with the QCD analogue of (1.3), but this would be rather inefficient; we comment further in the relevant section. Another noteworthy aspect is that we were not able to obtain stable sum rules for the pion (cf. Sec. 2.2).

⁴This function has been evaluated for the pion on the lattice, with good agreement with experiment, only very recently using the infinite volume reconstruction method [18].

The paper is organised as follows. In Sec. 2 the electromagnetic computation is presented, followed by the quark mass correction in Sec. 3. We give an overview of the results and the conclusions in Sec. 4. Comments on quark-hadron duality, the numerical input, some (extra) computations and useful classic results are collected in Apps. A, B, C and D respectively.

2 Electromagnetic Mass Difference ∆m_H|_QED from QCD Sum Rules

The electromagnetic mass difference follows from the formula quoted in (1.3) and it is our task to evaluate this. The main theoretical challenge is to incorporate the two hadrons, for which a non-perturbative method is needed. We use QCD sum rules [21] with a double dispersion relation. The first step involves the choice of an interpolating operator. For the heavy mesons a pseudoscalar current is suitable and has proven to give good results in many other contexts. For particles of light quark masses, and Goldstone particles in particular [22], pseudoscalar interpolating operators are unsuitable as they are infested by so-called direct instantons [23].⁵ We therefore discuss the heavy mesons and the K-meson separately in Secs. 2.1 and 2.2 respectively.
An important criterion in assessing the validity of our sum rules is the so-called daughter sum rule, which we consider worthwhile to present now. In the simple single dispersion relation case this criterion reads

    m²_B(s₀, M²) = ∫_cut^{s₀} e^{−s/M²} ρ(s) s ds / ∫_cut^{s₀} e^{−s/M²} ρ(s) ds ,   (2.1)

where M² is the Borel parameter, the "cut" marks the onset of physical states, ρ(s) = r_B δ(s − m²_B) + ... is the spectral density and the dots stand for states above the continuum threshold s₀. Formally, the residue r_B drops out in the ratio. In practice ρ(s) is a continuous function in partonic computations and Eq. (2.1) should be seen as a self-consistency criterion for an s₀ in the range of (m_B + 2m_π)² to (m_B + 4m_π)². If that is the case then Eq. (2.1) can be used to fix the central value of s₀.

2.1 B- and D-meson with Pseudoscalar Operators

As motivated at the beginning of the section, the default choice for heavy-light 0⁻ meson interpolating operators is

    J_B = m₊ b̄iγ₅q ,   Z_B ≡ ⟨B̄|J_B|0⟩ = m²_B f_B ,   m₊ ≡ (m_b + m_q) .   (2.2)

In determining (1.3), one of the main challenges is that the momenta of the two B-mesons are degenerate. We bypass this problem by introducing an auxiliary momentum r into one

⁵For the heavy mesons axial interpolating operators are unsuitable because the 1⁺ states are relatively low, e.g. for the J^P = 0⁻ B-meson with m_B ≈ 5.28 GeV there is a 1⁺ B₁(5721) with m_{B₁} ≈ 5.72 GeV. This is too close to the two pion threshold and even below the typical continuum threshold s₀ ≈ (6 GeV)² assumed for the pseudoscalar operators.

Figure 1. Diagrams contributing to the correlation function in (2.3), with the double line representing the b-quark. (left) Main diagram of the Q_bQ_q mixed type. (middle) b- and q-quark self energies. (right) ⟨q̄q⟩-condensate part of the b-quark self energy. There is no corresponding part for the q-quark self energy since ⟨b̄b⟩ is negligibly small.
For the mass difference only the first one is relevant, while the others are useful to obtain stable sum rules, as described in the text.

of the currents and let it flow out at one of the two interpolating operators. Concretely we start from

    Γ_{qq′}(p², p̃²) = c i³ ∫_{x,y,z,q} e^{i(p̃z − py − (q+r)x)} ⟨0|T J†_B(z) j_μ(x) j_ν(0) J_B(y)|0⟩ Δ^μν(q)|_{Q_qQ_{q′}}
                    = ∫₀^∞ ds ∫₀^∞ ds̃ ρ_Γ^{qq′}(s, s̃)/((s − p²)(s̃ − p̃²)) = Z²_B δ_{qq′}m_B/((m²_B − p²)(m²_B − p̃²)) + ... ,   (2.3)

with c ≡ −iα/(2m_B(2π)³), p̃ = p + r, the shorthands xp = x·p and ∫_{q,x} = ∫d⁴q d⁴x, and the density is given by

    (2πi)² ρ_Γ^{qq′}(s, s̃) = disc_{s,s̃}[Γ_{qq′}(s, s̃)] ,   (2.4)

the double discontinuity, with further relevant explanations at the end of the section. The quantity δ_{qq′}m_B denotes the part proportional to the Q_qQ_{q′}-charges. Of course the auxiliary momentum r has to disappear from the final result. This is achieved by the on-shell condition "p̃² = p²" and is implemented in practice by treating them equally (p-p̃ symmetry) and requiring the daughter sum rule to be satisfied reasonably well. The QCD sum rule is then given by

    δ_{qq′}m_B = (1/Z²_B) ∫_{m²₊}^{δ̄^(a)(m²₊)} ds e^{(m²_B − s)/M²} ∫_{m²₊}^{δ̄^(a)(s)} ds̃ e^{(m²_B − s̃)/M²} ρ_Γ^{qq′}(s, s̃) ,   (2.5)

where M² is the Borel parameter from the Borel transformation and δ̄^(a) is the continuum threshold

    δ̄^(a)(s) = 2^{1/a} σ₀ [1 − (s/(2^{1/a}σ₀))^a]^{1/a} ,   (2.6)

which is complicated for double dispersion sum rules [24]. Here it is implemented as in [25] but simplified, since the two hadrons are identical, implying M² → 2M̂² and s̃₀ = t̃₀ = σ₀^(a) 2^{1/a} (allowing for elimination of those parameters). The number σ₀ ≈ 35 GeV² takes on the rôle of s₀ in (2.1) and we shall use the notation s₀ ≡ σ₀ hereafter for reasons of familiarity. The parameter a is a model parameter, and the independence of the result from it is a measure of the quality of the result itself.

Let us turn to the computation. In perturbation theory there is the diagram connecting the q- to the b-quark, and the self energies. We focus on the former, as it is numerically
In perturbation theory there is the diagram connecting +the q- to the b-quark and the self energies. We focus on the former, as it is numerically +– 4 – + +dominant, and present the self energies and the condensate contribution in App. C. The +computation can be done analytically and we obtain the following compact result for the +density +ρΓbq = NcαQqQbm2 ++ +32π3mB +· +� +λ˜λ +s˜s +� +A + B +b ln +�a + b +a − b +�� +, +(2.7) +where +a = m2 +q − +1 +4 +√ +s˜s +� +s˜s + (m+m−)2� ++ +� +q ↔ b +� +, +b = 1 +2 +� +λ˜λ +s˜s , +A = m2 +− , +B = +� +Y ˜Y s˜s + 1 +2m2 +q +√ +s˜s(Y + ˜Y ) − 1 +4m2 +− +� +s + ˜s + 4mbmq + 2m2 +q +� +− 1 +4m2 ++ +√ +s˜s +� ++ +� +q ↔ b +� +, +with further abbreviations +m± = mb ± mq , +λ = λ(s, m2 +b, m2 +q) , +Y = s − m+m− +2s +, +(2.8) +λ(x, y, z) = x2 +y2 +z2 −2xy −2xz −2yz is the K¨all´en function and in the tilde quantities +˜Y and ˜λ we have s → ˜s. +A few words about the computation. We have taken the discontinuity in (2.4) using +Cutkosky rules. A crucial point is that we do not cut the photon propagator as this would +be a QED correction to the B-meson state and does not contribute to (1.3). This amends +the meaning of (2.4). +Let us turn to the usage of the auxiliary momentum r in the context of double dis- +persion sum rules. First we note that this is different to a form factor computation, e.g. +F π→π(q2) [26], where the momentum transfer naturally takes on the rˆole of this variable. +It is closer to ∆F = 2 matrix elements as there is no momentum transfer but the flavour +contractions naturally lead to a symmetric configuration (e.g. [27]) which is more straight- +forward. In fact since our procedure (2.3) artificially breaks the bq-symmetry, a and B turn +out to be non-symmetric whereas b and A remain symmetric. This has to be remedied by +the following substitution +a → 1 +2(a + a|b ↔ q) , +B → 1 +2(B + B|b ↔ q) , +(2.9) +which is apparent from the way the Cutkosky cuts work out. 
We have performed the computation in general gauge. Of course Γ_{qq′} is gauge dependent, but as stated earlier its discontinuity in the bq-quark lines is not. This is the case since the particles are put on the mass shell, and it is important that the quantity is infrared safe. Otherwise, as previously stated, one needs to introduce extra machinery [20].

2.1.1 Numerics

Our numerics have three cornerstones: the hadronic input parameters in Tab. 2, the daughter sum rule (2.1) and the choice of a mass scheme for m_b. Whereas there is nothing to say about point one, the others are in need of some explanation. We start with the B-meson case. The daughter sum rule constrains the sum rule parameters: the continuum threshold s₀ and the Borel parameter M². Additional constraints, defining the Borel window, are the convergence of the condensate expansion and keeping the B-pole term dominant versus the continuum contribution [21]. Let us turn to the question of the mass scheme, which is not independent of the second point. We consider the pole, the kinetic and the MS-bar scheme. In the pole scheme the b, c-quark self energy contributions (perturbative and condensate, diagrams 2 and 4 in Fig. 1) vanish and the sum rules are not stable, that is, there is no Borel window, and we therefore discard it. For the MS-bar scheme the b-quark self energies are dominant, with the b-q contribution comparable to the condensates. Since these contributions cancel in the observable ∆m, this scheme is not ideal either and we therefore drop it. Hence we are left with the kinetic scheme for the b-quark, which shows good properties, as it did for the B → γ form factor [28] and the g_{BB*γ}-couplings [25]. For the c-quark the self energies are not dominant and we use the MS-bar scheme, also because the kinetic scheme has proven unsuitable for g_{DD*γ} [25]. As stated above, the daughter sum rule (2.1) is used to fix s₀.
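The self-consistency logic of (2.1) can be illustrated with a toy model: take a narrow resonance at s = m_B², cut the continuum away above s₀, and check that the Borel-weighted ratio reproduces m_B. All numbers below are illustrative (M² = 2M̂² with the B-meson values quoted later in (2.11)); this is a sketch, not the paper's spectral density.

```python
import numpy as np

# Toy daughter sum rule (2.1): narrow Gaussian "pole" at s = m_B^2.
m_B = 5.28        # GeV
M2 = 5.2          # GeV^2, Borel parameter (2 * Mhat^2, toy value)
s0 = 35.2         # GeV^2, continuum threshold
width = 0.3       # GeV^2, artificial smearing of the pole

s = np.linspace(20.0, s0, 20001)
rho = np.exp(-(s - m_B**2) ** 2 / (2 * width**2))   # residue r_B cancels in the ratio

w = np.exp(-s / M2) * rho
m_B_sr = (np.sum(w * s) / np.sum(w)) ** 0.5   # sqrt of the r.h.s. of (2.1)
```

The recovered mass agrees with the input to a few per mille; in the real partonic computation ρ(s) is continuous, and demanding the analogous agreement is what fixes s₀.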
For that purpose it is instructive to define the normalised ratio

    U(s₀, M²) ≡ (1/m²_B) · m²_B(s₀, M²) ,   (2.10)

of the sum rule value over the experimental one, which has to be close to unity for self-consistency of the approach. This leads to

    {s₀, M̂²}_B = {35.2(1.0), 2.6(0.5)} GeV² ,   {s₀, M̂²}_D = {5.5(1), 1.0(0.25)} GeV² ,   (2.11)

for which

    U(s₀ ± 1 GeV², M²)_{∆m_B|_QED} = 1 ± 0.01 ,   U(s₀ ± 0.1 GeV², M²)_{∆m_D|_QED} = 1 ± 0.01 .

Using the input parameters in Tab. 2 (with m_b^kin(1 GeV), m̄_c(m̄_c)) and the f_{B,D} sum rule to LO (cf. App. B.1) for the Z_B-factor we get

    ∆m_B|_QED = +1.58^{+0.26}_{−0.23} MeV ,   ∆m_D|_QED = +2.25^{+0.89}_{−0.52} MeV ,   (2.12)

where the error is obtained by adding the individual errors in quadrature. The dominant error is due to the heavy quark mass m_{b(c)} (50-60%). The Borel mass M² and the duality parameter a each contribute a 20-25% uncertainty. The error in a is quantified by taking the standard deviation of the results with a ∈ [1/2, 1, 2, ∞]. The errors for the D-meson are larger, reflecting the generically inferior quality of the sum rule.

2.2 K-meson with Axial Operators

As explained at the beginning of this section, pseudo-Goldstone bosons cannot be interpolated by pseudoscalar operators and one therefore resorts to axial ones:

    A_μ = q̄γ_μγ₅ s ,   ⟨0|A_μ|K(p)⟩ = ip_μ f_K .   (2.13)

The correlation function corresponding to (2.3) assumes the form

    Γ^{αβ}_{qq′}(p², p̃²) = c i³ ∫_q ∫_{x,y,z} e^{i(p̃z − py − (q+r)x)} ⟨0|T A^α(z) j_μ(x) j_ν(0) A^{†β}(y)|0⟩ Δ^μν(q)|_{Q_qQ_{q′}}
Computing +the double discontinuity of Γ(2) +qq′ is laborious as there are open Lorentz indices. One may +though obtain the same information from a linear combination of (2.3) and (2.14) with +contracted indices. It follows from Ward identities that (d = 4) +Γ(2)(s, s) = +1 +s2(1 − d) (sΓα +α(s, s) − d Γ(s, s))) , +(2.16) +where we omitted the qq′-subscript for brevity and have set s = ˜s. The generalisation to +the s ̸= ˜s is in principle ambiguous but fortunately the differences are not that sizeable. +Concretely we use +Γ(2)(s, ˜s) = +1 +s˜s(1 − d) +�1 +2(s + ˜s)Γα +α(s, ˜s) − d Γ(s, ˜s)) +� +, +(2.17) +and the analogous expression of (2.7) is lengthy for the Kaon and is given in a Mathematica +ancillary notebook attached to the arXiv version. +Changing the prescription (2.17) by 1 +2(s + ˜s) → +√ +s˜s results in a 15%-change which +is sizeable but not extremely large and well within the error. In addition we use a weight +function 1/s˜s as described in App. A.2 as otherwise the daughter sum rule is off by at least +a factor of two which is very large in view of how well it works in all other cases. +Proceeding as before we obtain the following values +{s0, ˆ +M2}K = {0.7(1), 0.95(0.5)} GeV2 , +U(s0 ± 0.1, M2)∆mK|QED = 1.00 ± 0.10 , (2.18) +for the sum rule parameters and the daughter sum rule (2.10). Using the input parameters +in Tab. 2, the fK sum rule to LO (cf. App. B.1) and (2.18) we get +∆mK|QED = +1.85+0.42 +−0.66 MeV . +(2.19) +Scale dependent quantities are evaluated at µ = 2 GeV. The uncertainty again comes from +adding individual errors in quadrature. The dominant uncertainty (75%) comes from the +ms mass with the remaining uncertainty due to the the duality parameter a in (2.6). +As stated in the introduction, the pion proved more difficult. 
That is, we were not able to find stable sum rules satisfying the daughter sum rule for reasonable values of the continuum threshold.⁶ We believe that this is due to the small pion mass m_π, which is considerably below the other hadronic masses. Conversely, the Kaon, while being a pseudo-Goldstone, has a mass much closer to the other hadrons (due to m_s being close to Λ_QCD).

⁶The extra disconnected diagram for the π⁰, e.g. [18], is small since the γ₅ generates a Levi-Civita tensor which enforces two extra loops. This is reflected in the smallness of the lattice result [18] and also by the fact that the LO chiral Lagrangian does not contribute to the π⁰ (cf. App. D.2).

3 Linear Quark Mass Correction ∆m_H|_{m_q}

As stated in the introduction (and cf. App. D.1), the O(m_q)-corrections are governed by ⟨H|q̄q|H⟩ (1.7). For the B- and D-mesons we compute this matrix element from QCD sum rules in Sec. 3.1, using similar techniques as for the QED correction, and for light mesons we resort to soft theorems, cf. Sec. 3.3, as the corresponding sum rules are inferior.

3.1 QCD Sum Rule Computation of ⟨H̄|q̄q|H̄⟩ for H = B, D

In order to anticipate the hierarchy of diagrams shown in Fig. 2 it is worthwhile to contemplate the heavy quark behaviour. The matrix element scales like (H = B for definiteness)

    ⟨B|q̄q|B⟩ = O(m_b) ,   (3.1)

for relativistically normalised states, ⟨B(p)|B(q)⟩ = 2E_B(p⃗)(2π)³δ⁽³⁾(p⃗ − q⃗), due to the factor E_B = O(m_b). On the one hand, the operator q̄q demands a chirality flip in perturbation theory and this cannot come from the m_b-mass, since the latter is entirely kinematic, as we have just established. On the other hand, the condensate contribution ⟨q̄q⟩ itself does not require this flip and is therefore unsuppressed and numerically leading.

Figure 2. Diagrams contributing to the matrix element ⟨B|q̄q|B⟩. They are analogous to the ones in Fig. 1 but the square blob denotes the insertion of the q̄q-operator.
Perturbation theory is minimal and the quark condensate diagram is the main contribution. The mixed condensate diagrams ⟨q̄Gq⟩ are mainly useful to stabilise the sum rule.

To do the computation we start from the following correlation function

    Π(p², p̃², r) = i² ∫_{y,z} e^{i(p̃z − py − xr)} ⟨0|T J†_B(z) (q̄q)(x) J_B(y)|0⟩ ,   (3.2)

where J_B has been defined in (2.2) and the auxiliary momentum r takes on the same rôle as before. The double dispersion relation of the correlation function reads

    Π(p², p̃², r) = ∫ ds ds̃ ρ_Π(s, s̃)/((s − p² − i0)(s̃ − p̃² − i0)) = Z²_B ⟨B̄|q̄q|B̄⟩/((m²_B − p²)(m²_B − p̃²)) + ... ,   (3.3)

with (2πi)² ρ_Π(s, s̃) = disc_{s,s̃}[Π(s, s̃)], and the matrix element is then given by

    ⟨B̄|q̄q|B̄⟩ = (1/Z²_B) ∫_{m²₊}^{δ̄^(a)(m²₊)} ds e^{(m²_B − s)/M²} ∫_{m²₊}^{δ̄^(a)(s)} ds̃ e^{(m²_B − s̃)/M²} ρ_Π(s, s̃) ,   (3.4)

with δ̄^(a) defined in (2.6). The three contributions depicted in Fig. 2 are described below.

• Perturbation theory is given by

    ρ_Π(s, s̃) = (m²₊ N_c m_q)/(2π²) · (s − (m_b − m_q)²)/(s + m²_q − m²_b) · λ^{1/2} δ(s̃ − s) ,   (3.5)

with the anticipated O(m_q)-suppression. This term is negligible.

• The ⟨q̄q⟩ condensate evaluates to

    ⟨B̄|q̄q|B̄⟩ = −(4m²₊ m²_b ⟨q̄q⟩/Z²_B) e^{2(m²_B − m²_b)/M²} ,   (3.6)

which is not suppressed by O(m_q) and thus dominant.

• The mixed condensate yields

    ⟨B̄|q̄q|B̄⟩ = −(m²₊ ⟨q̄σg_sGq⟩/Z²_B) e^{2(m²_B − m²_b)/M²} [(1 − 3m²_b/M²) + (5/8 + 2m²_b/M² − 4m⁴_b/M⁴)] ,   (3.7)

which is not suppressed either, as it is in the same chirality representation as the quark condensate. The first and second terms in round brackets are from the third and fourth diagrams in Fig. 2.
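The O(m_q)-suppression claimed for (3.5) can be spot-checked symbolically. A sketch with the symbols of the text (the overall δ(s̃ − s) is dropped and the kinematic point is arbitrary): the density vanishes identically at m_q = 0, since the chirality flip must be supplied by the light quark mass, and it scales linearly for small m_q.

```python
import sympy as sp

# Perturbative density (3.5) without the overall delta(s~ - s).
s, mb, mq = sp.symbols('s m_b m_q', positive=True)
Nc = 3

lam = (s - (mb + mq)**2) * (s - (mb - mq)**2)        # Kallen function, factorised
rho = (mb + mq)**2 * Nc * mq / (2 * sp.pi**2) \
      * (s - (mb - mq)**2) / (s + mq**2 - mb**2) * sp.sqrt(lam)

value_at_zero = rho.subs(mq, 0)     # exactly zero: no chirality flip available

# Linear scaling in m_q, spot-checked at an illustrative point:
pt = {s: 36.0, mb: 4.55}
r1 = float(rho.subs(mq, 1e-4).subs(pt))
r2 = float(rho.subs(mq, 2e-4).subs(pt))   # doubling m_q doubles rho, to O(m_q)
```

This makes concrete why the term is negligible next to the unsuppressed condensate contributions (3.6) and (3.7).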
Its origin is the propagator 1/(r2 − m2 +q + iϵ) (we work in the ⃗r = 0 +frame) +r2 − m2 +q + iϵ = (√s − ( +√ +˜s + mq − iϵ′))(√s − ( +√ +˜s − mq + iϵ′)) , +(3.8) +which when cut gives a term of the form +√s +mq δ(s − ( +√ +˜s + mq)2). The 1/mq thus removes +the O(mq)-suppression in the numerator. Numerically perturbation is entirely negligible +and this is also the reason for not including the gluon condensate which is expected to be +further suppressed O(Λ4 +QCD/M 4) as compared to perturbation theory. +3.1.1 +Numerics +The basic procedure for the numerics is the same as described in Sec. 2.1.1. However, +the choice of scheme is not as important in this case. Any of the schemes, pole, kinetic +and MS give similar results and indicate stability. The situation is certainly clearer with +respect to the mb-mass itself as the matrix element is O(mb) (3.1) and ∆mB|mq itself is +O(m0 +b) whereas ∆mB|QED is computed from a non-local correlation function where the mb- +dependence is more difficult to track. Since the perturbative contribution is suppressed, +there is no s0 dependence (there would be at NLO in αs). Hence we can fix the Borel value +M2 to satisfy the daughter sum rule (2.10), obtaining the following sum rule parameters +{s0, ˆ +M2}B = {35.0, 4.0} GeV2 , +{s0, ˆ +M2}D = {6.0, 0.75} GeV2 , +(3.9) +and daughter sum rules +U(s0, ˆ +M2 ± 0.15 GeV)∆mB|mq = 1.00+0.03 +−0.02 , +– 9 – + +U(s0, ˆ +M2 ± 0.05 GeV)∆mD|mq = 1.00+0.20 +−0.12 . +(3.10) +Using the input parameters in Tab. 2 (with mkin +b +(1 GeV), ¯mc( ¯mc)), the fB,D sum rule to +LO (cf. App. B.1) and (3.9) we get +⟨ ¯B|¯qq| ¯B⟩µ=1 GeV = 5.99+1.99 +−1.41 GeV , +⟨ ¯D|¯qq| ¯D⟩µ= ¯mc GeV = 3.40+1.78 +−1.71 GeV , +(3.11) +for the matrix elements and +∆mB|mq = −1.88+0.49 +−0.71 MeV , +∆mD|mq = +2.68+1.48 +−1.38 MeV , +(3.12) +for the mass differences. +As this is a LO computation the errors are large, primarily coming from M2 with a +small contribution (20%) from the light quark masses. 
Note that the set value of M2 is not +independent of higher order αs corrections. For the D-meson especially, the convergence of +the sum rule is not good. This is reflected in the mixed condensate contributing a sizeable +20%-uncertainty. +3.2 +SU(3)F estimates of ⟨ ¯H|¯qq| ¯H⟩ for H = B, D +Alternatively, one may use SU(3)F flavour symmetry ⟨B|¯qq|B⟩ ≈ ⟨Bs|¯ss|Bs⟩ to estimate +⟨B|¯qq|B⟩ [12]. Following this analysis one may write (mud ≡ 1 +2(mu + md)) +(2m2 +Bs − m2 +B+ − m2 +B0) = 2(ms − mud)⟨B|¯qq|B⟩ , +(3.13) +from which +⟨B|¯qq|B⟩ ≈ m2 +Bs − m2 +B +(ms − mud) , +(3.14) +follows. Employing the input from the PDG [29] this leads to7 +∆mB|mq = −2.37+0.35 +−0.43 ± 20%SU3 MeV , +∆mD|mq = +2.81+0.51 +−0.41 ± 20%SU3 MeV . (3.16) +We have added a characteristic 20% SU(3)F -violation due to the use of the ⟨B|¯qq|B⟩ ≈ +⟨Bs|¯ss|Bs⟩. The result are well compatible with (3.12) and we shall not use them any +further. Note that in the heavy quark limit we have ∆mB|mq = −∆mD|mq since the c +and b are up and down quark types respectively. This heavy quark limit relation holds +reasonably as already observed in [12] (with slightly different input). +3.3 +Soft Goldstone estimate of ⟨L|¯qq|L⟩ for L = π, K +The matrix elements ⟨L|¯qq|L⟩ where L = π, K is a pseudo-Goldstone boson may be es- +timated using soft-pion techniques which in this case lead to the famous GMOR-relation +[31]. Concretely [32] +m2 +π+,0 = (mu + md)B0 , +m2 +K+ = (mu + ms)B0 , +m2 +K0 = (md + ms)B0 , +(3.17) +7Or taking the η → 3π analysis [30], which in this case makes a difference, results in +∆mB|mq = −2.54+0.17 +−0.18 ± 20%SU3 MeV , +∆mD|mq = +3.01+0.21 +−0.20 ± 20%SU3 MeV , +(3.15) +a more precise result. +– 10 – + +which are to first order in the quark masses, with no QED corrections and the constant is +B0 = − 2⟨¯qq⟩ +f2π +≈ 2.26 GeV at µ = 2 GeV. We see that for the pions there is no difference to +linear order which is a consequence of isospin [10]. 
The pion mass splitting is a ∆I = 2 isospin effect: the relevant matrix element has two pion states, whereas the quark masses themselves are of ∆I = 1, hence it takes at least two powers of the quark mass difference. Fortunately, the latter follows in a straightforward manner from chiral perturbation theory and one obtains at LO

∆mK|m_q = (m_u − m_d)/(m_s − m_ud) · (m²_K − m²_π)/(2m_K) = (m_u − m_d)/(2m_ud) · m²_π/(2m_K) = −6.74^{+0.98}_{-1.21} MeV ,
∆mπ|m_q = (1/16) · (m_d − m_u)/(m_s − m_ud) · (m_d − m_u)/m_ud · m_π = +0.16^{+0.06}_{-0.05} MeV ,  (3.18)

using the values from the PDG [29]. As expected, the pion contribution is rather small, being second order in the quark mass difference. It is noteworthy that one obtains ∆mK|m_q ≈ −5.7 MeV when using (3.17) directly, which can be seen as an SU(3)F correction that is well covered by the quoted uncertainty.

4 Final Overview and Conclusions

In this paper we have computed the mass differences of the charged and neutral B-, D- and K-mesons. The results, which originate from electromagnetic and quark mass effects, are summarised and contrasted with experimental values in Tab. 1. The electromagnetic contribution is computed from the second order formula (1.3) in Sec. 2 and may be regarded as the core part of this paper. ∆mπ|QED is taken from a soft-pion theorem (cf. App. D.2) for completeness and comparison. Quark mass effects are obtained from the Feynman-Hellman formula (1.7); its corresponding matrix element is computed in Sec. 3.1 for the B and the D respectively, whereas for the K and the π a soft theorem turns out to be more reliable.

The results obtained are consistent with the current experimental values. The uncertainties are above 20%, and indeed more cannot be expected from a double dispersion sum rule at leading order in the strong coupling constant. Experimental uncertainties are one or two orders of magnitude lower.

The values in Tab.
1 deserve some comments, as they are not easily guessed by rules of thumb by a practitioner in non-perturbative QCD. The parametric estimate ∆mH|QED = c Q^eff_H (α/π) Λ_QCD, with Λ_QCD = 200 MeV and Q^eff_D = 2Q^eff_{B,K} = 2/3, leads to c ≈ 10-20, which is a rather large number. To put this into perspective, one should keep in mind that these kinds of estimates are not straightforward, as the mass difference is obtained from a non-local (long distance) correlation function (1.3). The scale for the quark mass effect is of course set by m_u − m_d ≈ 2.5 MeV, and its sign depends on whether the non-q = u, d quark is of the up (charm) or down (beauty, strange) type. The cancellation, by almost an order of magnitude, between the electromagnetic and the quark mass contributions for the B-meson is remarkable, leading to an inflated uncertainty in ∆mB.

The main aim of this paper was to show that it is possible to understand the isospin mass differences from QCD sum rules, that is, to obtain values compatible with experiment.

H  |  ∆mH|QED          |  ∆mH|mq            |  ∆mH             |  ∆mH|PDG [29]
B  |  +1.58(24) MeV    |  −1.88(60) MeV (a) |  −0.30(65) MeV   |  −0.32(5) MeV
D  |  +2.25(70) MeV    |  +2.7(1.4) MeV (a) |  +4.9(1.6) MeV   |  +4.822(15) MeV
K  |  +1.85(54) MeV    |  −6.7(1.1) MeV (b) |  −4.9(1.2) MeV   |  −3.934(20) MeV
π  |  +4.8(1.2) MeV (c)|  +0.16(5) MeV (b)  |  +5.0(1.2) MeV   |  +4.5936(5) MeV

Table 1. Our values of ∆mH due to the electromagnetic mass difference and the quark masses, compared to the PDG values. The entries marked with (a) are obtained from the ⟨H|q̄q|H⟩ matrix element in conjunction with the Feynman-Hellman theorem (valid to LO in m_q). The values marked (b) and (c) should not be regarded as predictions of this work: (b) is derived from the soft theorem for (pseudo-)Goldstone bosons (cf. Sec. 3.3) and (c) results from the soft theorem in conjunction with the Weinberg sum rules (cf. App. D.2). It is noteworthy that ∆mπ|m_q = O((m_u − m_d)²), which explains its smallness.
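The K and π rows of Tab. 1 follow from the LO formulas (3.18), which can be evaluated directly from PDG-style quark-mass input. A small Python sketch (our own evaluation, ignoring the correlations behind the quoted errors):

```python
# LO chiral perturbation theory splittings, Eq. (3.18). Inputs in GeV at 2 GeV.
ms, mud, r = 93.4e-3, 3.45e-3, 0.474        # r = mu/md
md = 2 * mud / (1 + r)
mu = r * md
mK, mpi = 0.496, 0.137

dmK = (mu - md) / (2 * mud) * mpi**2 / (2 * mK) * 1e3        # MeV
dmpi = (md - mu)**2 / (16 * (ms - mud) * mud) * mpi * 1e3    # MeV
print(round(dmK, 2), round(dmpi, 2))        # compare -6.74 and +0.16 in (3.18)
```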
+For comparison some lattice values ∆mD = 5.47(53) MeV and ∆mK = −4.07(15)(15) MeV [4] and +∆mD = 4.68(10)(13) MeV [3] which are of course more precise as the lattice is suited for mass +determination, even in the presence of QED, and due to the full inclusion of QCD. +The sum rule computation could be improved by including radiative corrections in the +strong coupling constant which would be a formidable task. Perhaps more interestingly, +the formalism developed in this paper could be applied to baryons to obtain the proton- +neutron mass difference for instance. +Acknowledgments +RZ is supported by a CERN associateship and an STFC Consolidated Grant, ST/P0000630/1. +We are grateful to Michele Della Morte, Antonin Portelli and Max Hanson for informative +comments on the lattice literature. +A +Variants of Quark-Hadron Duality +In this appendix we elaborate on variations of quark-hadron duality. This is best explained +by example. Consider the axial correlator in connection with the K +Παβ = i +� +d4xeipx⟨0|TA† +α(x)Aβ(0)|0⟩ = pαpβΠ(p2) + gαβ ˆΠ(p2) , +(A.1) +with Aβ defined in (2.13). The Kaon appears in the first structure +Π(p2) = +f2 +K +m2 +K − p2 + . . . , +(A.2) +where the dots stand for higher states as usual. QCD sum rules consists of two steps. +Firstly the observation that +Π(p2) ≈ Π(p2)pQCD , +(A.3) +for some p2 outside the physical region (could be p2 < 0), where pQCD stands for per- +turbative QCD with OPE improvements. In a second step one rewrites Eq. (A.3) as a +– 12 – + +dispersion relation followed by a Borel transform under which (s − p2)−1 → exp +� +−s/M2� +(M2 is the Borel parameter) which results in +� ∞ +0 +e−s/M2ρ(s) ≈ +� ∞ +0 +e−s/M2ρpQCD(s) , +(A.4) +with ρ(s) = +1 +2πidiscsΠ(s) = f2 +Kδ(s − m2 +K) + . . . and the pQCD part is defined analogously. 
+The one assumption is then that this integral can be broken up as follows +� s0 +0 +e−s/M2ρ(s) ≈ +� s0 +0 +e−s/M2ρpQCD(s) , +(A.5) +and (A.5) is sometimes referred to as semi-global quark hadron duality [33]. One way to +determine s0 is to impose the daughter sum rule (2.1) and then for consistency with the +duality assumption s0 ought to be somewhere between (mK + 2mπ)2 and (mK + 4mπ)2. +We want to briefly contemplate for which types of weight functions ω(s) (A.5) +� s0 +0 +e−s/M2ρ(s)ω(s) ≈ +� s0 +0 +e−s/M2ρpQCD(s)ω(s) , +(A.6) +with corresponding (2.1) +m2 +B = +� s0 +cut +e−s/M2ρpQCD(s)ω(s) s ds/( +� s0 +cut +e−s/M2ρpQCD(s)ω(s)ds) , +(A.7) +can hold. The crucial point is to be able to justify the analogue of Eq. (A.3). +A.1 +Weight function ω(s) = s +We might start by rewriting the pαpβ-part in (A.1) as follows +pαpβΠ(p2) = pαpβ +p2 (p2Π(p2)) . +(A.8) +For the pQCD part one may directly write ρpQCD(s) → sρpQCD(s) since p2 does not lead +to new singularities. Using (A.2), the QCD part can be written as +(p2Π(p2)) = p2 +f2 +K +m2 +K − p2 + · · · = −f2 +K + m2 +K +f2 +K +m2 +K − p2 + . . . , +(A.9) +where −f2 +K is a constant that will disappear under Borel transformation and thus ρ(s) → +sρ(s) works the very same way. The analogue of (A.3) can be justified in this case by re- +placing A† +α(x) → −∂2A† +α(x) (A.1).8 Weight functions of polynomials are generally referred +to as moments and are familiar to the community e.g. moments in b → cℓν for example +[34]. It is quite clear that one can not take arbitrarily high powers of moments as then +duality will be challenged since smoothness is lost. +8In our case this is not trivial as A† +α is not QED gauge invariant but it can still be used at LO. In the +general case this requires more thought. 
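The daughter sum rule (A.7) is easy to illustrate numerically: for a spectral density dominated by a single narrow resonance, the exponentially weighted moment ratio returns the resonance mass squared, largely independently of the Borel parameter. A toy Python sketch (the narrow Gaussian standing in for f²δ(s − m²) is our own illustration, not the paper's ρ^pQCD):

```python
import math

def daughter_ratio(rho, s_max, M2, n=20000):
    # int ds s e^{-s/M2} rho(s) / int ds e^{-s/M2} rho(s), midpoint rule on [0, s_max]
    h = s_max / n
    num = den = 0.0
    for i in range(n):
        s = (i + 0.5) * h
        w = math.exp(-s / M2) * rho(s)
        num += s * w
        den += w
    return num / den

mK2 = 0.496**2                 # GeV^2
width = 0.005                  # GeV^2; narrow Gaussian mimicking f_K^2 delta(s - m_K^2)
rho = lambda s: math.exp(-((s - mK2) ** 2) / (2 * width**2))
print(daughter_ratio(rho, 1.0, M2=1.5))   # approximately m_K^2 = 0.246 GeV^2
```

Adding a continuum contribution above s₀ shifts the ratio upwards, which is precisely what tuning {s₀, M²} against (A.7) controls.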
A.2 Weight function ω(s) = 1/(s − η)

Choosing a weight function

ω(s) = 1/(s − η) ,  (A.10)

is equivalent to working with a subtracted dispersion relation of the form

(Π(p²) − Π(η))/(p² − η) = ∫ ds ρ(s)/((s − p²)(s − η)) + c ,  (A.11)

where c = −∫ ds ρ_A(s)/(s(s − η)) + Π′(η) is a subtraction constant such that the limit p² → 0 comes out correctly. The constant c is, however, not important in the end, as it vanishes under Borel transformation. The question of whether one can use (A.10) then turns into the question of whether the left hand side can be computed reliably.

In our application to Kaons we have chosen η = 0, which is close to, but still below, the Kaon resonance. We have checked that for the f_K sum rule with s₀ = 0.7 GeV² the agreement is reasonable, and this serves at least as a partial justification of the procedure in Sec. 2.2.

B Numerical Input

The numerical QCD input is summarised in Tab. 2, and below we give the numerical values of the decay constants from the sum rules, which are the effective LSZ factors.

B.1 Decay constants fB, fD and fK

The extraction of both the QED mass shifts and the linear quark mass corrections requires values for the decay constants f_B, f_D and f_K. Note that, for consistency with the rest of this paper, these are evaluated at LO in QCD. The LO expressions for the pseudoscalar (B, D) and axial (K) correlators are well known (e.g. [38, 39]). The following values

f_B = 0.157 GeV ,  {s₀, M²} = {33.5, 6.0} GeV² ,
f_D = 0.158 GeV ,  {s₀, M²} = {5.7, 2.0} GeV² ,
f_K = 0.147 GeV ,  {s₀, M²} = {1.1, 1.5} GeV² ,  (B.1)

are obtained.

C Self Energies and Condensates for ∆mH|QED

In this appendix we present some extra computations: the self energies and condensate contributions to ∆mB|QED. These are important for stabilising the sum rules but do not affect the actual value of ∆mB|QED per se. This is the case since graphs proportional to Q²_b are cancelled in the mass difference.
The only non-zero graph contributing to the mass +shift is the q-q self energy, but it is numerically negligible. We wish to note that in all these +graphs explicit gauge independence has been verified to hold after the double-cut is taken. +– 14 – + +JP = 0− Meson masses [29] +mB +mBs +mD +mDs +mK +mπ +5.280 GeV +5.367 GeV +1.867 GeV +1.968 GeV +0.496 GeV +0.137 GeV +JP = 0− Mass Differences [29] +∆mB +∆mD +∆mK +∆mπ +−0.32(5) MeV ++4.822(15) MeV +−3.934(20) MeV ++4.5936(5) MeV +Quark masses [29] +¯mb(mb) +¯mc(mc) +mpole +b +mpole +c +mkin +b +|1GeV +mkin +c +|1GeV +4.18+0.03 +−0.02 GeV +1.27(2) GeV +4.78(6) GeV +1.67(7) GeV +4.53(6) GeV +1.13(5) +¯ms|2GeV +¯md|2GeV +¯mu|2GeV +¯mud|2GeV +¯mu +¯md +¯ms +¯mud +93.4+8.6 +−3.4 MeV +4.67+0.48 +−0.17 MeV +2.16+0.49 +−0.26 MeV +3.45+0.35 +−0.15 MeV +0.474+0.056 +−0.074 +27.33+0.67 +−0.77 +Condensates +⟨¯qq⟩|2GeV [35] +⟨¯ss⟩|2GeV [36] +m2 +0 [37] +⟨0| α +πG2|0⟩ [21] +−(269(2) MeV)3 +1.08(16) ⟨¯qq⟩ +0.8(2) GeV2 +0.012(4) GeV4 +Table 2. Summary of input parameters. Note as inputs into the sum rules we use mH = mH−, +as which has a completely negligible impact. The quantity mud ≡ 1 +2(mu + md) is the light quark +average. The mixed condensate is parameterised as ⟨¯qσsggGq⟩ = m2 +0⟨¯qq⟩ as is standard in the +literature. +C.1 +Perturbation theory +The perturbative b-b self energy graph, after mass renormalisation, takes on the form +ρΓbb(s, ˜s) = Ncm2 ++Q2 +bα +32π3mB +· λ +1 +2 · +s − m2 +− +s + m+m− +fR(m2 +b)δ(˜s − s) , +(C.1) +with the renormalised fR9 +fR(m2) = f(m2) + 32π2m2 +e2 +δZm = +� +� +� +� +� +� +� +� +� +� +� +2m2 +� +4 + 3 ln µ2 +m2 +� +, +MS +0, +Pole +2m2 +� +16µ +3m + 2µ2 +m2 +� +, +Kinetic +(C.2) +f(m2) = 4m2B0(m2, 0, m2) + (d − 2)A0(m2) . +(C.3) +The functions A0 and B0 are the standard Passarino-Veltman functions with (FeynCalc) +normalisation (2πµ)2ϵ � +ddk /(iπ2). 
Explicitly these are +B0(m2, 0, m2) = 1 +ˆϵ + 2 + log +� µ2 +m2 +� +, +A0(m2) = m2 +�1 +ˆϵ + 1 + log +� µ2 +m2 +�� +, +(C.4) +with 1 +ˆϵ = 1 +ϵ − γE + log 4π. The q-q graph can be obtained by replacing b → q in the result +and since it is O(m2 +q) it is negligible. +9Note that the vanishing in the pole scheme is clear, by the very definition of the scheme, since we are +on-shell after the cuts. +– 15 – + +C.2 +Condensates +The only relevant condensate graph is given in Fig. 1 (4th diagram). With mq → 0 the +density is +ρ⟨¯qq⟩ +Γbb = −m2 +bαQ2 +b +8πmB +mb⟨¯qq⟩δ(s − m2 +b)δ(˜s − m2 +b)fR(m2 +b) . +(C.5) +Light quark mass corrections come from Taylor expanding the quark fields, leading to +derivatives of δ-functions. It is thus more convenient to directly display the resulting mass +shift +∆mB|⟨¯qq⟩ = − m2 ++αQ2 +b +8πmBZ2 +B +e +2(m2 +B−m2 +b) +M2 +⟨¯qq⟩ +� +mb − mq +4 +� +1 + 4m2 +b +M2 +�� +fR(m2 +b) +(C.6) +The ⟨¯qq⟩ condensate graph where the photon connects the b and the q-quark is not of +short distance type (it leads to 1/m2 +q in the propagator) and is therefore omitted. This +is similar to the B → γ form factor although in that case the physics is covered by the +photon distribution amplitude (e.g. [28]). +D +Some Classic Results +In this appendix we summarise some classic results which are of use and referred to in the +paper. +D.1 +Linear quark mass dependence from Feynman-Hellman theorem +In order to derive the Feynman-Hellman theorem it is convenient to use states ⟨ ˆB(p)| ˆB(q)⟩ = +(2π)3δ(3)(⃗p−⃗q) normalised in a non-relativistic manner (the translation to the usual states +is | ˆB⟩ = |B⟩/√2EB). Taking the derivative of ⟨ ˆB|H| ˆB⟩ (using ∂mq⟨ ˆB(p)| ˆB(q)⟩ = 0) one +obtains +mq∂mqEB = mq⟨ ˆB|¯qq| ˆB⟩ , +(D.1) +which is equivalent to +mq∂mq2E2 +B = 2mq⟨B|¯qq|B⟩ , +(D.2) +which in turn is consistent with +m2 +B|mq = +� +q +mq⟨B|¯qq|B⟩ , +(D.3) +since the momenta are independent of the mass. This is the relation quoted in (1.6) in the +main text. 
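The Feynman-Hellman step in (D.1) is a completely general statement about eigenvalue perturbation: the derivative of an eigenvalue with respect to a parameter equals the expectation value of the derivative of the operator. It can be checked on any toy Hamiltonian; a self-contained sketch with an arbitrary 2×2 example (all numbers below are illustrative, not from the paper):

```python
import math

def eig_min(a, b, c):
    # smallest eigenvalue and normalised eigenvector of [[a, c], [c, b]]
    tr, det = a + b, a * b - c * c
    lam = tr / 2 - math.sqrt(tr * tr / 4 - det)
    vx, vy = c, lam - a               # (H - lam) v = 0 for this 2x2 form
    n = math.hypot(vx, vy)
    return lam, (vx / n, vy / n)

def H(m):
    # toy "Hamiltonian" [[1 + m, 0.7], [0.7, 3 + 2m]]; dH/dm = diag(1, 2)
    return (1.0 + m, 3.0 + 2 * m, 0.7)

m, eps = 0.4, 1e-6
lam_p, _ = eig_min(*H(m + eps))
lam_m, _ = eig_min(*H(m - eps))
dE_dm = (lam_p - lam_m) / (2 * eps)   # numerical dE/dm
_, (vx, vy) = eig_min(*H(m))
expect = vx * vx * 1 + vy * vy * 2    # <psi| dH/dm |psi>
print(dE_dm, expect)                  # the two agree (Feynman-Hellman)
```

In (D.1) the role of m is played by the light quark mass m_q, and that of ∂H/∂m by the operator q̄q.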
D.2 ∆mπ|QED from soft theorem and Weinberg sum rules

Using soft-pion techniques it was shown that [2]

∆mπ|QED = 3α/(8π m_π f²_π) ∫₀^∞ ds s ln(µ²/s) (ρ_V(s) − ρ_A(s)) + O(m²_π/m²_ρ) ,  (D.4)

where ρ_V = f²_ρ δ(s − m²_ρ) + ... is the spectral density of the vector triplet current and ρ_A is the analogous quantity for the axial case. The ln s-term originates from integrating over the photon momentum d⁴q. We refer the reader to [10] for an improved treatment using chiral perturbation theory. In fact, as is the case for all soft-pion results, Eq. (D.4) follows from the LO electromagnetic term in the Lagrangian and can therefore be systematically improved beyond the soft limit to the extent that its low energy constants (i.e. couplings) are known. Using the Weinberg sum rules [40], which are phenomenologically successful, a good estimate was obtained [2]. Taking the equations resulting from the so-called first and second Weinberg sum rules in [41],

f²_ρ = f²_{a1} + f²_π ,  m²_ρ f²_ρ = m²_{a1} f²_{a1} ,  (D.5)

(where the chiral limit m_q = 0 is assumed). Moreover, the spectral functions are truncated after the first vector meson resonances ρ and a1, which can be justified as chiral symmetry is restored at high energy. Using these expressions in (D.4) one gets

∆mπ|QED = (3α/8π) · (m²_ρ f²_ρ)/(m²_π f²_π) · m_π ln(f²_ρ/(f²_ρ − f²_π)) ≈ 4.8 MeV ,  (D.6)

for f_π = 131 MeV, m_ρ = 0.77 GeV [29] and f_ρ = 215 MeV [42]. Since the quark mass effect is small, O((m_u − m_d)²) (3.18), one has ∆mπ ≈ ∆mπ|QED, which is rather close to the experimental value ∆mπ = +4.5936(5) MeV [29]. Clearly (D.6) is a crude approximation; more detailed analyses [10, 43], including finite width effects, yield a result which is ca. +1.2 MeV larger [43]. We therefore assign an uncertainty of this amount to ∆mπ|QED in Tab. 1.

It is also worthwhile to mention two other interesting aspects in conjunction with ∆mπ|QED.
First, by using QCD inequalities it has been shown that ∆mπ|QED ≥ 0 [44], which is of course well satisfied. Second, Dashen's theorem [45] states that ∆m²_π|QED − ∆m²_K|QED = O(α m_s, α m_q ln m_q), as a result of the degeneracy in the SU(3)F limit m_s = m_d = m_u. The corrections seem rather large and are largely kinematic, due to the larger K mass in the Kaon propagator [46]. Lattice Monte Carlo simulations have settled this matter to large precision [47] (cf. [48] for a review).

References

[1] A. Zee, “The Proton - neutron mass difference problem and related topics,” Phys. Rept. 3 (1972) 127–192.

[2] T. Das, G. S. Guralnik, V. S. Mathur, F. E. Low, and J. E. Young, “Electromagnetic mass difference of pions,” Phys. Rev. Lett. 18 (1967) 759–761.

[3] S. Borsanyi et al., “Ab initio calculation of the neutron-proton mass difference,” Science 347 (2015) 1452–1455, arXiv:1406.4088 [hep-lat].

[4] D. Giusti, V. Lubicz, C. Tarantino, G. Martinelli, F. Sanfilippo, S. Simula, and N. Tantalo, “Leading isospin-breaking corrections to pion, kaon and charmed-meson masses with Twisted-Mass fermions,” Phys. Rev. D 95 no. 11, (2017) 114504, arXiv:1704.06561 [hep-lat].

[5] I. I. Bigi and A. I. Sanda, CP violation, vol. 9. Cambridge University Press, 9, 2009.

[6] G. C. Branco, L. Lavoura, and J. P. Silva, CP Violation, vol. 103. 1999.

[7] R. P. Feynman and G. Speisman, “Proton-Neutron Mass Difference,” Phys. Rev. 94 no. 2, (1954) 500.

[8] M. Cini, E. Ferrari, and R. Gatto, “Neutron-Proton Mass Difference by Dispersion Theory,” Phys. Rev. Lett. 2 no. 1, (1959) 7–9.

[9] W. N. Cottingham, “The neutron proton mass difference and electron scattering experiments,” Annals Phys. 25 (1963) 424–432.

[10] J. F. Donoghue and A. F. Perez, “The Electromagnetic mass differences of pions and kaons,” Phys. Rev. D 55 (1997) 7075–7092, arXiv:hep-ph/9611331.

[11] W. A. Bardeen, J. Bijnens, and J. M. Gerard, “Hadronic Matrix Elements and the pi+ pi0 Mass Difference,” Phys.
Rev. Lett. 62 (1989) 1343. +[12] P. Colangelo, M. Ladisa, G. Nardulli, and T. N. Pham, “Electromagnetic mass difference of +heavy mesons,” Phys. Lett. B 416 (1998) 208–215, arXiv:hep-ph/9709201. +[13] M. A. Luty and R. Sundrum, “Heavy meson electromagnetic mass differences from QCD,” +Phys. Rev. D 52 (1995) 1627–1638, arXiv:hep-ph/9502259. +[14] A. Walker-Loud, C. E. Carlson, and G. A. Miller, “The Electromagnetic Self-Energy +Contribution to Mp − Mn and the Isovector Nucleon MagneticPolarizability,” Phys. Rev. +Lett. 108 (2012) 232301, arXiv:1203.0254 [nucl-th]. +[15] T. Hambye, “A Unified treatment of mass differences for light and heavy pseudoscalars,” +Phys. Lett. B 319 (1993) 300–306. +[16] J. C. Collins, “Renormalization of the Cottingham Formula,” Nucl. Phys. B 149 (1979) +90–100. [Erratum: Nucl.Phys.B 153, 546 (1979), Erratum: Nucl.Phys.B 915, 392–393 (2017)]. +[17] J. Gasser, M. Hoferichter, H. Leutwyler, and A. Rusetsky, “Cottingham formula and nucleon +polarisabilities,” Eur. Phys. J. C 75 no. 8, (2015) 375, arXiv:1506.06747 [hep-ph]. +[Erratum: Eur.Phys.J.C 80, 353 (2020)]. +[18] X. Feng, L. Jin, and M. J. Riberdy, “Lattice QCD Calculation of the Pion Mass Splitting,” +Phys. Rev. Lett. 128 no. 5, (2022) 052003, arXiv:2108.05311 [hep-lat]. +[19] R. Zwicky, “QED-Corrections to Weak Decays,” Symmetry 13 no. 11, (2021) 2036, +arXiv:2205.06194 [hep-ph]. +[20] S. Nabeebaccus and R. Zwicky, “Resolving charged hadrons in QED — gauge invariant +interpolating operators,” JHEP 11 (2022) 101, arXiv:2209.06925 [hep-ph]. +[21] M. A. Shifman, A. I. Vainshtein, and V. I. Zakharov, “QCD and Resonance Physics. +Theoretical Foundations,” Nucl. Phys. B147 (1979) 385–447. +[22] V. A. Novikov, M. A. Shifman, A. I. Vainshtein, and V. I. Zakharov, “Are All Hadrons +Alike? ,” Nucl. Phys. B 191 (1981) 301–369. +[23] E. V. Shuryak, “Pseudoscalar Mesons and Instantons,” Nucl. Phys. B 214 (1983) 237–252. +[24] Y. Y. Balitsky, V. M. Braun, and A. V. 
Kolesnichenko, “The decay Sigma+ —> p gamma in +QCD: Bilocal corrections in a variable magnetic field and the photon wave functions,” Sov. +J. Nucl. Phys. 48 (1988) 348–357. +[25] B. Pullin and R. Zwicky, “Radiative Decays of Heavy-light Mesons and the f (T ) +H,H∗,H1 Decay +Constants,” arXiv:2106.13617 [hep-ph]. +– 18 – + +[26] V. A. Nesterenko and A. V. Radyushkin, “Sum Rules and Pion Form-Factor in QCD,” Phys. +Lett. B 115 (1982) 410. +[27] M. Kirk, A. Lenz, and T. Rauh, “Dimension-six matrix elements for meson mixing and +lifetimes from sum rules,” JHEP 12 (2017) 068, arXiv:1711.02100 [hep-ph]. [Erratum: +JHEP 06, 162 (2020)]. +[28] T. Janowski, B. Pullin, and R. Zwicky, “Charged and neutral Bu,d,s → γ form factors from +light cone sum rules at NLO,” JHEP 12 (2021) 008, arXiv:2106.13616 [hep-ph]. +[29] Particle Data Group Collaboration, P. A. Zyla et al., “Review of Particle Physics,” PTEP +2020 no. 8, (2020) 083C01. +[30] G. Colangelo, S. Lanz, H. Leutwyler, and E. Passemar, “Dispersive analysis of η → 3π,” Eur. +Phys. J. C 78 no. 11, (2018) 947, arXiv:1807.11937 [hep-ph]. +[31] M. Gell-Mann, R. J. Oakes, and B. Renner, “Behavior of current divergences under SU(3) x +SU(3),” Phys. Rev. 175 (1968) 2195–2199. +[32] J. F. Donoghue, E. Golowich, and B. R. Holstein, Dynamics of the standard model, vol. 2. +CUP, 2014. +[33] M. A. Shifman, “Quark hadron duality,” in 8th International Symposium on Heavy Flavor +Physics, vol. 3, pp. 1447–1494. World Scientific, Singapore, 7, 2000. arXiv:hep-ph/0009131. +[34] I. I. Y. Bigi, M. A. Shifman, and N. Uraltsev, “Aspects of heavy quark theory,” Ann. Rev. +Nucl. Part. Sci. 47 (1997) 591–661, arXiv:hep-ph/9703290. +[35] G. S. Bali, F. Bruckmann, M. Constantinou, M. Costa, G. Endrodi, S. D. Katz, +H. Panagopoulos, and A. Schafer, “Magnetic susceptibility of QCD at zero and at finite +temperature from the lattice,” Phys. Rev. D 86 (2012) 094512, arXiv:1209.6015 +[hep-lat]. +[36] C. McNeile, A. Bazavov, C. T. H. Davies, R. J. 
Dowdall, K. Hornbostel, G. P. Lepage, and +H. D. Trottier, “Direct determination of the strange and light quark condensates from full +lattice QCD,” Phys. Rev. D 87 no. 3, (2013) 034503, arXiv:1211.6577 [hep-lat]. +[37] B. L. Ioffe, “Condensates in quantum chromodynamics,” Phys. Atom. Nucl. 66 (2003) 30–43, +arXiv:hep-ph/0207191. +[38] M. Jamin and B. O. Lange, “fB and fBs from QCD sum rules,” Phys. Rev. D65 (2002) +056005, arXiv:hep-ph/0108135 [hep-ph]. +[39] P. Ball and R. Zwicky, “SU(3) breaking of leading-twist K and K* distribution amplitudes: +A Reprise,” Phys. Lett. B 633 (2006) 289–297, arXiv:hep-ph/0510338. +[40] S. Weinberg, “Precise relations between the spectra of vector and axial vector mesons,” +Phys. Rev. Lett. 18 (1967) 507–509. +[41] R. Zwicky, “A brief Introduction to Dispersion Relations and Analyticity,” in Quantum Field +Theory at the Limits: from Strong Fields to Heavy Quarks. 10, 2016. arXiv:1610.06090 +[hep-ph]. +[42] A. Bharucha, D. M. Straub, and R. Zwicky, “B → V ℓ+ℓ− in the Standard Model from +light-cone sum rules,” JHEP 08 (2016) 098, arXiv:1503.05534 [hep-ph]. +[43] D. J. Gross, S. B. Treiman, and F. Wilczek, “Light Quark Masses and Isospin Violation,” +Phys. Rev. D 19 (1979) 2188. +[44] E. Witten, “Some Inequalities Among Hadron Masses,” Phys. Rev. Lett. 51 (1983) 2351. +– 19 – + +[45] R. F. Dashen, “Chiral SU(3) x SU(3) as a symmetry of the strong interactions,” Phys. Rev. +183 (1969) 1245–1260. +[46] J. F. Donoghue, B. R. Holstein, and D. Wyler, “Electromagnetic selfenergies of pseudoscalar +mesons and Dashen’s theorem,” Phys. Rev. D 47 (1993) 2089–2097. +[47] Z. Fodor, C. Hoelbling, S. Krieg, L. Lellouch, T. Lippert, A. Portelli, A. Sastre, K. K. Szabo, +and L. Varnhorst, “Up and down quark masses and corrections to Dashen’s theorem from +lattice QCD and quenched QED,” Phys. Rev. Lett. 117 no. 8, (2016) 082001, +arXiv:1604.07112 [hep-lat]. +[48] A. 
Portelli, “Inclusion of isospin breaking effects in lattice simulations,” PoS +LATTICE2014 (2015) 013, arXiv:1505.07057 [hep-lat]. +– 20 – + diff --git a/bdE4T4oBgHgl3EQfPQyn/content/tmp_files/load_file.txt b/bdE4T4oBgHgl3EQfPQyn/content/tmp_files/load_file.txt new file mode 100644 index 0000000000000000000000000000000000000000..1f13211215981c6ce547f39c2ece80a94a939028 --- /dev/null +++ b/bdE4T4oBgHgl3EQfPQyn/content/tmp_files/load_file.txt @@ -0,0 +1,1088 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf,len=1087 +page_content='Prepared for submission to JHEP CERN-TH-2023-005 Isospin Mass Differences of the B, D and K Matthew Rowe,1 Roman Zwicky1,2 1Higgs Centre for Theoretical Physics, School of Physics and Astronomy, University of Edinburgh, Edinburgh EH9 3JZ, Scotland 2Theoretical Physics Department, CERN, Esplanade des Particules 1, Geneva CH-1211, Switzerland E-mail: m.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content='j.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content='rowe@sms.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content='ed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content='ac.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content='uk, roman.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content='zwicky@ed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content='ac.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content='uk Abstract: We compute the electromagnetic mass difference for the B-, D- and K-mesons using QCD sum rules with double dispersion relations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' For the B- and D-mesons we also compute the linear quark mass correction, whereas for the K the standard soft theorems prove more powerful.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' The mass differences, which have not previously been computed via a double dispersion, are fully consistent with experiment, albeit with large uncertainties.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Contents 1 Introduction 1 2 Electromagnetic Mass Difference ∆mH|QED from QCD Sum Rules 3 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content='1 B- and D-meson with Pseudoscalar Operators 3 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content='1 Numerics 5 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content='2 K-meson with Axial Operators 6 3 Linear Quark Mass Correction ∆mH|mq 8 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content='1 QCD Sum Rule Computation of ⟨ ¯H|¯qq| ¯H⟩ for H = B, D 8 3.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content='1 Numerics 9 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content='2 SU(3)F estimates of ⟨ ¯H|¯qq| ¯H⟩ for H = B, D 10 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content='3 Soft Goldstone estimate of ⟨L|¯qq|L⟩ for L = π, K 10 4 Final Overview and Conclusions 11 A Variants of Quark-Hadron Duality 12 A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content='1 Weight function ω(s) = s 13 A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content='2 Weight function ω(s) = 1 s−η 14 B Numerical Input 14 B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content='1 Decay constants fB, fD and fK 14 arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content='04972v1 [hep-ph] 12 Jan 2023 C Self Energies and Condensates for ∆mH|QED 14 C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content='1 Perturbation theory 15 C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content='2 Condensates 16 D Some Classic Results 16 D.' 
D.1 Linear quark mass dependence from Feynman-Hellmann theorem 16
D.2 ∆mπ|QED from soft theorem and Weinberg sum rules 16

1 Introduction

The mass difference of charged and neutral hadrons,

∆mH = mH+ − mH0 , H = B, D, K, π, p , (1.1)

is an isospin breaking effect and has intrigued particle physicists from the very beginning. In particular the proton-neutron [1] and the π+-π0 [2] mass differences have been discussed extensively. At the microscopic level ∆mH is driven by differences in the electric charge and the mass mq of the hadron's light valence quark q = u, d,

∆mB = ∆mB|QED + ∆mB|mq . (1.2)

The sign and the size depend on the hadron in question, and QED stands for quantum electrodynamics.¹,² Recent lattice Monte Carlo simulations [3, 4] have verified this to high accuracy, for light and charm mesons, by computing both the charged and the neutral mass and effectively using (1.1). One may take a different approach and compute the two differences in (1.2) separately by using the second order perturbation theory formula (with H = B for definiteness)³

δmB|QED = −iα/(2mB(2π)³) ∫ d⁴q T(B)µν(q) ∆µν(q) + O(α²) , (1.3)

with

∆mB|QED ≡ δmB+|QED − δmB0|QED , (1.4)

known in the current algebra era [7, 8]. Above, ∆µν(q) = (1/q²)(−gµν + (1 − ξ) qµqν/q²) is the photon propagator, α = e²/(4π) the fine structure constant, and T(B)µν(q) is the (uncontracted) forward Compton scattering tensor,

T(B)µν(q) = i ∫ d⁴x e^{−iq·x} ⟨B|T jµ(x)jν(0)|B⟩ , (1.5)

with jα = Σq Qq q̄γαq the electromagnetic current.

¹ Strictly speaking the separation (1.2) is not well-defined as it requires fixing a (quark mass) renormalisation scheme, e.g. [3]. In turn this is a reason for being interested in the problem as, especially light, quark masses cannot be determined to high precision without folding in QED. This shows for example in the D-meson results in comparison between [3] and [4]. For our purposes ∆mB|mq is as defined from (1.7).
² Effects due to the weak force are of O(Λ²QCD/m²W) with respect to QED and are thus negligible. Similar effects are relevant in the context of neutral meson mixing, e.g. [5, 6].
³ Note that in the literature the notation ∆m²B ≡ 2mB∆mB is also frequently used.
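The ξ-dependent part of the propagator ∆µν(q) cannot contribute once the tensor it is contracted with is transverse, qµ Tµν = 0, as current conservation demands. The gauge-parameter independence can be illustrated numerically; the toy transverse tensor, the metric convention and the chosen momentum below are illustrative assumptions, not the paper's actual Compton tensor:

```python
# metric (+,-,-,-); q chosen so that q² = 1 exactly
g = [1.0, -1.0, -1.0, -1.0]
q_up = [1.3, 0.4, -0.7, 0.2]
q_dn = [g[m] * q_up[m] for m in range(4)]
q2 = sum(q_up[m] * q_dn[m] for m in range(4))

def T_low(m, n):
    # toy transverse tensor T_{mu nu} = -g_{mu nu} + q_mu q_nu / q²,
    # satisfying q^mu T_{mu nu} = 0 (current conservation)
    return -(g[m] if m == n else 0.0) + q_dn[m] * q_dn[n] / q2

def prop_up(m, n, xi):
    # photon propagator Delta^{mu nu}(q) in general gauge, as in Eq. (1.3)
    return (-(g[m] if m == n else 0.0) + (1.0 - xi) * q_up[m] * q_up[n] / q2) / q2

def contract(xi):
    # T_{mu nu} Delta^{mu nu}: the (1 - xi) piece drops out by transversality
    return sum(T_low(m, n) * prop_up(m, n, xi)
               for m in range(4) for n in range(4))

print(contract(0.0), contract(1.0))  # identical for any gauge parameter xi
```

Varying ξ leaves the contraction unchanged, which is the numerical counterpart of the statement below that the bq-cut of Γqq′ is gauge independent.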
In 1963, Cottingham [9] improved this formula by parameterising it in terms of form factors and relating it to structure functions: by deforming the contour q0 → iq0, writing a dispersion representation, and assessing the number of subtraction terms of the form factors, he was able to write the contribution as an integral over Q² = −q² ≥ 0 and ν = p·q/mB in the physical region. This opened the gate for many phenomenological studies saturating the dispersion relation by a few terms beyond the elastic one and using high energy constraints. This is a formidable task, as one requires the knowledge of a correlation function over the entire energy range, akin to the situation of the vacuum polarisation for the anomalous magnetic moment. Some examples are for K, π [10, 11] using chiral perturbation theory (and large Nc), for B and D [12, 13] using heavy quark theory (and large Nc), for the proton-neutron [14] with updated fits to the structure functions, and an approach to B, D, K and π using vector meson dominance [15]. Another interesting point, not unrelated, is that (1.3) requires renormalisation [16], and it was argued that it is justified to cut off the Q²-integral. Debates about subtraction terms are ongoing, cf. [14] and the response [17]. Here we do not follow this phenomenological approach but evaluate (1.5) directly in Minkowski space using double dispersion relation sum rules, and thus determine the mass differences from a unified framework (i.e. same hadronic input).⁴ To the best of our knowledge this has not been done previously with sum rules, presumably due to the subtleties of non gauge-invariant interpolating currents [19, 20].
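The dispersion-representation idea can be illustrated in the simplest single-variable setting: a pole 1/(m² − q²) is recovered from its spectral density ρ(s) → δ(s − m²) via f(q²) = ∫ ds ρ(s)/(s − q²). A minimal sketch, with a narrow Gaussian standing in for the delta function and all numbers hypothetical:

```python
import math

def dispersive(rho, q2, lo, hi, n=20000):
    # single-variable dispersion integral f(q²) = ∫ ds ρ(s)/(s − q²),
    # evaluated with the trapezoid rule on [lo, hi]
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        s = lo + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * rho(s) / (s - q2) * h
    return total

m2, width = 4.0, 0.01
norm = 1.0 / (width * math.sqrt(math.pi))            # unit-area Gaussian
rho = lambda s: norm * math.exp(-((s - m2) / width) ** 2)
# as ρ(s) → δ(s − m²) this reproduces the pole 1/(m² − q²)
print(dispersive(rho, q2=-1.0, lo=3.0, hi=5.0))      # ≈ 1/(4 − (−1)) = 0.2
```

The double dispersion relation used below is the two-variable analogue, with one such integral for each of the two hadron momenta.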
For example, in leptonic decays this requires the introduction of a non-local interpolating operator (or an auxiliary scalar field carrying the charge to infinity) for gauge invariance and reproduction of all infrared sensitive logs [20]. However, in the case at hand this is not necessary, as verified by explicit computation, since ∆mB is an infrared safe quantity. An efficient and transparent way to implement the first order quark mass corrections is to make use of the Feynman-Hellmann theorem, which gives

m²B|mq = Σq mq ⟨B|q̄q|B⟩ , (1.6)

as rederived in App. D.1. For the difference (1.1) this gives

∆mB|mq = (mu − md)/(2mB) ⟨B|q̄q|B⟩ + O((mu − md)²) . (1.7)

The matrix element ⟨B|q̄q|B⟩ can be evaluated in the isospin degenerate limit q = u = d since we work to leading order (LO). For the B- and the D-meson we compute this matrix element, whereas for the kaon and the pion a soft theorem, ⟨π|q̄q|π⟩ = −(2/f²π) ⟨0|q̄q|0⟩ + O(m²π/m²ρ) with fπ ≈ 131 MeV, due to their pseudo-Goldstone nature, proves more effective. In principle one could compute all the ∆mB|mq-effects with the QCD analogue of (1.3), but this would be rather inefficient; we comment further in the relevant section.

⁴ This function has been evaluated for the pion on the lattice, with good agreement with experiment, only very recently using the infinite volume reconstruction method [18].

Another noteworthy aspect is that we were not able to obtain stable sum rules for the pion (cf. Sec. 2.2).
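The leading-order relation (1.7) and the soft-theorem form of the pion matrix element can be sketched in code. The numerical inputs below (quark masses, a condensate value, and the ⟨B|q̄q|B⟩ placeholder) are illustrative assumptions, not the paper's fitted values:

```python
def delta_m_mq(mu, md, mB, sigmaB):
    # Eq. (1.7): ∆mB|mq = (mu − md)/(2 mB) · ⟨B|q̄q|B⟩, to leading order,
    # so the shift vanishes in the isospin limit and is linear in (mu − md)
    return (mu - md) / (2.0 * mB) * sigmaB

def sigma_pi(condensate, f_pi=0.131):
    # soft theorem: ⟨π|q̄q|π⟩ = −(2/fπ²) ⟨0|q̄q|0⟩, with fπ ≈ 131 MeV
    return -2.0 * condensate / f_pi ** 2

# illustrative only: hypothetical ⟨B|q̄q|B⟩ = 2.0 GeV and GeV-unit quark masses
print(delta_m_mq(0.0022, 0.0047, 5.28, 2.0))
# a negative vacuum condensate, here −(0.24 GeV)³ as a commonly quoted
# ballpark, gives a positive pion matrix element
print(sigma_pi(-(0.24) ** 3))
```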
The paper is organised as follows. In Sec. 2 the electromagnetic computation is presented, followed by the quark mass correction in Sec. 3. We give an overview of the results and the conclusions in Sec. 4. Comments on quark hadron duality, the numerical input, some (extra) computation and useful classic results are collected in Apps. A, C, B and D respectively.

2 Electromagnetic Mass Difference ∆mH|QED from QCD Sum Rules

The electromagnetic mass difference follows from the formula quoted in (1.3), and it is our task to evaluate it. The main theoretical challenge is to incorporate the two hadrons, for which a non-perturbative method is needed. We use QCD sum rules [21] with a double dispersion relation. The first step involves the adoption of an interpolating operator. For the heavy mesons a pseudoscalar current is suitable and has proven to give good results in many other contexts. For particles of light quark masses, and Goldstone particles in particular [22], pseudoscalar interpolating operators are unsuitable as they are infested by so-called direct instantons [23].⁵ We therefore discuss the heavy mesons and the K-meson separately in Secs. 2.1 and 2.2 respectively. An important criterion in assessing the validity of our sum rules is the so-called daughter sum rule, which we consider worthwhile to present now. In the simple single dispersion relation case this criterion reads

m²B(s0, M²) = ∫_{cut}^{s0} ds e^{−s/M²} ρ(s) s / ∫_{cut}^{s0} ds e^{−s/M²} ρ(s) , (2.1)

where M² is the Borel parameter, the "cut" marks the onset of physical states, and ρ(s) = rB δ(s − m²B) + ... is the spectral density, with the dots standing for states above the continuum threshold s0. Formally, the residue rB drops out in the ratio. In practice ρ(s) is a continuous function in partonic computations, and Eq. (2.1) should be seen as a self-consistency criterion for an s0 in the range of (mB + 2mπ)² to (mB + 4mπ)². If that is the case, then Eq. (2.1) can be used to fix the central value of s0.

2.1 B- and D-meson with Pseudoscalar Operators

As motivated at the beginning of the section, the default choice for heavy-light 0⁻ meson interpolating operators is

JB = m+ b̄iγ5q , ZB ≡ ⟨B̄|JB|0⟩ = m²B fB , m+ ≡ (mb + mq) . (2.2)

In determining (1.3), one of the main challenges is that the momenta of the two B-mesons are degenerate.
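The daughter-sum-rule criterion (2.1) can be checked numerically: for a spectral density dominated by a single narrow resonance, the ratio returns the resonance mass squared, independently of the residue. A toy sketch with a Gaussian stand-in for the resonance; all numbers are hypothetical:

```python
import math

def daughter_ratio(rho, cut, s0, M2, n=6000):
    # Eq. (2.1): m_B²(s0, M²) = ∫ ds e^{−s/M²} ρ(s) s / ∫ ds e^{−s/M²} ρ(s),
    # over [cut, s0], evaluated here with the trapezoid rule
    h = (s0 - cut) / n
    num = den = 0.0
    for i in range(n + 1):
        s = cut + i * h
        w = 0.5 if i in (0, n) else 1.0
        f = w * math.exp(-s / M2) * rho(s) * h
        num += f * s
        den += f
    return num / den

# toy narrow resonance at s = m2 (units GeV², values hypothetical)
m2 = 27.9
rho = lambda s: math.exp(-((s - m2) / 0.05) ** 2)
ratio = daughter_ratio(rho, cut=20.0, s0=35.0, M2=5.0)
print(ratio)  # ≈ m2: the overall residue cancels in the ratio
```

Rescaling ρ(s) by any constant leaves the ratio unchanged, which is the numerical counterpart of the statement that rB drops out.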
We bypass this problem by introducing an auxiliary momentum r into one of the currents and letting it flow out at one of the two interpolating operators. Concretely, we start from

Γqq′(p², p̃²) = c i³ ∫_{x,y,z,q} e^{i(p̃·z − p·y − (q+r)·x)} ⟨0|T J†B(z) jµ(x) jν(0) JB(y)|0⟩ ∆µν(q)|_{QqQq′}
= ∫₀^∞ ds ∫₀^∞ ds̃ ρΓqq′(s, s̃) / ((s − p²)(s̃ − p̃²)) = Z²B δqq′mB / ((m²B − p²)(m²B − p̃²)) + ... , (2.3)

with c ≡ −iα/(2mB(2π)³), p̃ = p + r, the shorthands xp ≡ x·p and ∫_{q,x} ≡ ∫ d⁴q d⁴x, and the density given by

(2πi)² ρΓqq′(s, s̃) = disc_{s,s̃}[Γqq′(s, s̃)] , (2.4)

the double discontinuity, with further relevant explanations at the end of the section. The quantity δqq′mB denotes the part proportional to the QqQq′-charges. Of course the auxiliary momentum r has to disappear from the final result. This is achieved by the on-shell condition "p̃² = p²" and is implemented in practice by treating them equally (p-p̃ symmetry) and requiring the daughter sum rule to be satisfied reasonably well. The QCD sum rule is then given by

δqq′mB = (1/Z²B) ∫_{m²+}^{δ̄(a)(m²+)} ds e^{(m²B − s)/M²} ∫_{m²+}^{δ̄(a)(s)} ds̃ e^{(m²B − s̃)/M²} ρΓqq′(s, s̃) , (2.5)

where M² is the Borel parameter from the Borel transformation and δ̄(a) is the continuum threshold

δ̄(a)(s) = 2^{1/a} σ0 (1 − (s/(2^{1/a}σ0))^a)^{1/a} , (2.6)

which is complicated for double dispersion sum rules [24]. Here it is implemented as in [25], but simplified since the two hadrons are identical, implying M² → 2M̂² and s̃0 = t̃0 = 2^{1/a}σ0^(a) (allowing for elimination of those parameters). The number σ0 ≈ 35 GeV² takes on the rôle of s0 in (2.1), and we shall use the notation s0 ≡ σ0 hereafter for reasons of familiarity. The parameter a is a model parameter, and the independence of the result from a is a measure of the quality of the result itself. Let us turn to the computation. In perturbation theory there is the diagram connecting the q- to the b-quark, and the self energies. We focus on the former, as it is numerically dominant, and present the self energies and the condensate contribution in App. C.

⁵ For the heavy mesons axial interpolating operators are unsuitable because the 1⁺ states are relatively low, e.g. for the JP = 0⁻ B-meson with mB ≈ 5.28 GeV there is a 1⁺ B1(5721) with mB1 ≈ 5.72 GeV. This is too close to the two pion threshold and even below the typical continuum threshold s0 ≈ (6 GeV)² assumed for the pseudoscalar operators.

Figure 1. Diagrams contributing to the correlation function in (2.3), with the double line representing the b-quark. (left) Main diagram of the QbQq mixed type. (middle) b- and q-quark self energies. (right) ⟨q̄q⟩-condensate part of the b-quark self energy. There is no corresponding part for the q-quark self energy since ⟨b̄b⟩ is negligibly small. For the mass difference only the first one is relevant, while the others are useful to obtain stable sum rules as described in the text.
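A quick property check of the continuum threshold function (2.6): at s = σ0 it returns σ0 for every value of the model parameter a, consistent with σ0 taking on the rôle of s0 independently of a. A minimal sketch, using the σ0 ≈ 35 GeV² quoted above:

```python
def threshold(s, sigma0, a):
    # Eq. (2.6): δ̄^(a)(s) = 2^{1/a} σ0 (1 − (s/(2^{1/a} σ0))^a)^{1/a}
    c = 2.0 ** (1.0 / a) * sigma0
    return c * (1.0 - (s / c) ** a) ** (1.0 / a)

# fixed point: δ̄^(a)(σ0) = σ0 for any a, since (s/c)^a = 1/2 exactly at s = σ0
print([threshold(35.0, 35.0, a) for a in (1.0, 2.0, 8.0)])  # each ≈ 35.0
```

For large a the function approaches a sharp step at σ0, so a interpolates between smooth and hard continuum onsets; independence of the final result from a is the quality measure mentioned in the text.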
The computation can be done analytically, and we obtain the following compact result for the density

ρΓbq = (Nc α Qq Qb m²+)/(32π³ mB) √(λλ̃/(s s̃)) [ A + (B/b) ln((a + b)/(a − b)) ] , (2.7)

where

a = m²q − (1/(4√(s s̃)))[s s̃ + (m+m−)²] + (q ↔ b) ,
b = (1/2) √(λλ̃/(s s̃)) ,
A = m²− ,
B = [ Y Ỹ s s̃ + (1/2) m²q √(s s̃)(Y + Ỹ) − (1/4) m²−(s + s̃ + 4mbmq + 2m²q) − (1/4) m²+ √(s s̃) ] + (q ↔ b) ,

with the further abbreviations

m± = mb ± mq , λ = λ(s, m²b, m²q) , Y = (s − m+m−)/(2s) , (2.8)

λ(x, y, z) = x² + y² + z² − 2xy − 2xz − 2yz is the Källén function, and in the tilde quantities Ỹ and λ̃ we have s → s̃. A few words about the computation. We have taken the discontinuity in (2.4) using Cutkosky rules. A crucial point is that we do not cut the photon propagator, as this would be a QED correction to the B-meson state and does not contribute to (1.3). This amends the meaning of (2.4). Let us turn to the usage of the auxiliary momentum r in the context of double dispersion sum rules. First we note that this is different to a form factor computation, e.g. Fπ→π(q²) [26], where the momentum transfer naturally takes on the rôle of this variable. It is closer to ∆F = 2 matrix elements, as there is no momentum transfer but the flavour contractions naturally lead to a symmetric configuration (e.g. [27]), which is more straightforward. In fact, since our procedure (2.3) artificially breaks the bq-symmetry, a and B turn out to be non-symmetric whereas b and A remain symmetric. This has to be remedied by the following substitution

a → (1/2)(a + a|b↔q) , B → (1/2)(B + B|b↔q) , (2.9)

which is apparent from the way the Cutkosky cuts work out. We have performed the computation in general gauge. Of course Γqq′ is gauge dependent, but as stated earlier its discontinuity in the bq-quark lines is not. This is the case since the particles are put on the mass shell, and it is important that the quantity is infrared safe. Otherwise, as previously stated, one needs to introduce extra machinery [20].
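Two small checks on the ingredients above: the Källén function of (2.8) vanishes at the thresholds s = (mb ± mq)², and the substitution (2.9) restores b↔q symmetry for any asymmetric quantity. A sketch, with a toy asymmetric function and illustrative masses:

```python
def kallen(x, y, z):
    # Källén function λ(x, y, z) = x² + y² + z² − 2xy − 2xz − 2yz, cf. (2.8)
    return x * x + y * y + z * z - 2.0 * (x * y + x * z + y * z)

def symmetrize(f):
    # Eq. (2.9): a → (a + a|_{b↔q})/2, restoring the broken b↔q symmetry
    return lambda mb, mq: 0.5 * (f(mb, mq) + f(mq, mb))

mb, mq = 4.7, 0.3                                 # illustrative masses in GeV
print(kallen((mb + mq) ** 2, mb ** 2, mq ** 2))   # ≈ 0 at threshold
asym = lambda b, q: b ** 2 + 2.0 * q              # toy b↔q asymmetric quantity
sym = symmetrize(asym)
print(sym(mb, mq) == sym(mq, mb))                 # symmetric by construction
```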
2.1.1 Numerics

Our numerics have three cornerstones: the hadronic input parameters in Tab. 2, the daughter sum rule (2.1), and the choice of a mass scheme for mb. Whereas there is nothing to say about point one, the others are in need of some explanation. We start with the B-meson case. The daughter sum rule constrains the sum rule parameters: the continuum threshold s0 and the Borel parameter M². Additional constraints, defining the Borel window, are the convergence of the condensate expansion and keeping the B-pole term dominant versus the continuum contribution [21]. Let us turn to the question of the mass scheme, which is not independent of the second point. We consider the pole-, the kinetic- and the MS-scheme. In the pole scheme the b,c-quark self energy contributions (perturbative and condensate, diagrams 2 and 4 in Fig. 1) vanish and the sum rules are not stable (there is no Borel window), and we therefore discard it. For the MS-scheme the b-quark self energies are dominant, with the b-q contribution comparable to the condensates. Since these contributions cancel in the observable ∆m, this scheme is not ideal either and we therefore drop it. Hence we are left with the kinetic scheme for the b-quark, which shows good properties as for the B → γ form factor [28] and the gBB∗γ-couplings [25]. For the c-quark the self energies are not dominant and we use the MS-scheme, also because the kinetic scheme has proven unsuitable for gDD∗γ [25]. As stated above, the daughter sum rule (2.1) is used to fix s0.
For that purpose it is instructive to define the normalised ratio
\[
U(s_0, M^2) \equiv \frac{1}{m_B^2}\, m_B^2(s_0, M^2) \;, \qquad (2.10)
\]
of the sum rule value over the experimental one, which has to be close to unity for self-consistency of the approach. This leads to
\[
\{s_0, \hat M^2\}_B = \{35.2(1.0),\, 2.6(0.5)\}\,{\rm GeV}^2 \;, \qquad
\{s_0, \hat M^2\}_D = \{5.5(1),\, 1.0(0.25)\}\,{\rm GeV}^2 \;, \qquad (2.11)
\]
for which
\[
U(s_0 \pm 1\,{\rm GeV}^2, M^2)_{\Delta m_B|_{\rm QED}} = 1 \pm 0.01 \;, \qquad
U(s_0 \pm 0.1\,{\rm GeV}^2, M^2)_{\Delta m_D|_{\rm QED}} = 1 \pm 0.01 \;.
\]
Using the input parameters in Tab. 2 (with $m_b^{\rm kin}(1\,{\rm GeV})$, $\bar m_c(\bar m_c)$) and the $f_{B,D}$ sum rule to LO (cf. App. B.1) for the $Z_B$-factor we get
\[
\Delta m_B|_{\rm QED} = +1.58^{+0.26}_{-0.23}\,{\rm MeV} \;, \qquad
\Delta m_D|_{\rm QED} = +2.25^{+0.89}_{-0.52}\,{\rm MeV} \;, \qquad (2.12)
\]
where the error is obtained by adding the individual errors in quadrature. The dominant error is due to the heavy quark mass $m_{b(c)}$ (50-60%). The Borel mass $M^2$ and the duality parameter $a$ each contribute a 20-25% uncertainty. The error in $a$ is quantified by taking the standard deviation of the results with $a \in [\tfrac12, 1, 2, \infty]$. The errors for the $D$-meson are larger, reflecting the generically inferior quality of the sum rule.

2.2 K-meson with Axial Operators

As explained at the beginning of this section, pseudo Goldstone bosons cannot be interpolated by pseudoscalar operators and one therefore resorts to axial ones
\[
A_\mu = \bar q\, \gamma_\mu \gamma_5\, s \;, \qquad \langle 0|A_\mu|K(p)\rangle = i p_\mu f_K \;. \qquad (2.13)
\]
The correlation function corresponding to (2.3) assumes the form
\[
\Gamma^{\alpha\beta}_{qq'}(p^2, \tilde p^2) = c\, i^3 \int_q \int_{x,y,z} e^{i(\tilde p z - p y - (q+r)x)}\, \langle 0|T A^\alpha(z) j_\mu(x) j_\nu(0) A^{\dagger\,\beta}(y)|0\rangle\, \Delta^{\mu\nu}(q)\big|_{Q_q Q_{q'}}
= g^{\alpha\beta}\,\Gamma^{(0)}_{qq'} + p^\alpha p^\beta\, \Gamma^{(2)}_{qq'} + O(r) + \dots \;, \qquad (2.14)
\]
where the $O(r)$-terms are not of interest to us. The decisive information is in the $p^\alpha p^\beta$-term, which takes on the form
\[
\Gamma^{(2)}_{qq'} = \frac{f_K^2\, \delta_{qq'}\, m}{(m_K^2 - p^2)(m_K^2 - \tilde p^2)} + \dots \;, \qquad (2.15)
\]
in a hadronic representation, where the dots represent higher states in the spectrum (which includes the $K^*$-meson in this case).
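The coefficients in the decomposition (2.14) can be projected out by contracting with $g_{\alpha\beta}$ and $p_\alpha p_\beta$. The stdlib-only sketch below (ours, not from the paper) only checks this linear algebra, building the trace and the fully contracted correlator from known coefficients with $p^2 = s$ and solving back:

```python
from fractions import Fraction

def project_gamma2(s, d, trace, contracted):
    """Invert Gamma^{ab} = g^{ab} G0 + p^a p^b G2 (with p^2 = s):
    trace      = g_{ab} Gamma^{ab} = d*G0 + s*G2,
    contracted = p_a p_b Gamma^{ab} = s*G0 + s^2*G2,
    and solve the 2x2 system for G2."""
    return (s * trace - d * contracted) / (s**2 * (1 - d))

# Spot-check with exact rationals: build trace/contraction from known
# (G0, G2) and confirm the projection returns G2.
for s, d, G0, G2 in [(Fraction(3), 4, Fraction(7), Fraction(5)),
                     (Fraction(10), 4, Fraction(-2), Fraction(9, 2))]:
    trace = d * G0 + s * G2
    contracted = s * G0 + s**2 * G2
    assert project_gamma2(s, d, trace, contracted) == G2
print("decomposition projection consistent")
```

The linear combination in the return line is the same one that appears in the Ward-identity formula (2.16) below, upon identifying the contracted object with $\Gamma$.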
Let us turn to the computation, which involves some practical matters. Computing the double discontinuity of $\Gamma^{(2)}_{qq'}$ is laborious as there are open Lorentz indices. One may though obtain the same information from a linear combination of (2.3) and (2.14) with contracted indices. It follows from Ward identities that ($d = 4$)
\[
\Gamma^{(2)}(s, s) = \frac{1}{s^2(1-d)} \left( s\, \Gamma^{\alpha}{}_{\alpha}(s, s) - d\, \Gamma(s, s) \right) \;, \qquad (2.16)
\]
where we omitted the $qq'$-subscript for brevity and have set $s = \tilde s$. The generalisation to $s \neq \tilde s$ is in principle ambiguous, but fortunately the differences are not that sizeable. Concretely we use
\[
\Gamma^{(2)}(s, \tilde s) = \frac{1}{s \tilde s (1-d)} \left( \tfrac12 (s + \tilde s)\, \Gamma^{\alpha}{}_{\alpha}(s, \tilde s) - d\, \Gamma(s, \tilde s) \right) \;, \qquad (2.17)
\]
and the analogous expression of (2.7), which is lengthy for the Kaon, is given in a Mathematica ancillary notebook attached to the arXiv version. Changing the prescription (2.17) by $\tfrac12(s + \tilde s) \to \sqrt{s \tilde s}$ results in a 15%-change, which is sizeable but not extremely large and well within the error. In addition we use a weight function $1/(s \tilde s)$ as described in App. A.2, as otherwise the daughter sum rule is off by at least a factor of two, which is very large in view of how well it works in all other cases. Proceeding as before we obtain the following values
\[
\{s_0, \hat M^2\}_K = \{0.7(1),\, 0.95(0.5)\}\,{\rm GeV}^2 \;, \qquad
U(s_0 \pm 0.1, M^2)_{\Delta m_K|_{\rm QED}} = 1.00 \pm 0.10 \;, \qquad (2.18)
\]
for the sum rule parameters and the daughter sum rule (2.10). Using the input parameters in Tab. 2, the $f_K$ sum rule to LO (cf. App. B.1) and (2.18) we get
\[
\Delta m_K|_{\rm QED} = +1.85^{+0.42}_{-0.66}\,{\rm MeV} \;. \qquad (2.19)
\]
Scale dependent quantities are evaluated at $\mu = 2\,{\rm GeV}$. The uncertainty again comes from adding individual errors in quadrature. The dominant uncertainty (75%) comes from the $m_s$ mass, with the remaining uncertainty due to the duality parameter $a$ in (2.6). As stated in the introduction, the pion proved more difficult. That is, we were not able to find stable sum rules satisfying the daughter sum rule for reasonable values of the continuum threshold.⁶ We believe that is due to its small mass $m_\pi$, which is considerably below the other hadronic masses.
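The error treatment used here and in Sec. 2.1.1 (individual uncertainties added in quadrature, with the duality-parameter contribution taken as a standard deviation over the scan $a \in \{\tfrac12, 1, 2, \infty\}$) can be sketched as follows; all component numbers below are hypothetical placeholders, not the actual error budget of this work.

```python
import math
import statistics

# Hypothetical sum rule results (MeV) for the duality-parameter scan
# a in {1/2, 1, 2, infinity}; their spread defines the a-uncertainty.
results_vs_a = [1.52, 1.58, 1.63, 1.66]
err_a = statistics.stdev(results_vs_a)

# Hypothetical individual error components (MeV), e.g. quark mass,
# Borel parameter, and the duality parameter from above.
components = [0.20, 0.10, err_a]

# Total uncertainty: components added in quadrature.
total = math.sqrt(sum(e * e for e in components))
print(f"err_a = {err_a:.3f} MeV, total = {total:.3f} MeV")
```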
Conversely the Kaon mass, while being a pseudo-Goldstone, is much closer to the other hadrons (due to $m_s$ being close to $\Lambda_{\rm QCD}$).

⁶The extra disconnected diagram for the $\pi^0$, e.g. [18], is small since the $\gamma_5$ generates a Levi-Civita tensor which enforces two extra loops. This is reflected in the smallness of the lattice result [18] and also by the fact that the LO chiral Lagrangian does not contribute to $\pi^0$ (cf. App. D.2).

3 Linear Quark Mass Correction $\Delta m_H|_{m_q}$

As stated in the introduction (and cf. App. D.1), the $O(m_q)$-corrections are governed by $\langle H|\bar qq|H\rangle$ (1.7). For the $B$-, $D$-meson we compute this matrix element from QCD sum rules in Sec. 3.1, using similar techniques as for the QED correction, and for light mesons we resort to soft theorems, cf. Sec. 3.3, as the corresponding sum rules are inferior.

3.1 QCD Sum Rule Computation of $\langle \bar H|\bar qq|\bar H\rangle$ for $H = B, D$

In order to anticipate the hierarchy of diagrams shown in Fig. 2, it is worthwhile to contemplate the heavy quark behaviour.
The matrix element scales like ($H = B$ for definiteness)
\[
\langle B|\bar qq|B\rangle = O(m_b) \;, \qquad (3.1)
\]
for relativistically normalised states, $\langle B(p)|B(q)\rangle = 2E_B(\vec p)\,(2\pi)^3 \delta^{(3)}(\vec p - \vec q)$, due to the factor $E_B = O(m_b)$. On the one hand, the operator $\bar qq$ demands a chirality flip in perturbation theory, and this cannot come from the $m_b$-mass since the latter is entirely kinematic as we have just established. On the other hand, the condensate contribution itself, $\langle \bar qq\rangle$, does not require this flip and is therefore unsuppressed and numerically leading.

Figure 2. Diagrams contributing to the matrix element $\langle B|\bar qq|B\rangle$. They are analogous to the ones in Fig. 1 but the square blob denotes the insertion of the $\bar qq$-operator.
Perturbation theory is minimal and the quark condensate diagram is the main contribution. The mixed condensate diagrams $\langle \bar q G q\rangle$ are mainly useful to stabilise the sum rule. To do the computation we start from the following correlation function
\[
\Pi(p^2, \tilde p^2, r) = i^2 \int_{y,z} e^{i(\tilde p z - p y - x r)}\, \langle 0|T J^{\dagger}_B(z)\, (\bar qq)(x)\, J_B(y)|0\rangle \;, \qquad (3.2)
\]
where $J_B$ has been defined in (2.2) and the auxiliary momentum $r$ takes on the same rôle as before. The double dispersion relation of the correlation function reads
\[
\Pi(p^2, \tilde p^2, r) = \int ds\, d\tilde s\, \frac{\rho_\Pi(s, \tilde s)}{(s - p^2 - i0)(\tilde s - \tilde p^2 - i0)}
= \frac{Z_B^2\, \langle \bar B|\bar qq|\bar B\rangle}{(m_B^2 - p^2)(m_B^2 - \tilde p^2)} + \dots \;, \qquad (3.3)
\]
with $(2\pi i)^2 \rho_\Pi(s, \tilde s) = {\rm disc}_{s,\tilde s}[\Pi(s, \tilde s)]$, and the matrix element is then given by
\[
\langle \bar B|\bar qq|\bar B\rangle = \frac{1}{Z_B^2} \int_{m_+^2}^{\bar\delta^{(a)}(m_+^2)} ds\, e^{\frac{m_B^2 - s}{M^2}} \int_{m_+^2}^{\bar\delta^{(a)}(s)} d\tilde s\, e^{\frac{m_B^2 - \tilde s}{M^2}}\, \rho_\Pi(s, \tilde s) \;, \qquad (3.4)
\]
with $\bar\delta^{(a)}$ defined in (2.6). The three contributions depicted in Fig. 2 are described below. Perturbation theory is given by
\[
\rho_\Pi(s, \tilde s) = \frac{m_+^2\, N_c\, m_q}{2\pi^2}\, \frac{s - (m_b - m_q)^2}{s}\, \frac{s + m_q^2 - m_b^2}{\lambda^{1/2}}\, \delta(\tilde s - s) \;, \qquad (3.5)
\]
with the anticipated $O(m_q)$-suppression. This term is negligible. The $\langle \bar qq\rangle$ condensate evaluates to
\[
\langle \bar B|\bar qq|\bar B\rangle = -\frac{4\, m_+^2\, m_b^2\, \langle \bar qq\rangle}{Z_B^2}\, e^{\frac{2(m_B^2 - m_b^2)}{M^2}} \;, \qquad (3.6)
\]
which is not suppressed by $O(m_q)$ and thus dominant.
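As a sanity check of the Borel weights in (3.4): for a pure-pole toy density $\rho_\Pi \propto \delta(s - m_B^2)\,\delta(\tilde s - m_B^2)$, the exponentials equal one on the pole, so the double integral must return the residue unchanged. A numeric sketch of ours (all numbers illustrative, with narrow Gaussians standing in for the delta functions):

```python
import math

mB2, M2, X = 27.9, 4.0, 6.0  # GeV^2, GeV^2; X plays the role of the pole residue
sigma = 0.01                 # width of the Gaussian stand-in for delta(s - mB^2)

def borel_weight(s):
    # exp((mB^2 - s)/M^2), the weight appearing in (3.4)
    return math.exp((mB2 - s) / M2)

def gauss(s):
    # normalised Gaussian approximating delta(s - mB^2)
    return math.exp(-(s - mB2)**2 / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

def weighted_norm():
    # midpoint-rule integral of borel_weight(s) * gauss(s)
    n, lo, hi = 4000, mB2 - 1.0, mB2 + 1.0
    h = (hi - lo) / n
    return h * sum(borel_weight(lo + (i + 0.5) * h) * gauss(lo + (i + 0.5) * h)
                   for i in range(n))

# The toy density factorises in s and stilde, so the double integral squares:
result = X * weighted_norm() ** 2
assert abs(result - X) < 1e-3  # weights ~ 1 on the pole: residue recovered
print(result)
```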
The mixed condensate yields
\[
\langle \bar B|\bar qq|\bar B\rangle = -\frac{m_+^2\, \langle \bar q\, \sigma\!\cdot\! g_s G\, q\rangle}{Z_B^2}\, e^{\frac{2(m_B^2 - m_b^2)}{M^2}} \left[ \left(1 - \frac{3 m_b^2}{M^2}\right) + \left(\frac{5}{8} + \frac{2 m_b^2}{M^2} - \frac{4 m_b^4}{M^4}\right) \right] \;, \qquad (3.7)
\]
which is not suppressed either, as it is in the same chirality representation as the quark condensate. The first and second term in round brackets are from the third and fourth diagram in Fig. 2. We consider it worthwhile to comment on how the lack of $m_q$-suppression in the condensate contribution arises. Its origin is the propagator $1/(r^2 - m_q^2 + i\epsilon)$ (we work in the $\vec r = 0$ frame)
\[
r^2 - m_q^2 + i\epsilon = \left(\sqrt{s} - (\sqrt{\tilde s} + m_q - i\epsilon')\right)\left(\sqrt{s} - (\sqrt{\tilde s} - m_q + i\epsilon')\right) \;, \qquad (3.8)
\]
which when cut gives a term of the form $\frac{\sqrt{s}}{m_q}\, \delta(s - (\sqrt{\tilde s} + m_q)^2)$. The $1/m_q$ thus removes the $O(m_q)$-suppression in the numerator.
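Dropping the $i\epsilon$ prescriptions, (3.8) is a pure algebraic identity once $r^2 = (\sqrt{s} - \sqrt{\tilde s})^2$ in the $\vec r = 0$ frame; a quick numeric spot-check of ours (sample values are arbitrary):

```python
from math import isclose, sqrt

def lhs(s, st, mq):
    # r^2 - mq^2 with r^2 = (sqrt(s) - sqrt(stilde))^2 in the vec r = 0 frame
    return (sqrt(s) - sqrt(st))**2 - mq**2

def rhs(s, st, mq):
    # factorised form of (3.8), without the i*eps terms
    return (sqrt(s) - (sqrt(st) + mq)) * (sqrt(s) - (sqrt(st) - mq))

for s, st, mq in [(30.0, 28.5, 0.005), (5.6, 4.9, 0.1), (1.2, 0.9, 0.05)]:
    assert isclose(lhs(s, st, mq), rhs(s, st, mq), rel_tol=1e-12)
print("(3.8) factorisation holds")
```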
Numerically, perturbation theory is entirely negligible, and this is also the reason for not including the gluon condensate, which is expected to be further suppressed, $O(\Lambda_{\rm QCD}^4/M^4)$, as compared to perturbation theory.

3.1.1 Numerics

The basic procedure for the numerics is the same as described in Sec. 2.1.1. However, the choice of scheme is not as important in this case. Any of the schemes, pole, kinetic and $\overline{\rm MS}$, give similar results and indicate stability. The situation is certainly clearer with respect to the $m_b$-mass itself, as the matrix element is $O(m_b)$ (3.1) and $\Delta m_B|_{m_q}$ itself is $O(m_b^0)$, whereas $\Delta m_B|_{\rm QED}$ is computed from a non-local correlation function where the $m_b$-dependence is more difficult to track. Since the perturbative contribution is suppressed, there is no $s_0$ dependence (there would be at NLO in $\alpha_s$). Hence we can fix the Borel value $M^2$ to satisfy the daughter sum rule (2.10), obtaining the following sum rule parameters
\[
\{s_0, \hat M^2\}_B = \{35.0,\, 4.0\}\,{\rm GeV}^2 \;, \qquad
\{s_0, \hat M^2\}_D = \{6.0,\, 0.75\}\,{\rm GeV}^2 \;, \qquad (3.9)
\]
and daughter sum rules
\[
U(s_0, \hat M^2 \pm 0.15\,{\rm GeV})_{\Delta m_B|_{m_q}} = 1.00^{+0.03}_{-0.02} \;, \qquad
U(s_0, \hat M^2 \pm 0.05\,{\rm GeV})_{\Delta m_D|_{m_q}} = 1.00^{+0.20}_{-0.12} \;. \qquad (3.10)
\]
Using the input parameters in Tab. 2 (with $m_b^{\rm kin}(1\,{\rm GeV})$, $\bar m_c(\bar m_c)$), the $f_{B,D}$ sum rule to LO (cf. App. B.1) and (3.9) we get
\[
\langle \bar B|\bar qq|\bar B\rangle_{\mu = 1\,{\rm GeV}} = 5.99^{+1.99}_{-1.41}\,{\rm GeV} \;, \qquad
\langle \bar D|\bar qq|\bar D\rangle_{\mu = \bar m_c} = 3.40^{+1.78}_{-1.71}\,{\rm GeV} \;, \qquad (3.11)
\]
for the matrix elements and
\[
\Delta m_B|_{m_q} = -1.88^{+0.49}_{-0.71}\,{\rm MeV} \;, \qquad
\Delta m_D|_{m_q} = +2.68^{+1.48}_{-1.38}\,{\rm MeV} \;, \qquad (3.12)
\]
for the mass differences. As this is a LO computation the errors are large, primarily coming from $M^2$, with a small contribution (20%) from the light quark masses. Note that the set value of $M^2$ is not independent of higher order $\alpha_s$ corrections. For the $D$-meson especially, the convergence of the sum rule is not good. This is reflected in the mixed condensate contributing a sizeable 20%-uncertainty.

3.2 SU(3)$_F$ estimates of $\langle \bar H|\bar qq|\bar H\rangle$ for $H = B, D$

Alternatively, one may use SU(3)$_F$ flavour symmetry, $\langle B|\bar qq|B\rangle \approx \langle B_s|\bar ss|B_s\rangle$, to estimate $\langle B|\bar qq|B\rangle$ [12]. Following this analysis one may write ($m_{ud} \equiv \tfrac12(m_u + m_d)$)
\[
(2 m_{B_s}^2 - m_{B^+}^2 - m_{B^0}^2) = 2(m_s - m_{ud})\, \langle B|\bar qq|B\rangle \;, \qquad (3.13)
\]
from which
\[
\langle B|\bar qq|B\rangle \approx \frac{m_{B_s}^2 - m_B^2}{m_s - m_{ud}} \;, \qquad (3.14)
\]
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content='14) follows.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Employing the input from the PDG [29] this leads to7 ∆mB|mq = −2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content='37+0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content='35 −0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content='43 ± 20%SU3 MeV , ∆mD|mq = +2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content='81+0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content='51 −0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content='41 ± 20%SU3 MeV .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' (3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content='16) We have added a characteristic 20% SU(3)F -violation due to the use of the ⟨B|¯qq|B⟩ ≈ ⟨Bs|¯ss|Bs⟩.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' The result are well compatible with (3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content='12) and we shall not use them any further.' 
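As a rough numerical cross-check of (3.14) and (3.16), one can combine the SU(3)_F estimate with a Feynman–Hellmann-type relation ∆m_B|_{m_q} ≈ (m_u − m_d)⟨B|q̄q|B⟩/(2m_B). Both the normalisation of the matrix element and the MS-bar quark-mass inputs at μ = 2 GeV below are assumptions of this sketch, not values taken from the text:

```python
# Rough cross-check of the SU(3)_F estimate (3.14) and the resulting
# Delta m_B|mq in (3.16). The Feynman-Hellmann normalisation
# <B|qq|B>/(2 m_B) and the PDG-like inputs below are assumptions.
m_Bs, m_B = 5.3669, 5.2795          # GeV, B_s and isospin-averaged B masses
m_s, m_ud = 0.0934, 0.003415        # GeV, MS-bar at mu = 2 GeV (assumed)
m_u, m_d = 0.00216, 0.00467         # GeV (assumed)

# (3.14): <B|qq|B> ~ (m_Bs^2 - m_B^2)/(m_s - m_ud), dimension GeV
qq_B = (m_Bs**2 - m_B**2) / (m_s - m_ud)

# Feynman-Hellmann-type estimate of the isospin mass difference (in MeV)
dm_B_mq = (m_u - m_d) * qq_B / (2 * m_B) * 1e3
print(f"<B|qq|B> ~ {qq_B:.1f} GeV, dm_B|mq ~ {dm_B_mq:.2f} MeV")
```

With these inputs one lands at roughly −2.5 MeV, in the same ballpark as (3.16); the residual spread reflects the quark-mass scale choice and the quoted 20% SU(3)_F uncertainty.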
Note that in the heavy quark limit we have ∆m_B|_{m_q} = −∆m_D|_{m_q}, since the c and b are up- and down-type quarks respectively. This heavy quark limit relation holds reasonably well, as already observed in [12] (with slightly different input).

3.3 Soft Goldstone estimate of ⟨L|q̄q|L⟩ for L = π, K

The matrix elements ⟨L|q̄q|L⟩, where L = π, K is a pseudo-Goldstone boson, may be estimated using soft-pion techniques, which in this case lead to the famous GMOR relation [31]. Concretely [32]

m_{π^{+,0}}^2 = (m_u + m_d) B_0 ,   m_{K^+}^2 = (m_u + m_s) B_0 ,   m_{K^0}^2 = (m_d + m_s) B_0 ,   (3.17)

which hold to first order in the quark masses, with no QED corrections, and the constant is B_0 = −2⟨q̄q⟩/f_π^2 ≈ 2.26 GeV at μ = 2 GeV. We see that for the pions there is no difference to linear order, which is a consequence of isospin [10]. The pion mass splitting is a ∆I = 2 isospin effect, since the relevant matrix element has two pion states whereas the quark masses themselves are of ∆I = 1. Hence it takes at least two powers of the quark mass difference. Fortunately, the latter follows in a straightforward manner from chiral perturbation theory and one obtains to LO

∆m_K|_{m_q} = (m_u − m_d)/(m_s − m_{ud}) · (m_K^2 − m_π^2)/(2m_K) = (m_u − m_d)/(2m_{ud}) · m_π^2/(2m_K) = −6.74^{+0.98}_{−1.21} MeV ,
∆m_π|_{m_q} = 1/16 · (m_d − m_u)/(m_s − m_{ud}) · (m_d − m_u)/m_{ud} · m_π = +0.16^{+0.06}_{−0.05} MeV ,   (3.18)

using the values from the PDG [29]. As expected the pion contribution is rather small, as a result of being second order in the quark mass difference. It is noteworthy that one obtains ∆m_K|_{m_q} ≈ −5.7 MeV when using (3.17) directly, which can be seen as an SU(3)_F correction that is well covered by the quoted uncertainty.

^7 Or taking the η → 3π analysis [30], which in this case makes a difference, results in ∆m_B|_{m_q} = −2.54^{+0.17}_{−0.18} ± 20%_{SU3} MeV , ∆m_D|_{m_q} = +3.01^{+0.21}_{−0.20} ± 20%_{SU3} MeV , (3.15) a more precise result.

4 Final Overview and Conclusions

In this paper we have computed the mass difference of the charged and neutral B-, D- and K-mesons. The results, which originate from electromagnetic and quark mass effects, are summarised and contrasted with experimental values in Tab. 1. The electromagnetic contribution is computed from the second order formula (1.3) in Sec. 2 and may be regarded as the core part of this paper. ∆m_π|_{QED} is taken from a soft-pion theorem (cf. App. D.2) for completeness and comparison. Quark mass effects are obtained from the Feynman–Hellmann formula (1.7), and its corresponding matrix element is computed in Sec. 3.1 for the B and the D respectively, whereas for the K and the π a soft theorem turns out to be more reliable. The results obtained are consistent with the current experimental values. The uncertainties are above 20%, and indeed more cannot be expected from a double dispersion sum rule at leading order in the strong coupling constant. Experimental uncertainties are one or two orders of magnitude lower. The values in Tab. 1 deserve some comments, as they are not easily guessed by rules of thumb by a practitioner in non-perturbative QCD. The parametric estimate ∆m_H|_{QED} = c Q_H^{eff} (α/π) Λ_QCD, with Λ_QCD = 200 MeV and Q_D^{eff} = 2Q_{B,K}^{eff} = 2/3, leads to c ≈ 10-20, which is a rather large number. To put this into perspective, one should keep in mind that these kinds of estimates are not straightforward, as the mass difference is obtained from a non-local (long-distance) correlation function (1.3). The scale for the quark mass effect is of course set by m_u − m_d ≈ 2.5 MeV, and its sign depends on whether the non-(u, d) quark is of the up (charm) or down (beauty, strange) type. The cancellation, to almost an order of magnitude, of the electric and the quark mass contributions for the B-meson is remarkable, leading to an inflated uncertainty in ∆m_B.
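The quark-mass scale quoted above propagates to the LO chiral formulas (3.18) in a way that is easy to evaluate; a minimal numerical sketch, assuming PDG-like light-quark masses at 2 GeV (the precise inputs used in the paper are not reproduced here):

```python
# Numerical evaluation of the LO chiral formulas (3.18) for the K and pi
# isospin mass differences. Quark-mass inputs are assumed PDG-like values.
m_u, m_d = 2.16, 4.67               # MeV (assumed, MS-bar at 2 GeV)
m_ud = 0.5 * (m_u + m_d)
m_s = 93.4                          # MeV (assumed)
m_K, m_pi = 495.6, 137.3            # MeV, isospin-averaged meson masses

# Delta m_K|mq = (m_u - m_d)/(2 m_ud) * m_pi^2/(2 m_K)
dm_K = (m_u - m_d) / (2 * m_ud) * m_pi**2 / (2 * m_K)

# Delta m_pi|mq = 1/16 * (m_d - m_u)/(m_s - m_ud) * (m_d - m_u)/m_ud * m_pi
dm_pi = (m_d - m_u) / (m_s - m_ud) * (m_d - m_u) / m_ud * m_pi / 16
print(f"dm_K|mq ~ {dm_K:.2f} MeV, dm_pi|mq ~ {dm_pi:.2f} MeV")
```

With these assumed inputs both numbers fall inside the ranges quoted in (3.18), ∆m_K|_{m_q} near −7 MeV and ∆m_π|_{m_q} near +0.18 MeV.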
The main aim of this paper was to show that it is possible to understand the isospin mass difference from QCD sum rules, that is, to obtain values compatible with experiment.

H  |  ∆m_H|_{QED}      |  ∆m_H|_{m_q}       |  ∆m_H           |  ∆m_H|_{PDG} [29]
B  |  +1.58(24) MeV    |  −1.88(60) MeV ^a  |  −0.30(65) MeV  |  −0.32(5) MeV
D  |  +2.25(70) MeV    |  +2.7(1.4) MeV ^a  |  +4.9(1.6) MeV  |  +4.822(15) MeV
K  |  +1.85(54) MeV    |  −6.7(1.1) MeV ^b  |  −4.9(1.2) MeV  |  −3.934(20) MeV
π  |  +4.8(1.2) MeV ^c |  +0.16(5) MeV ^b   |  +5.0(1.2) MeV  |  +4.5936(5) MeV

Table 1. Our values of ∆m_H due to the electromagnetic mass difference and the quark masses, compared to the PDG values. The entries marked with ^a are obtained from the ⟨H|q̄q|H⟩ matrix element in conjunction with the Feynman–Hellmann theorem (valid to LO in m_q). The values in italics should not be regarded as predictions of this work. E.g. ^b is derived from the soft theorem for (pseudo-)Goldstone bosons (cf. Sec. 3.3) and ^c results from the soft theorem in conjunction with the Weinberg sum rules (cf. App. D.2).
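The totals in Tab. 1 are consistent with adding the two contributions linearly and their uncertainties in quadrature; a quick sketch checking this (the combination rule is inferred here, not stated explicitly in the text):

```python
# Check that the Delta m_H column of Tab. 1 follows from the QED and
# quark-mass columns, with uncertainties combined in quadrature (an
# inferred combination rule, not stated explicitly in the text).
rows = {                            # H: (QED, err, mq, err, total, err) in MeV
    "B":  (+1.58, 0.24, -1.88, 0.60, -0.30, 0.65),
    "D":  (+2.25, 0.70, +2.7,  1.4,  +4.9,  1.6),
    "K":  (+1.85, 0.54, -6.7,  1.1,  -4.9,  1.2),
    "pi": (+4.8,  1.2,  +0.16, 0.05, +5.0,  1.2),
}
for H, (qed, e1, mq, e2, tot, err) in rows.items():
    central = qed + mq
    quad = (e1**2 + e2**2) ** 0.5
    # Agreement up to the rounding of the table entries
    assert abs(central - tot) <= 0.1 and abs(quad - err) <= 0.05
    print(f"{H}: {central:+.2f}({quad:.2f}) MeV vs quoted {tot:+.2f}({err:.2f})")
```

All four rows reproduce the quoted totals and uncertainties up to the rounding of the table entries.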
It is noteworthy that ∆m_π|_{m_q} = O((m_u − m_d)^2), which explains its smallness. For comparison, some lattice values are ∆m_D = 5.47(53) MeV and ∆m_K = −4.07(15)(15) MeV [4], and ∆m_D = 4.68(10)(13) MeV [3], which are of course more precise, as the lattice is suited for mass determination, even in the presence of QED, and due to the full inclusion of QCD. The sum rule computation could be improved by including radiative corrections in the strong coupling constant, which would be a formidable task. Perhaps more interestingly, the formalism developed in this paper could be applied to baryons to obtain the proton-neutron mass difference for instance.

Acknowledgments

RZ is supported by a CERN associateship and an STFC Consolidated Grant, ST/P0000630/1.
We are grateful to Michele Della Morte, Antonin Portelli and Max Hanson for informative comments on the lattice literature.

A Variants of Quark-Hadron Duality

In this appendix we elaborate on variations of quark-hadron duality. This is best explained by example. Consider the axial correlator in connection with the K

Π_{αβ} = i ∫ d^4x e^{ipx} ⟨0|T A_α^†(x) A_β(0)|0⟩ = p_α p_β Π(p^2) + g_{αβ} Π̂(p^2) ,   (A.1)

with A_β defined in (2.13). The Kaon appears in the first structure

Π(p^2) = f_K^2/(m_K^2 − p^2) + ... ,   (A.2)

where the dots stand for higher states as usual. QCD sum rules consist of two steps. Firstly, the observation that

Π(p^2) ≈ Π(p^2)_{pQCD} ,   (A.3)

for some p^2 outside the physical region (could be p^2 < 0), where pQCD stands for perturbative QCD with OPE improvements. In a second step one rewrites Eq. (A.3) as a dispersion relation followed by a Borel transform, under which (s − p^2)^{−1} → exp(−s/M^2) (M^2 is the Borel parameter), which results in

∫_0^∞ e^{−s/M^2} ρ(s) ≈ ∫_0^∞ e^{−s/M^2} ρ_{pQCD}(s) ,   (A.4)

with ρ(s) = (1/2πi) disc_s Π(s) = f_K^2 δ(s − m_K^2) + ...
and the pQCD part is defined analogously. The one assumption is then that this integral can be broken up as follows

∫_0^{s_0} e^{−s/M^2} ρ(s) ≈ ∫_0^{s_0} e^{−s/M^2} ρ_{pQCD}(s) ,   (A.5)

and (A.5) is sometimes referred to as semi-global quark hadron duality [33]. One way to determine s_0 is to impose the daughter sum rule (2.1), and then for consistency with the duality assumption s_0 ought to be somewhere between (m_K + 2m_π)^2 and (m_K + 4m_π)^2. We want to briefly contemplate for which types of weight functions ω(s) the analogue of (A.5),

∫_0^{s_0} e^{−s/M^2} ρ(s) ω(s) ≈ ∫_0^{s_0} e^{−s/M^2} ρ_{pQCD}(s) ω(s) ,   (A.6)

with corresponding (2.1)

m_B^2 = ∫_{cut}^{s_0} e^{−s/M^2} ρ_{pQCD}(s) ω(s) s ds / ( ∫_{cut}^{s_0} e^{−s/M^2} ρ_{pQCD}(s) ω(s) ds ) ,   (A.7)

can hold. The crucial point is to be able to justify the analogue of Eq. (A.3).

A.1 Weight function ω(s) = s

We might start by rewriting the p_α p_β-part in (A.1) as follows

p_α p_β Π(p^2) = (p_α p_β / p^2) (p^2 Π(p^2)) .   (A.8)

For the pQCD part one may directly write ρ_{pQCD}(s) → s ρ_{pQCD}(s), since p^2 does not lead to new singularities. Using (A.2), the QCD part can be written as

p^2 Π(p^2) = p^2 f_K^2/(m_K^2 − p^2) + ... = −f_K^2 + m_K^2 f_K^2/(m_K^2 − p^2) + ... ,   (A.9)

where −f_K^2 is a constant that will disappear under Borel transformation, and thus ρ(s) → s ρ(s) works the very same way. The analogue of (A.3) can be justified in this case by replacing A_α^†(x) → −∂^2 A_α^†(x) in (A.1).^8 Weight functions of polynomials are generally referred to as moments and are familiar to the community, e.g. the moments in b → cℓν [34].
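The claim that the constant −f_K^2 in (A.9) drops out while the pole term survives can be illustrated with the textbook SVZ-style Borel operator, B̂f = lim_{n→∞, Q^2 = nM^2} (Q^2)^n/(n−1)! (−d/dQ^2)^n f(Q^2) with Q^2 = −p^2: a constant has vanishing derivatives and is mapped to zero, while 1/(s + Q^2) goes to e^{−s/M^2}/M^2. A finite-n numerical sketch (the definition used here is the standard one and is assumed to match the paper's conventions):

```python
# Finite-n illustration of the Borel transform
#   B f = lim_{n->inf, Q^2 = n M^2} (Q^2)^n/(n-1)! (-d/dQ^2)^n f(Q^2)
# (textbook SVZ-style definition, assumed here). For f = 1/(s + Q^2) one
# gets exp(-s/M^2)/M^2, while a constant (like -f_K^2 in (A.9)) has
# vanishing derivatives and is mapped to zero.
import math

def borel_pole(s, M2, n):
    """Finite-n Borel transform of 1/(s + Q^2), evaluated at Q^2 = n*M2."""
    # (Q^2)^n/(n-1)! * n!/(s + Q^2)^(n+1) = n (Q^2)^n/(s + Q^2)^(n+1);
    # evaluated in logs to avoid overflow at large n
    Q2 = n * M2
    log_val = math.log(n) + n * math.log(Q2) - (n + 1) * math.log(s + Q2)
    return math.exp(log_val)

s, M2 = 1.0, 1.0
approx = borel_pole(s, M2, n=200000)
exact = math.exp(-s / M2) / M2
print(f"finite-n: {approx:.6f}, limit: {exact:.6f}")
```

At n = 200000 the finite-n value agrees with e^{−s/M^2}/M^2 to better than one part in 10^4, while any s-independent constant is annihilated exactly by the n-fold derivative.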
It is quite clear that one cannot take arbitrarily high powers of moments, as then duality will be challenged since smoothness is lost.

⁸ In our case this is not trivial as A†_α is not QED gauge invariant, but it can still be used at LO. In the general case this requires more thought.

A.2 Weight function ω(s) = 1/(s − η)

Choosing a weight function

ω(s) = 1/(s − η) ,   (A.10)

is equivalent to working with a subtracted dispersion relation of the form

(Π(p²) − Π(η)) / (p² − η) = ∫ ds ρ(s) / ((s − p²)(s − η)) + c ,   (A.11)

where c = −∫ ds ρ_A(s)/(s(s − η)) + Π′(η) is a subtraction constant such that the limit p² → 0 comes out correctly. The constant c is though not important in the end as it vanishes under Borel transformation. The question of whether one can use (A.10) then turns into the question of whether the left-hand side can be computed reliably. In our application to Kaons we have chosen η = 0, which is close to but still below the Kaon resonance. We have checked that for the f_K sum rule with s₀ = 0.7 GeV² the agreement is reasonable, and this serves at least as a partial justification of the procedure in Sec. 2.2.

B Numerical Input

The numerical QCD input is summarised in Tab. 2, and below we give the numerical values of the decay constants from the sum rule, which are the effective LSZ factors.

B.1 Decay constants f_B, f_D and f_K

The extraction of both the QED mass shifts and the linear quark mass corrections requires values for the decay constants f_B, f_D and f_K. Note that, for consistency with the rest of this paper, these are evaluated at LO in QCD. The LO expressions for the pseudoscalar (B, D) and axial (K) correlators are well known (e.g. [38, 39]). The following values

f_B = 0.157 GeV ,   {s₀, M²} = {33.5, 6.0} GeV² ,
f_D = 0.158 GeV ,   {s₀, M²} = {5.7, 2.0} GeV² ,
f_K = 0.147 GeV ,   {s₀, M²} = {1.1, 1.5} GeV² ,   (B.1)

are obtained.

C Self Energies and Condensates for ∆m_H|_QED

In this appendix we present some extra computations: the self energies and condensate contributions to ∆m_B|_QED. These are important for stabilising the sum rules but do not affect the actual value of ∆m_B|_QED per se. This is the case since graphs proportional to Q_b² are cancelled in the mass difference. The only non-zero graph contributing to the mass shift is the q-q self energy, but it is numerically negligible.
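The light-quark entries of Tab. 2 can also be cross-checked against each other. The following short script (illustrative only; central values in MeV transcribed from the table, and each asymmetric error conservatively reduced to its smaller side) verifies that the derived combinations reproduce the directly quoted ratios:

```python
# Cross-check of the light-quark mass entries of Tab. 2 (MS-bar masses at 2 GeV, MeV).
ms, md, mu, mud = 93.4, 4.67, 2.16, 3.45
ratio_u_over_d = 0.474    # quoted m_u / m_d
ratio_s_over_ud = 27.33   # quoted m_s / m_ud

# (derived value, quoted value, quoted uncertainty on the smaller side)
checks = {
    "m_ud = (m_u + m_d)/2": (0.5 * (mu + md), mud, 0.15),
    "m_u / m_d": (mu / md, ratio_u_over_d, 0.056),
    "m_s / m_ud": (ms / mud, ratio_s_over_ud, 0.67),
}
for name, (derived, quoted, err) in checks.items():
    assert abs(derived - quoted) <= err, name
```

All three derived combinations agree with the directly quoted values within the quoted uncertainties, as one expects for determinations of this kind.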
We wish to note that in all these graphs explicit gauge independence has been verified to hold after the double-cut is taken.

J^P = 0⁻ meson masses [29]:
  m_B = 5.280 GeV   m_Bs = 5.367 GeV   m_D = 1.867 GeV   m_Ds = 1.968 GeV   m_K = 0.496 GeV   m_π = 0.137 GeV

J^P = 0⁻ mass differences [29]:
  ∆m_B = −0.32(5) MeV   ∆m_D = +4.822(15) MeV   ∆m_K = −3.934(20) MeV   ∆m_π = +4.5936(5) MeV

Quark masses [29]:
  m̄_b(m_b) = 4.18(+0.03,−0.02) GeV   m̄_c(m_c) = 1.27(2) GeV   m_b^pole = 4.78(6) GeV   m_c^pole = 1.67(7) GeV
  m_b^kin|_{1 GeV} = 4.53(6) GeV   m_c^kin|_{1 GeV} = 1.13(5) GeV
  m̄_s|_{2 GeV} = 93.4(+8.6,−3.4) MeV   m̄_d|_{2 GeV} = 4.67(+0.48,−0.17) MeV   m̄_u|_{2 GeV} = 2.16(+0.49,−0.26) MeV
  m̄_ud|_{2 GeV} = 3.45(+0.35,−0.15) MeV   m̄_u/m̄_d = 0.474(+0.056,−0.074)   m̄_s/m̄_ud = 27.33(+0.67,−0.77)

Condensates:
  ⟨q̄q⟩|_{2 GeV} = −(269(2) MeV)³ [35]   ⟨s̄s⟩|_{2 GeV} = 1.08(16) ⟨q̄q⟩ [36]   m₀² = 0.8(2) GeV² [37]   ⟨0|(α/π)G²|0⟩ = 0.012(4) GeV⁴ [21]

Table 2. Summary of input parameters. Note that as inputs into the sum rules we use m_H = m_{H⁻}, which has a completely negligible impact. The quantity m_ud ≡ ½(m_u + m_d) is the light quark average. The mixed condensate is parameterised as ⟨q̄ σ·g_s G q⟩ = m₀² ⟨q̄q⟩, as is standard in the literature.

C.1 Perturbation theory

The perturbative b-b self energy graph, after mass renormalisation, takes on the form

ρ_{Γbb}(s, s̃) = (N_c m₊² Q_b² α / (32π³ m_B)) λ^{1/2} · (s − m₋²)/(s + m₊ m₋) f_R(m_b²) δ(s̃ − s) ,   (C.1)

with the renormalised f_R⁹

f_R(m²) = f(m²) + (32π² m²/e²) δZ_m = { 2m² (4 + 3 ln(μ²/m²)) , MS-bar ;  0 , pole ;  2m² (16μ/(3m) + 2μ²/m²) , kinetic } ,   (C.2)

f(m²) = 4m² B₀(m², 0, m²) + (d − 2) A₀(m²) .   (C.3)

The functions A₀ and B₀ are the standard Passarino-Veltman functions with (FeynCalc) normalisation (2πμ)^{2ϵ} ∫ d^d k /(iπ²). Explicitly these are

B₀(m², 0, m²) = 1/ϵ̂ + 2 + log(μ²/m²) ,   A₀(m²) = m² (1/ϵ̂ + 1 + log(μ²/m²)) ,   (C.4)

with 1/ϵ̂ = 1/ϵ − γ_E + log 4π. The q-q graph can be obtained by replacing b → q in the result, and since it is O(m_q²) it is negligible.

⁹ Note that the vanishing in the pole scheme is clear, by the very definition of the scheme, since we are on-shell after the cuts.

C.2 Condensates

The only relevant condensate graph is given in Fig. 1 (4th diagram). With m_q → 0 the density is

ρ^{⟨q̄q⟩}_{Γbb} = −(m_b² α Q_b² / (8π m_B)) m_b ⟨q̄q⟩ δ(s − m_b²) δ(s̃ − m_b²) f_R(m_b²) .   (C.5)

Light quark mass corrections come from Taylor expanding the quark fields, leading to derivatives of δ-functions. It is thus more convenient to directly display the resulting mass shift

∆m_B|_{⟨q̄q⟩} = −(m₊² α Q_b² / (8π m_B Z_B²)) e^{2(m_B² − m_b²)/M²} ⟨q̄q⟩ [ m_b − (m_q/4)(1 + 4m_b²/M²) ] f_R(m_b²) .   (C.6)

The ⟨q̄q⟩ condensate graph where the photon connects the b and the q-quark is not of short-distance type (it leads to 1/m_q² in the propagator) and is therefore omitted. This is similar to the B → γ form factor, although in that case the physics is covered by the photon distribution amplitude (e.g. [28]).

D Some Classic Results

In this appendix we summarise some classic results which are of use and referred to in the paper.

D.1 Linear quark mass dependence from the Feynman-Hellmann theorem

In order to derive the Feynman-Hellmann theorem it is convenient to use states ⟨B̂(p)|B̂(q)⟩ = (2π)³ δ⁽³⁾(p⃗ − q⃗) normalised in a non-relativistic manner (the translation to the usual states is |B̂⟩ = |B⟩/√(2E_B)). Taking the derivative of ⟨B̂|H|B̂⟩ (using ∂_{m_q}⟨B̂(p)|B̂(q)⟩ = 0) one obtains

m_q ∂_{m_q} E_B = m_q ⟨B̂|q̄q|B̂⟩ ,   (D.1)

which is equivalent to

m_q ∂_{m_q} E_B² = 2 m_q ⟨B|q̄q|B⟩ ,   (D.2)

which in turn is consistent with

m_B²|_{m_q} = Σ_q m_q ⟨B|q̄q|B⟩ ,   (D.3)

since the momenta are independent of the mass. This is the relation quoted in (1.6) in the main text.

D.2 ∆m_π|_QED from the soft theorem and Weinberg sum rules

Using soft-pion techniques it was shown that [2]

∆m_π|_QED = (3α / (8π m_π f_π²)) ∫₀^∞ ds s ln(μ²/s) (ρ_V(s) − ρ_A(s)) + O(m_π²/m_ρ²) ,   (D.4)

where ρ_V = f_ρ δ(s − m_ρ²) + · · · is the spectral density of the vector triplet current and ρ_A is the analogous quantity for the axial case. The ln s-term originates from integrating over the photon momentum d⁴q. We refer the reader to [10] for an improved treatment using chiral perturbation theory. In fact, as is the case for all soft-pion results, Eq. (D.4) follows from the LO electromagnetic term in the Lagrangian and can therefore be systematically improved beyond the soft limit to the extent that its low energy constants (i.e. couplings) are known. Using the Weinberg sum rules [40], which are phenomenologically successful, a good estimate was obtained [2]. Taking the equations resulting from the so-called first and second Weinberg sum rules in [41], then

f_ρ² = f_{a₁}² + f_π² ,   m_ρ² f_ρ² = m_{a₁}² f_{a₁}² ,   (D.5)

(where the chiral limit m_q = 0 is assumed). Moreover, the spectral functions are truncated after the first vector meson resonances ρ and a₁, which can be justified as the chiral symmetry is restored at high energy. Using these expressions in (D.4) one gets

∆m_π|_QED = (3α/8π) (m_ρ² f_ρ² / (m_π² f_π²)) m_π ln( f_ρ² / (f_ρ² − f_π²) ) ≈ 4.8 MeV ,   (D.6)

for f_π = 131 MeV, m_ρ = 0.77 GeV [29] and f_ρ = 215 MeV [42]. Since the quark mass effect is small, O((m_u − m_d)²) (3.18), one has ∆m_π ≈ ∆m_π|_QED, which is rather close to the experimental value ∆m_π = +4.5936(5) MeV [29]. Clearly (D.6) is a crude approximation, as more detailed analyses [10, 43] including finite width effects yield a result which is ca. +1.2 MeV larger [43]. We therefore assign an uncertainty of this amount to ∆m_π|_QED in Tab. 1.

It is also worthwhile to mention two other interesting aspects in conjunction with ∆m_π|_QED. First, by using QCD inequalities it has been shown that ∆m_π|_QED ≥ 0 [44], which is of course well satisfied. Second, Dashen's theorem [45] states that ∆m_π²|_QED − ∆m_K²|_QED = O(α m_s, α m_q ln m_q) as a result of degeneracy in the SU(3)_F limit m_s = m_d = m_u. The corrections seem rather large and are largely kinematic, due to the larger K mass in the Kaon propagator [46]. Lattice Monte Carlo simulations have settled this matter to large precision [47] (cf. [48] for a review).

References

[1] A. Zee, "The Proton - neutron mass difference problem and related topics," Phys. Rept. 3 (1972) 127–192.
[2] T. Das, G. S. Guralnik, V. S. Mathur, F. E. Low, and J. E. Young, "Electromagnetic mass difference of pions," Phys. Rev. Lett. 18 (1967) 759–761.
[3] S. Borsanyi et al., "Ab initio calculation of the neutron-proton mass difference," Science 347 (2015) 1452–1455, arXiv:1406.4088 [hep-lat].
[4] D. Giusti, V. Lubicz, C. Tarantino, G. Martinelli, F. Sanfilippo, S. Simula, and N. Tantalo, "Leading isospin-breaking corrections to pion, kaon and charmed-meson masses with Twisted-Mass fermions," Phys. Rev. D 95 no. 11, (2017) 114504, arXiv:1704.06561 [hep-lat].
[5] I. I. Bigi and A. I. Sanda, CP violation, vol. 9. Cambridge University Press, 9, 2009.
[6] G. C. Branco, L. Lavoura, and J. P. Silva, CP Violation, vol. 103. 1999.
[7] R. P. Feynman and G. Speisman, "Proton-Neutron Mass Difference," Phys. Rev.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' 94 no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' 2, (1954) 500.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' [8] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Cini, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Ferrari, and R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Gatto, “Neutron-Proton Mass Difference by Dispersion Theory,” Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' 2 no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' 1, (1959) 7–9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' [9] W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' N.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Cottingham, “The neutron proton mass difference and electron scattering experiments,” Annals Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' 25 (1963) 424–432.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' [10] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Donoghue and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Perez, “The Electromagnetic mass differences of pions and kaons,” Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' D 55 (1997) 7075–7092, arXiv:hep-ph/9611331.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' [11] W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Bardeen, J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Bijnens, and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Gerard, “Hadronic Matrix Elements and the pi+ pi0 Mass Difference,” Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' 62 (1989) 1343.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' [12] P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Colangelo, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Ladisa, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Nardulli, and T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Pham, “Electromagnetic mass difference of heavy mesons,” Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Lett.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' B 416 (1998) 208–215, arXiv:hep-ph/9709201.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' [13] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Luty and R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Sundrum, “Heavy meson electromagnetic mass differences from QCD,” Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' D 52 (1995) 1627–1638, arXiv:hep-ph/9502259.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' [14] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Walker-Loud, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Carlson, and G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' A.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Miller, “The Electromagnetic Self-Energy Contribution to Mp − Mn and the Isovector Nucleon MagneticPolarizability,” Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' 108 (2012) 232301, arXiv:1203.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content='0254 [nucl-th].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' [15] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Hambye, “A Unified treatment of mass differences for light and heavy pseudoscalars,” Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' B 319 (1993) 300–306.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' [16] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Collins, “Renormalization of the Cottingham Formula,” Nucl.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' B 149 (1979) 90–100.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' [Erratum: Nucl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content='Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content='B 153, 546 (1979), Erratum: Nucl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content='Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content='B 915, 392–393 (2017)].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' [17] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Gasser, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Hoferichter, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Leutwyler, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Rusetsky, “Cottingham formula and nucleon polarisabilities,” Eur.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Phys.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' C 75 no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' 8, (2015) 375, arXiv:1506.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content='06747 [hep-ph].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' [Erratum: Eur.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content='Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content='J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content='C 80, 353 (2020)].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' [18] X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Feng, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Jin, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Riberdy, “Lattice QCD Calculation of the Pion Mass Splitting,” Phys.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' 128 no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' 5, (2022) 052003, arXiv:2108.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content='05311 [hep-lat].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' [19] R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Zwicky, “QED-Corrections to Weak Decays,” Symmetry 13 no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' 11, (2021) 2036, arXiv:2205.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content='06194 [hep-ph].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' [20] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Nabeebaccus and R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Zwicky, “Resolving charged hadrons in QED — gauge invariant interpolating operators,” JHEP 11 (2022) 101, arXiv:2209.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content='06925 [hep-ph].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' [21] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Shifman, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Vainshtein, and V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Zakharov, “QCD and Resonance Physics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Theoretical Foundations,” Nucl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' B147 (1979) 385–447.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' [22] V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' A.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Novikov, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Shifman, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Vainshtein, and V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Zakharov, “Are All Hadrons Alike?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' ,” Nucl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' B 191 (1981) 301–369.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' [23] E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Shuryak, “Pseudoscalar Mesons and Instantons,” Nucl.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' B 214 (1983) 237–252.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' [24] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Balitsky, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Braun, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Kolesnichenko, “The decay Sigma+ —> p gamma in QCD: Bilocal corrections in a variable magnetic field and the photon wave functions,” Sov.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Nucl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' 48 (1988) 348–357.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' [25] B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Pullin and R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Zwicky, “Radiative Decays of Heavy-light Mesons and the f (T ) H,H∗,H1 Decay Constants,” arXiv:2106.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content='13617 [hep-ph].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' – 18 – [26] V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Nesterenko and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Radyushkin, “Sum Rules and Pion Form-Factor in QCD,” Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' B 115 (1982) 410.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' [27] M.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Kirk, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Lenz, and T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Rauh, “Dimension-six matrix elements for meson mixing and lifetimes from sum rules,” JHEP 12 (2017) 068, arXiv:1711.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content='02100 [hep-ph].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' [Erratum: JHEP 06, 162 (2020)].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' [28] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Janowski, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Pullin, and R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' Zwicky, “Charged and neutral Bu,d,s → γ form factors from light cone sum rules at NLO,” JHEP 12 (2021) 008, arXiv:2106.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content='13616 [hep-ph].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/bdE4T4oBgHgl3EQfPQyn/content/2301.04972v1.pdf'} +page_content=' [29] Particle Data Group Collaboration, P.' 
diff --git a/ddE2T4oBgHgl3EQfGAap/content/tmp_files/2301.03653v1.pdf.txt b/ddE2T4oBgHgl3EQfGAap/content/tmp_files/2301.03653v1.pdf.txt new file mode 100644 index 0000000000000000000000000000000000000000..1ddb4d91e528deefd70dc57d2703861db91aa29c --- /dev/null +++
b/ddE2T4oBgHgl3EQfGAap/content/tmp_files/2301.03653v1.pdf.txt @@ -0,0 +1,949 @@

A Quantum Mechanical Description of Photosensitization in Photodynamic Therapy using a Two-Electron Molecule Approximation

Vincent M. Rossi
Washburn University Department of Physics & Astronomy, Topeka, KS 66621
vincent.rossi@washburn.edu

ABSTRACT
A fundamental, quantum mechanical description of the photoactivation of a generic photosensitizer (PS) and the ensuing transfer of energy to endogenous oxygen as part of the Type II pathway to photodamage during photodynamic therapy (PDT) is presented. The PS and molecular oxygen are approximated as two-electron molecules. Conservation of energy and of the angular momenta of the two-molecule system is maintained via selection rules throughout the four-stage process, comprising the initial states, absorption of a photon by the PS, conversion of the PS to an excited spin triplet via intersystem crossing (ISC), and the transition of molecular oxygen to an excited spin singlet state via a Triplet-Triplet Exchange of electrons with the PS. This description of photosensitization provides students and researchers with a fundamental introduction to PDT, while offering the broader population of quantum mechanics and physical chemistry students an advanced example of quantum systems in an applied, medical context.

Keywords: Photosensitization, Photodynamic Therapy (PDT), photochemistry, Dexter Exchange, Triplet-Triplet Exchange

INTRODUCTION
Photodynamic therapy (PDT) is a localized and selective therapy that operates on principles falling under the generic classifications of photobiology, photochemistry, and photophysics (Jacques 1992; Henderson and Dougherty 1992; Hamblin and Mroz 2008; Bonnett 2000; Hasan, Moore and Ortel 2000). While PDT has found its broadest application and research as a cancer therapy, it has also been used as an antimicrobial therapy for combating antibiotic-resistant strains (Wainwright 1998).
Three ingredients are required for PDT in order to induce photochemical damage to its targets: a photosensitizer (PS), light, and oxygen. In short, the PS is administered to the patient and, after an appropriate time interval, the targeted site is illuminated with light of a wavelength appropriate for absorption by the PS. Upon excitation by light of appropriate energy, the excited PS interacts with endogenous molecular oxygen to create reactive oxygen species (ROS). The interactions between the excited PS and endogenous molecular oxygen that generate ROS have been recognized and developed over some time (Kautsky 1939; Keszthelyi et al. 1999). These ROS then interact with their immediate environment, creating oxidative damage. Targeted cancer cells or bacteria are eliminated once they reach a threshold of damage via ROS (Nilsson, Merkel, and Kearns 1972; Schmidt and Bodesheim 1998).

Absorbed photons transfer discrete energies to the PS, raising it from the singlet ground state (1PS) to an excited singlet state (1PS*),

1PS + hν → 1PS*,    (1)

where the product of Planck's constant (h) and the frequency of the light absorbed (ν) represents the addition of energy via absorption (Fig. 1). The PS may then fluoresce back to its ground state.

Preferably, the PS in its excited singlet state will instead transition to its excited triplet state (3PS*) through Intersystem Crossing (ISC),

1PS* → 3PS*.    (2)

Figure 1. The process leading to the preferred Type II path to photodamage starts when the PS is excited by incident light of energy hν. The PS then relaxes via ISC to an excited triplet state, whereby it can transfer energy to molecular oxygen via a triplet-triplet electron transfer.

Once in the excited triplet state, the photosensitizer may then decay back to its ground state through one of two mechanisms.
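For readers concerned with light delivery, the excitation energy hν in Eq. (1) is straightforward to evaluate numerically. A minimal Python sketch (the 630 nm wavelength is an illustrative choice of red light often used in PDT, not a value taken from this paper):

```python
# Photon energy E = h*nu = h*c/lambda, converted to electron volts.
h = 6.62607015e-34    # Planck's constant (J s)
c = 2.99792458e8      # speed of light (m/s)
eV = 1.602176634e-19  # joules per electron volt

def photon_energy_eV(wavelength_m):
    """Energy of a single photon of the given wavelength, in eV."""
    return h * c / wavelength_m / eV

# Illustrative: red light near 630 nm delivers photons of roughly 2 eV to the PS.
print(round(photon_energy_eV(630e-9), 2))  # 1.97
```

Shorter wavelengths carry more energy per photon, which is one constraint when matching a light source to the absorption band of a given PS.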
The first of these, called the Type I pathway to photodamage in PDT, involves the PS in its excited triplet state interacting with its surroundings, thereby losing energy and creating free radicals. The resulting free radicals may then react with endogenous oxygen to form cytotoxic species such as the hydroxyl radical OH• (Jacques 1992; Wainwright 1998; Ochsner 1997; Peavy 2002; Prasad 2003; Mata et al. 2006).

The Type II pathway to photodamage in PDT entails a direct interaction between the PS in its excited triplet state and endogenous molecular oxygen in its triplet ground state (3O2). Such interactions, termed a Triplet-Triplet Exchange, can also cause the PS to decay back to its singlet ground state, in turn raising the molecular oxygen to an excited singlet state (1O2*),

3PS* + 3O2 → 1PS + 1O2*.    (3)

The excited singlet state of molecular oxygen can then cause damage to its surroundings (Nilsson, Merkel, and Kearns 1972; Schmidt and Bodesheim 1998). Due to the long lifetime of the excited triplet PS, sufficient time is allowed for interactions with endogenous oxygen. For this reason, the Type II pathway is generally accepted as the most common pathway to photodamage in PDT (Jacques 1992; Henderson and Dougherty 1992; Hamblin and Mroz 2008; Wainwright 1998; Ochsner 1997; Peavy 2002; Prasad 2003; Mata et al. 2006).

The above introduction to PDT is given in the typical fashion found in biological or medical descriptions of PDT (Kearns and Khan 1969). The remainder of this paper is concerned with giving a more rigorous, quantum mechanical explanation of the process of photosensitization in PDT.
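The selection-rule bookkeeping behind Eq. (3) can be made concrete with the standard rule for adding two angular momenta, S_total = |S1 − S2|, ..., S1 + S2. A minimal sketch (the helper function is our own illustration, not notation from this paper):

```python
def allowed_total_spins(s1, s2):
    """Total spin quantum numbers allowed when coupling spins s1 and s2."""
    spins = []
    s = abs(s1 - s2)
    while s <= s1 + s2 + 1e-9:
        spins.append(s)
        s += 1
    return spins

# Reactants of Eq. (3): triplet PS (S=1) plus triplet O2 (S=1).
print(allowed_total_spins(1, 1))  # [0, 1, 2] -> an S_total = 0 channel exists
# Products: singlet PS (S=0) plus singlet O2 (S=0) also give S_total = 0,
# so total spin can be conserved in the triplet-triplet exchange.
print(allowed_total_spins(0, 0))  # [0]
```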
The quantum mechanical processes involved in activation of the Type II pathway to photodamage will be covered in a simplified fashion so as to serve as an accessible description for students and researchers who are new to PDT research. The subset of researchers responsible for light delivery and light-tissue interactions in PDT may find this description useful. As such, the quantum notation more familiar to physicists will be used moving forward. In particular, quantum states of the PS and molecular oxygen will be treated as those of two-electron molecules. Representation of photosensitization in PDT using this notation will be more familiar to students of quantum mechanics and physical chemistry while simultaneously appealing to a rigorous sensibility by detailing the physical phenomena associated with each step of the photosensitization process (Sec. 2). The addition of angular momentum between the two molecules will be employed in order to define the overall state of the system of molecules at each step. The larger discussion will be summarized at the end of the paper (Sec. 3).

QUANTUM TWO-ELECTRON MODEL
We will consider a basic quantum mechanical example of a generic PS interacting with molecular oxygen as part of the desired Type II pathway to photodamage achieved in PDT. A generic diagram of the photoactivation of the PS and its interactions with molecular oxygen is depicted in Fig. 1. In particular, this work is concerned with describing the interactions between the PS and molecular oxygen from the time the PS is excited via absorption of a photon through the transfer of energy to molecular oxygen via a Triplet-Triplet Exchange. As such, all other pathways will be ignored.

Both the PS and molecular oxygen can be approximated as two-electron molecules. For example, molecular oxygen forms via the covalent bond between two oxygen atoms, each needing a pair of 2p electrons in order to fill the 2p shell (Turrens 2003).
This pair of shared 2p electrons will therefore be considered as the pair undergoing the transitions that follow during the PDT process. The same assumption will be made for the PS, considering that the exchange of energy between the PS and molecular oxygen comes in the form of an electron exchange between a pair of two-electron systems.

In quantum mechanics, we are concerned with eigenvalue problems in which we determine the set of eigenstates corresponding to a given set of eigenvalues. The eigenstate of a system corresponds to the wavefunction of the system, or generically speaking, the state of the system. The eigenvalue corresponds to some physically measurable quantity, or characteristic of the system, such as its energy, spin, or angular momentum. As alluded to here, the characteristics of a quantum state can have spatial and spin dependencies, such that the corresponding wavefunctions must also incorporate spatial and spin states. We can separate the overall wavefunction, Ψ(r⃗, m_s), into the product of the two functional dependencies,

Ψ(r⃗, m_s) = Φ(r⃗) χ(m_s),    (4)

where r⃗ represents the three-dimensional spatial dependence of the spatial wavefunction Φ(r⃗) and m_s is the spin quantum number, representing the spin dependence of the spin wavefunction χ(m_s). In this context of atomic and molecular physics, the wavefunction represents the overall state of an electron. Since electrons are fermions, their overall wavefunctions must be antisymmetric.

When looking specifically at the context of PDT, we are dealing with systems of two-electron molecules. Therefore, the overall wavefunction (4) for both the PS and molecular oxygen must be modified to reflect a two-electron system,

Ψ(r⃗1, m_s1; r⃗2, m_s2) = Φ(r⃗1, r⃗2) χ(m_s1, m_s2),    (5)

where the subscripts 1 and 2 label the two separate electrons.
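To make the two-electron spin wavefunction χ(m_s1, m_s2) of Eq. (5) concrete, the two single-electron spin states can be combined with a tensor (Kronecker) product; a minimal numpy sketch (our construction, not taken from the paper):

```python
import numpy as np

up = np.array([1.0, 0.0])    # chi(m_s = +1/2)
down = np.array([0.0, 1.0])  # chi(m_s = -1/2)

# chi(m_s1, m_s2) for "electron 1 up, electron 2 down" as a Kronecker product,
# expressed in the four-dimensional {uu, ud, du, dd} basis.
chi_ud = np.kron(up, down)
print(chi_ud)  # [0. 1. 0. 0.]
```

Product states like this one are the building blocks from which the symmetric and antisymmetric spin combinations of the two-electron system are assembled.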
From the requirement for electrons to have antisymmetric wavefunctions follows the definition of the singlet and triplet states, which refer specifically to the spin wavefunction, χ(m_s1, m_s2), of the two-electron system. Combining the spins of the two electrons leads to a set of three possible symmetric wavefunctions,

χ(m_s1, m_s2) = χ_++ , (1/√2)(χ_+− + χ_−+) , or χ_−− ,   (6)

where the + and − refer to the spin-up (m_s = +1/2) and spin-down (m_s = −1/2) states, respectively. This state is called the (spin) triplet state because there is a set of three possible symmetric combinations for the two-electron system. Similarly, there is only a single antisymmetric combination of spins,

χ(m_s1, m_s2) = (1/√2)(χ_+− − χ_−+),   (7)

which is therefore referred to as the (spin) singlet state (Sakurai 1994).

One of the spin states from (6) or (7) can therefore be applied directly within the overall two-electron wavefunction (5) for either the PS or molecular oxygen. This leaves us to more thoroughly define the spatial state of the system (Sakurai 1994). Resolving the spatial wavefunction will be based upon the quantum mechanical rules for dealing with systems of identical particles and the assumption that we can start from the model of the simplest two-electron system: the helium atom. Under this premise, the spatial wavefunction can undergo a swap of electrons such that

Φ(r⃗_1, r⃗_2) = (1/√2)[ψ_100(r⃗_1)ψ_nlm(r⃗_2) ± ψ_100(r⃗_2)ψ_nlm(r⃗_1)],   (8)

where the wavefunctions ψ_100 and ψ_nlm refer to electrons in the ground and possible excited states, respectively. The two states ψ_100(r⃗_1)ψ_nlm(r⃗_2) and ψ_100(r⃗_2)ψ_nlm(r⃗_1) account for a change of state via exchange of identical particles—changing the configuration of the system by exchanging the states of two electrons translates to a change of state.
However, the total spatial state (8) is the superposition of these two states, which can be gained either by the addition or subtraction of the two combinations. The addition of these two spatial states results in a symmetric spatial wavefunction. Conversely, the subtraction of the two states results in an antisymmetric spatial wavefunction.

Now that the symmetric and antisymmetric representations of the spatial and spin states are defined, we look to their possible combinations for the overall wavefunction of the two-electron system (Sakurai 1994).
Since the electron wavefunction must have overall antisymmetry, the antisymmetric spin singlet state (7) must pair with the symmetric spatial state (8), giving the overall antisymmetric singlet state Ψ_singlet(r⃗_1, m_s1; r⃗_2, m_s2),

Ψ_singlet = (1/√2)[ψ_100(r⃗_1)ψ_nlm(r⃗_2) + ψ_100(r⃗_2)ψ_nlm(r⃗_1)] × (1/√2)(χ_+− − χ_−+).   (9)

Similarly, the symmetric spin triplet (6) must pair with the antisymmetric spatial wavefunction (8), giving the overall antisymmetric triplet state Ψ_triplet(r⃗_1, m_s1; r⃗_2, m_s2),

Ψ_triplet = (1/√2)[ψ_100(r⃗_1)ψ_nlm(r⃗_2) − ψ_100(r⃗_2)ψ_nlm(r⃗_1)] × (1/√3)[χ_++ + (1/√2)(χ_+− + χ_−+) + χ_−−].   (10)

The system can be described in terms of the quantum numbers for orbital angular momentum (l), the magnetic quantum number (m_l), spin angular momentum (s), and the spin quantum number (m_s). In addition to the aforementioned quantum numbers comes the principal quantum number (n), which is associated with the energy of the electron orbital. Starting with the principal quantum number, which can take any nonzero, positive integer value (n = 1, 2, 3, ...), we are able to define the allowed values of the angular momentum and magnetic quantum number as follows (Liboff 1998):

l = 0, 1, 2, ..., (n − 1)   (11)
m_l = −l, −l + 1, ..., 0, ..., l − 1, l.   (12)

In addition to the limitations placed on the possible states of angular momentum and the corresponding magnetic quantum numbers, there are quantum rules for combining angular momenta. The reasons for adding angular momenta at the quantum level could entail the need to consider multiple particles within a system, or even the combination of different forms of angular momenta. Both of these scenarios will affect our quantum mechanical discussion of PDT. If we begin by defining a generic angular momentum term, j, two angular momenta (j_1 and j_2) can be added to reach the following permitted values:

j_min = |j_1 − j_2|   (13)
j_max = j_1 + j_2.   (14)

Based on these maximum and minimum values of total angular momentum,

j = |j_1 − j_2|, ..., j_1 + j_2   (15)

is the range of acceptable total angular momentum values (Liboff 1998). When dealing with the addition of angular momenta, the range

m_j = −(j_1 + j_2), ..., 0, ..., j_1 + j_2   (16)

follows from (12) and (15).

These allowed values for the quantum numbers are based on the solution for the spatial wavefunction of the hydrogen atom in spherical coordinates, obtained by separating radial and angular dependencies,

Φ(r, θ, ϕ) = R(r) Y_l^{m_l}(θ, ϕ),   (17)

where R(r) represents the radial wavefunction and Y_l^{m_l}(θ, ϕ) the spherical harmonics.
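The counting rules of Eqs. (11)-(16) lend themselves to a quick sanity check; the helper functions below are our own names, chosen for illustration:

```python
def allowed_l(n):
    """Orbital quantum numbers l = 0, 1, ..., n-1 for principal number n (Eq. 11)."""
    return list(range(n))

def allowed_ml(l):
    """Magnetic quantum numbers m_l = -l, ..., +l for a given l (Eq. 12)."""
    return list(range(-l, l + 1))

def add_angular_momenta(j1, j2):
    """Permitted totals j = |j1 - j2|, ..., j1 + j2 in unit steps (Eqs. 13-15);
    also valid for half-integer spins."""
    j, out = abs(j1 - j2), []
    while j <= j1 + j2 + 1e-9:
        out.append(j)
        j += 1
    return out

print(allowed_l(3))                   # [0, 1, 2]
print(allowed_ml(1))                  # [-1, 0, 1]
print(add_angular_momenta(0.5, 0.5))  # [0.0, 1.0]
```

The last line reproduces the fact, used repeatedly below, that two spin-1/2 electrons combine to a total spin of s = 0 (singlet) or s = 1 (triplet).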
Of key importance is the orthonormality of these special functions. Stating the wavefunction in terms of the given quantum numbers via subscripts, Φ_{nlm_l}, taking the inner product of two such wavefunctions (or, equivalently, integrating the product of the two wavefunctions over all space) returns

⟨Φ_{n′l′m_l′} | Φ_{nlm_l}⟩ = δ_{nn′} δ_{ll′} δ_{m_l m_l′},   (18)

where any given delta function takes the value of zero when the respective indices differ and unity when they are the same (Liboff 1998).

As an example illustrating the principle of conservation of energy, since the quantum number n is tied to the energy of a state, the inner product of the final (Φ_{n′l′m_l′}) and initial (Φ_{nlm_l}) states will be zero if n′ ≠ n, meaning the system cannot transition spontaneously and unperturbed between the two states. The result will be unity if n′ = n, such that the transition between the two states does not violate the conservation of energy. The only way to change the energy of the system between the initial and final states is to operate on it, by doing work on the system or by letting the system itself do work. Since there is no operator acting on the energy of the states in (18), the energy of the system must remain the same between the final and initial states.

Similarly, the conservation of angular momentum is upheld in reference to the angular momentum quantum number l between the two states. A transition from Φ_{nlm_l} directly to Φ_{n′l′m_l′} is forbidden unless l′ = l. This is of fundamental importance for the following discussion, as we shall see that the angular momentum of the PS goes from l = 0 to l = 1 during activation in PDT. This transition is, however, perfectly acceptable because the PS is being acted on by the incident light—by absorbing a photon (which carries an angular momentum of l = 1), the PS gains angular momentum in addition to energy.
Later, this angular momentum will be transferred to molecular oxygen along with energy in order to elicit a phototoxic effect. Ultimately, when operating on one quantum state in order to cause it to transition to another quantum state, the operator acting on the system will invoke a set of selection rules as to which quantum transitions are allowed versus forbidden.

One further note should be made on the notation employed. The spin angular momentum (s) and spin quantum number (m_s) have been left out of the above conversation. However, as the name suggests, spin angular momentum is another form of angular momentum, or at least behaves quantum mechanically in exactly the same fashion as angular momentum. The addition of spin angular momenta therefore abides by the general rules for the addition of angular momenta (15). The spin angular momentum of an electron is s = 1/2, such that the associated spin quantum numbers are m_s = ±1/2. Since both the PS and the molecular oxygen of interest can each be considered two-electron systems, their respective spin angular momenta can take values of s = 0, 1 via the rules for the addition of angular momenta. Therefore, the spin quantum numbers for each of these individual molecules can take the values m_s = 0, ±1. The photon carries no spin angular momentum (s = 0, m_s = 0).

To begin our formal discussion of the quantum mechanical processes involved in PDT, we can use the addition of angular momenta in order to determine the state of each of the molecules using the condensed notation

|Ψ⟩_molecule = |l, s; m_l, m_s⟩.   (19)

In this notation, the total state of the system is the product of the two molecular states,

|Ψ⟩_system = |Ψ⟩_PS ⊗ |Ψ⟩_O = |l, s; m_l, m_s⟩_PS ⊗ |l, s; m_l, m_s⟩_O,   (20)

where again PS and O refer to the photosensitizer and molecular oxygen, respectively.

Initially, both the PS and oxygen reside in their ground states—the PS in a spin singlet and the molecular oxygen a spin triplet (Fig.
2a)—such that

|Ψ⟩_i = |l = 0, s = 0; m_l = 0, m_s = 0⟩_PS ⊗ |l = 0, s = 1; m_l = 0, m_s = 0⟩_O.   (21)

Again using the addition of angular momentum, this time between the two molecules, the overall initial state given in terms of the same quantum numbers becomes

|Ψ⟩_i = |l = 0, s = 1; m_l = 0, m_s = 0, ±1⟩.   (22)

When the PS absorbs light of the appropriate wavelength, it transitions to an excited singlet state (Fig. 2b). Since the photon carries a quantum of angular momentum, l = 1, this transition corresponds to an increase in orbital angular momentum of Δl = +1 within the PS. The state of molecular oxygen remains unchanged during this process. Upon absorption, the system transitions to the state

|Ψ⟩_abs = |l = 1, s = 0; m_l = 0, ±1, m_s = 0⟩_PS ⊗ |l = 0, s = 1; m_l = 0, m_s = 0⟩_O,   (23)

where again the addition of angular momentum between molecules gives the overall state

|Ψ⟩_abs = |l = 1, s = 1; m_l = 0, ±1, m_s = 0, ±1⟩.   (24)

Figure 2. Energy level diagrams of the PDT process leading to the creation of singlet oxygen, depicted in a HOMO-LUMO representation. a) The initial states of the PS and molecular oxygen. b) The PS transitions to an excited spin singlet state via absorption. c) The PS transitions to an excited spin triplet state via Intersystem Crossing.
d) Triplet-Triplet electron exchange between the PS and molecular oxygen leads to the final state of the system, where the excited spin singlet state of oxygen is ready to impose oxidative damage on surrounding organisms.

Once in the excited state, the PS can either transition back to its ground state via fluorescence, or undergo a nonradiative transition to a spin triplet state. The latter process is desirable for PDT, allowing the PS in its excited triplet state to interact with molecular oxygen. The nonradiative process by which the PS moves from an excited spin singlet to an excited spin triplet state is known as Intersystem Crossing, whereby the spin of the excited electron is no longer paired to that of the electron in the ground state (Fig. 2c) (Bonnett 2000). Due to the conservation of spin angular momentum, the transition from a singlet to a triplet state is quantum mechanically forbidden. However, Intersystem Crossing is made possible by spin-orbit coupling, where the orbital and spin angular momenta are combined to give the possible total angular momenta of (15). This nonradiative transition relies upon the overlap of the vibrational states of the initial and final states of the electron (Bonnett 2000; Sakurai 1994; Liboff 1998; Beljonne et al. 2001). Again, molecular oxygen remains in its ground state during this process. Via Intersystem Crossing, the system transitions to the state

|Ψ⟩_ISC = |l = 1, s = 1; m_l = 0, ±1, m_s = 0⟩_PS ⊗ |l = 0, s = 1; m_l = 0, m_s = 0⟩_O.   (25)

The addition of angular momentum between molecules gives the possible states

|Ψ⟩_ISC = |l = 1, s = 0; m_l = 0, ±1, m_s = 0⟩ + |l = 1, s = 1; m_l = 0, ±1, m_s = 0, ±1⟩ + |l = 1, s = 2; m_l = 0, ±1, m_s = 0, ±1, ±2⟩,   (26)

where the states s = 0, 1, 2 are allowed along with their corresponding −s ≤ m_s ≤ s values.
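The s = 0, 1, 2 terms of Eq. (26) follow directly from the addition rule of Eq. (15) applied to the two molecular spins, since after Intersystem Crossing both the PS and O2 are spin triplets; a minimal sketch (the helper name is our own):

```python
def add_angular_momenta(j1, j2):
    """Permitted totals j = |j1 - j2|, ..., j1 + j2 (integer j, per Eq. 15)."""
    return list(range(abs(j1 - j2), j1 + j2 + 1))

# Combining the PS spin (s1 = 1) with the O2 spin (s2 = 1):
for s in add_angular_momenta(1, 1):
    m_s = list(range(-s, s + 1))  # -s <= m_s <= s, as stated for Eq. (26)
    print("s =", s, " m_s =", m_s)
```

This prints one line per allowed total spin, each with its own ladder of m_s values.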
Although the excited spin triplet state of the PS may phosphoresce back to its ground state, this state is long-lived, such that interaction with molecular oxygen becomes more likely (Hatz, Poulsen and Ogilby 2008).

The PS in its excited triplet state interacts with the molecular oxygen in its ground state (spin triplet) via a Triplet-Triplet Exchange of electrons (Fig. 2d). In this process, the excited electron of the PS transitions to the molecular oxygen, and the electron with matching spin in the ground state of molecular oxygen transitions to the ground state of the PS. Along with this swapping of electrons comes an exchange of energy, such that the PS returns to its ground (spin singlet) state and the molecular oxygen transitions to an excited (spin singlet) state (Fig. 2d) (Bonnett 2000; Dexter 1953). The Triplet-Triplet Exchange is also referred to as a Dexter Exchange, based upon the seminal work "A Theory of Sensitized Luminescence in Solids" written by D.L. Dexter, which thoroughly explains this process.
While the focus of this section is simply to give a general description of the quantum states of the PS and molecular oxygen during the stages of PDT, the reader is referred to Dexter's work for a more rigorous and thorough description of the exchange (Dexter 1953).

Continuing with the same quantum numbers, the corresponding wavefunction for the system becomes

|Ψ⟩_TT = |l = 0, s = 0; m_l = 0, m_s = 0⟩_PS ⊗ |l = 1, s = 0; m_l = 0, ±1, m_s = 0⟩_O.   (27)

The addition of angular momentum between molecules gives the state

|Ψ⟩_TT = |l = 1, s = 0; m_l = 0, ±1, m_s = 0⟩.   (28)

Given that the final state of this system must remain unchanged from that of (26) during this process, we can conclude that after the PS underwent Intersystem Crossing the system must have been in the first of the states listed in (26),

|Ψ⟩_ISC = |l = 1, s = 0; m_l = 0, ±1, m_s = 0⟩.   (29)

From this conclusion, it follows that after the PS undergoes Intersystem Crossing, the system must be described by the individual molecular states

|Ψ⟩_ISC = |l = 1, s = 1; m_l = 0, ±1, m_s = 0⟩_PS ⊗ |l = 0, s = 1; m_l = 0, m_s = 0⟩_O.   (30)

Pay particular attention to tracking the transfer and conservation of angular momentum throughout the processes described. Both molecules are in their ground states initially. Upon excitation of the PS via absorption of a photon, the angular momentum of the system increases. While the angular momentum of the PS does not change during Intersystem Crossing, it does change in the final step, as the angular momentum of the PS is transferred to that of the molecular oxygen. To better demonstrate the point, the final step of Figure 2—the triplet-triplet exchange between the PS and oxygen—is repeated in Figure 3, along with the associated molecular orbitals of oxygen and protoporphyrin IX (PpIX), a typical photosensitizer employed clinically in PDT of cancers. The increase in angular momentum of molecular oxygen via the triplet-triplet exchange is visually apparent.
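The orbital angular momentum bookkeeping described above can be sketched as a short check; the stage labels and dictionary layout are our own illustrative choices, while the l values are read off Eqs. (21), (23), (30), and (27):

```python
# Orbital angular momentum across the four stages of Fig. 2.
stages = {
    "initial":     {"PS": 0, "O": 0, "photon_in": 0},
    "absorption":  {"PS": 1, "O": 0, "photon_in": 1},  # absorbed photon carries l = 1
    "ISC":         {"PS": 1, "O": 0, "photon_in": 1},  # l of the PS is unchanged
    "TT exchange": {"PS": 0, "O": 1, "photon_in": 1},  # l handed from the PS to O2
}

for name, st in stages.items():
    total = st["PS"] + st["O"]
    # Total orbital angular momentum always equals what the photon delivered.
    assert total == st["photon_in"]
    print(f"{name:12s} l_PS={st['PS']} l_O={st['O']} total={total}")
```

The check makes explicit that the l = 1 delivered by the photon is never lost; it simply moves from the PS to the molecular oxygen in the final step.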
Figure 3. The HOMO-LUMO representations employed in the final step of Figure 2 are represented here again, with the corresponding molecular orbitals of O2 and a common PS, protoporphyrin-IX (PpIX). Molecular orbitals were generated via the Amsterdam Density Functional program (te Velde et al. 2001).

SUMMARY

A summary of the states of the PS—O2 system, based upon the transitions and physical processes described, is as follows:

1. The PS and molecular oxygen begin in their ground states, the PS in a spin singlet and the molecular oxygen a spin triplet,

|Ψ⟩_i = |l = 0, s = 0; m_l = 0, m_s = 0⟩_PS ⊗ |l = 0, s = 1; m_l = 0, m_s = 0⟩_O.   (31)

2. Upon absorption of a photon, the PS is raised to an excited spin singlet state, while the molecular oxygen goes unaffected,

|Ψ⟩_abs = |l = 1, s = 0; m_l = 0, ±1, m_s = 0⟩_PS ⊗ |l = 0, s = 1; m_l = 0, m_s = 0⟩_O.   (32)

3. The PS undergoes a nonradiative transition from the excited spin singlet to an excited spin triplet via Intersystem Crossing, while the state of the molecular oxygen again remains unchanged in its spin triplet ground state,

|Ψ⟩_ISC = |l = 1, s = 1; m_l = 0, ±1, m_s = 0⟩_PS ⊗ |l = 0, s = 1; m_l = 0, m_s = 0⟩_O.   (33)

4. Finally, the molecular oxygen is raised from its ground spin triplet state to an excited spin singlet state as the PS simultaneously relaxes from its excited spin triplet state back to its spin singlet ground state,

|Ψ⟩_TT = |l = 0, s = 0; m_l = 0, m_s = 0⟩_PS ⊗ |l = 1, s = 0; m_l = 0, ±1, m_s = 0⟩_O.   (34)

Again, a summary of these processes and states is also depicted in Figure 2, where the overall wavefunction of the system at each step is listed with the corresponding energy diagram.

Simply put, energy from the excitation light is absorbed by the PS.
Following some internal transitions, the PS is then able to transfer the added energy to the molecular oxygen via Triplet-Triplet Exchange. The final state of the PS—O2 system leaves the molecular oxygen in an excited state, ready to unleash oxidative stress on its immediate surroundings, ultimately causing potentially lethal photodamage as a result of biologic interactions that lead to activation of cellular death pathways (Finkel and Holbrook 2000; Martindale and Holbrook 2002; Pisoschi and Pop 2015; Apel and Hirt 2004).

ACKNOWLEDGMENTS

This publication was supported by an Institutional Development Award (IDeA) from the National Institute of General Medical Sciences of the National Institutes of Health under grant number P20 GM103418. The author would like to thank Henri J.F. Jansen for his advice while working through the details of this paper.

LITERATURE CITED

Apel, K. and Hirt, H. 2004. Reactive Oxygen Species: Metabolism, Oxidative Stress, and Signal Transduction. Annual Review of Plant Biology 55:373-399.
Beljonne, D., Shuai, Z., Pourtois, G. and Bredas, J.L. 2001. Intersystem Crossing in Conjugated Polymers: A Configuration Interaction Description. Journal of Physical Chemistry A 105(15):3899-3907.
Bonnett, R. 2000. Chemical Aspects of Photodynamic Therapy. Vol. 1. Advanced Chemistry Texts, Gordon and Breach Science Publishers, Australia.
Dexter, D.L. 1953. A Theory of Sensitized Luminescence in Solids. The Journal of Chemical Physics 21:836-850.
Finkel, T. and Holbrook, N.J. 2000. Oxidants, oxidative stress and the biology of ageing. Nature 408:239-247.
Hamblin, M.R. and Mroz, P. (Editors). 2008. Advances in Photodynamic Therapy: Basic, Translational, and Clinical. Engineering in Medicine and Biology Series, Artech House, Boston.
Hasan, T., Moore, A.C.E. and Ortel, B. 2000. Photodynamic Therapy of Cancer. pp. 489-502 in Cancer Medicine, 5th edition. BC Decker Inc.
Hatz, S., Poulsen, L. and Ogilby, P.R. 2008.
Time-resolved Singlet Oxygen Phosphorescence Measurements from Photosensitized Experiments in Single Cells: Effects of Oxygen Diffusion and Oxygen Concentration. Photochemistry and Photobiology 84:1284-1290.
Henderson, B. and Dougherty, T. (Editors). 1992. Photodynamic Therapy: Basic Principles and Clinical Applications. Marcel Dekker, Inc., New York.
Jacques, S.L. 1992. Laser-tissue interactions: photochemical, photothermal, and photomechanical. Surgical Clinics of North America 72:531-558.
Kautsky, H. 1939. Quenching of Luminescence by Oxygen. Transactions of the Faraday Society 35:216-219.
Kearns, D.R. and Khan, A.U. 1969. Sensitized Photooxygenation Reactions and the Role of Singlet Oxygen. Photochemistry and Photobiology 10(3):193-210.
Keszthelyi, T., Weldon, D., Andersen, T.N., Poulsen, T.D., Mikkelsen, K.V. and Ogilby, P. 1999. Radiative Transitions of Singlet Oxygen: New Tools, New Techniques and New Interpretations. Photochemistry and Photobiology 70:531-539.
Liboff, R.L. 1998. Introductory Quantum Mechanics, 3rd ed. Addison-Wesley, Reading, MA.
Martindale, J.L. and Holbrook, N.J. 2002. Cellular Response to Oxidative Stress: Signaling for Suicide and Survival. Journal of Cellular Physiology 192:1-15.
Mata, J.E., Dyal, L.A., Rossi, V.M. and Gustafson, S.B. 2006. Solid Tumor Physiology as a Target for Nanomedicines. ch. 14, pp. 1-19 in Nalwa, H.S. and Webster, T. (eds.), Cancer Nanotechnology, American Scientific Publishers.
Nilsson, R., Merkel, P.B. and Kearns, D.R. 1972. Unambiguous Evidence for the Participation of Singlet Oxygen in Photodynamic Oxidation of Amino Acids. Photochemistry and Photobiology 16:117-124.
Ochsner, M. 1997. Photophysical and photobiological processes in the photodynamic therapy of tumors. Journal of Photochemistry and Photobiology B: Biology 39:1-18.
Peavy, G.M. 2002. Lasers and laser-tissue interaction. Veterinary Clinics: Small Animal Practice 32:517-534.
Pisoschi, A.M. and Pop, A. 2015.
The role of antioxidants in the chemistry of oxidative stress: A review. European Journal of Medicinal Chemistry 97:55.
Prasad, P.N. 2003. Introduction to Biophotonics. John Wiley and Sons, Inc., Hoboken, NJ.
Sakurai, J.J. 1994. Modern Quantum Mechanics, revised ed. Addison-Wesley, Reading, MA.
Schmidt, R. and Bodesheim, M. 1998. Radiationless Deactivation of the Second Excited Singlet State of O2 in Solution. The Journal of Physical Chemistry A 102:4769-4774.
te Velde, G.T., Bickelhaupt, F.M., Baerends, E.J., Fonseca Guerra, C., van Gisbergen, S.J., Snijders, J.G. and Ziegler, T. 2001. Chemistry with ADF. Journal of Computational Chemistry 22(9):931-967.
Turrens, J.F. 2003. Mitochondrial formation of reactive oxygen species. The Journal of Physiology 552(2):335-344.
Wainwright, M. 1998. Photodynamic antimicrobial chemotherapy (PACT). Journal of Antimicrobial Chemotherapy 42:13-28.

FIGURES

Figure 1. The process leading to the preferred Type II path to photodamage starts when the PS is excited by incident light of energy hν. The PS then relaxes via ISC to an excited triplet state, whereby it can transfer energy to molecular oxygen via a triplet-triplet electron transfer.

Figure 2. Energy level diagrams of the PDT process leading to the creation of singlet oxygen, depicted in a HOMO-LUMO representation. a) The initial states of the PS and molecular oxygen. b) The PS transitions to an excited spin singlet state via absorption. c) The PS transitions to an excited spin triplet state via Intersystem Crossing.
d) Triplet-Triplet electron exchange between the PS and molecular oxygen leads to the final state of the system, where the excited spin singlet state of oxygen is ready to impose oxidative damage on surrounding organisms.

Figure 3. The HOMO-LUMO representations employed in the final step of Figure 2 are represented here again, with the corresponding molecular orbitals of O2 and a common PS, protoporphyrin-IX (PpIX). Molecular orbitals were generated via the Amsterdam Density Functional program (te Velde et al. 2001).

A Quantum Mechanical Description of Photosensitization in Photodynamic Therapy using a Two-Electron Molecule Approximation

Vincent M. Rossi
Washburn University Department of Physics & Astronomy, Topeka, KS 66621
vincent.rossi@washburn.edu

ABSTRACT

A fundamental, quantum mechanical description of photoactivation of a generic photosensitizer and the ensuing transfer of energy to endogenous oxygen as part of the Type II pathway to photodamage during photodynamic therapy (PDT) is presented.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'} +page_content=' The PS and molecular oxygen are approximated as two-electron molecules.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'} +page_content=' Conservation of energy and of angular momenta of the two molecule system are abided via selection rules throughout the four-stage process, including initial states, absorption of a photon by the PS, conversion of the PS to an excited spin triplet via intersystem crossing (ISC), and the transition of molecular oxygen to an excited spin singlet state via a Triplet-Triplet Exchange of electrons with the PS.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'} +page_content=' The provided description of photosensitization will provide students and researchers with a fundamental introduction to PDT, while offering the broader population of Quantum Mechanics and Physical Chemistry students an advanced example of quantum systems in an applied, medical context.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'} +page_content=' Keywords: Photosensitization, Photodynamic Therapy (PDT), photochemistry, Dexter Exchange, Triplet-Triplet Exchange INTRODUCTION Photodynamic therapy (PDT) is a localized and selective therapy that operates on principles included under the generic classifications of photobiology, photochemistry and photophysics (Jacques 1992;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'} +page_content=' Henderson and Dougherty 1992;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'} +page_content=' Hamblin and Mroz 2008;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'} +page_content=' Bonnett 2000;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'} +page_content=' Hasan, Moore and Ortel 2000).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'} +page_content=' While PDT has found its broadest application and research as a cancer therapy, it has also been used for antimicrobial therapy for combating antibiotic resistant strains (Wainwright 1998).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'} +page_content=' Three ingredients are required for PDT—a photosensitizer (PS), light, and oxygen—in order to induce photochemical damage to its targets.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'} +page_content=' In short, the PS is administered to the patient and after an appropriate time interval, the targeted site is illuminated with light of appropriate wavelength to be absorbed by the PS.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'} +page_content=' Upon excitation by light of appropriate energy, the excited PS interacts with endogenous molecular oxygen in order to create reactive oxygen species (ROS).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'} +page_content=' The interactions between the excited PS and endogenous molecular oxygen to generate ROS has been recognized and developed over some time (Kautsky 1939;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'} +page_content=' Keszthelyl et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'} +page_content=' 1999).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'} +page_content=' These ROS then interact with their immediate environment, creating oxidative damage.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'} +page_content=' Targeted cancer cells or bacteria are eliminated once they reach a threshold of damage via ROS (Nilsson, Merkel, and Kearns 1972;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ddE2T4oBgHgl3EQfGAap/content/2301.03653v1.pdf'} +page_content=' Schmidt and Bodesheim1998).' 
Absorbed photons transfer discrete energies to the PS, raising it from the singlet ground state (1PS) to an excited singlet state (1PS*),

1PS + hν → 1PS*, (1)

where the product of Planck's constant (h) and the frequency of light absorbed (ν) represents the addition of energy via absorption (Fig. 1). The PS may then fluoresce back to its ground state. Preferably, the PS in its excited singlet state will transition to its excited triplet state (3PS*) through Intersystem Crossing (ISC),

1PS* → 3PS*. (2)

Figure 1. The process leading to the preferred Type II path to photodamage starts when the PS is excited by incident light of energy hν. The PS then relaxes via ISC to an excited triplet state, whereby it can transfer energy to molecular oxygen via a triplet-triplet electron transfer.

Once in the excited triplet state, the photosensitizer may then decay back to its ground state through one of two mechanisms.
The first of these, called the Type I pathway to photodamage in PDT, involves the PS in its excited triplet state interacting with the surroundings, thereby losing energy and creating free radicals. The resulting free radicals may then react with endogenous oxygen to form cytotoxic species such as OH• (Jacques 1992; Wainwright 1998; Ochsner 1997; Peavy 2002; Prasad 2003; Mata et al. 2006). The Type II pathway to photodamage in PDT entails a direct interaction between the PS in its excited triplet state and endogenous molecular oxygen in its triplet ground state (3O2).
Such Type II interactions, termed a Triplet-Triplet Exchange, can also cause the PS agent to decay back to its singlet ground state, in turn raising the molecular oxygen to an excited singlet state (1O2*),

3PS* + 3O2 → 1PS + 1O2*. (3)

The excited singlet state of molecular oxygen can then cause damage to its surroundings (Nilsson, Merkel, and Kearns 1972; Schmidt and Bodesheim 1998). Due to the long lifetime of the excited triplet PS, sufficient time is allowed for interactions with endogenous oxygen.
For this reason, the Type II pathway is generally accepted as the most common pathway to photodamage in PDT (Jacques 1992; Henderson and Dougherty 1992; Hamblin and Mroz 2008; Wainwright 1998; Ochsner 1997; Peavy 2002; Prasad 2003; Mata et al. 2006). The above introduction to PDT is given in a typical fashion as would be found in biological or medical descriptions of PDT (Kearns and Khan 1969).
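Collecting reactions (1)-(3), the preferred Type II sequence can be restated as a single scheme (this is only a display-form summary of the equations above; no new chemistry is introduced):

```latex
\begin{align}
{}^{1}\mathrm{PS} + h\nu &\longrightarrow {}^{1}\mathrm{PS}^{*}
  && \text{absorption, Eq.~(1)} \\
{}^{1}\mathrm{PS}^{*} &\longrightarrow {}^{3}\mathrm{PS}^{*}
  && \text{intersystem crossing, Eq.~(2)} \\
{}^{3}\mathrm{PS}^{*} + {}^{3}\mathrm{O}_{2} &\longrightarrow {}^{1}\mathrm{PS} + {}^{1}\mathrm{O}_{2}^{*}
  && \text{triplet-triplet exchange, Eq.~(3)}
\end{align}
```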
The remainder of this paper is interested in giving a more rigorous, quantum mechanical explanation of the process of photosensitization in PDT. The quantum mechanical processes involved in activation of the Type II pathway to photodamage will be covered in a simplified fashion so as to serve as an accessible description to students and researchers who are new to PDT research. The subset of researchers responsible for light delivery and light-tissue interactions in PDT may find this description useful. As such, the quantum notation more familiar to physicists will be used moving forward. In particular, quantum states of the PS and molecular oxygen will be treated as those of two-electron molecules. Representation of photosensitization in PDT using this notation will be more familiar to students of quantum mechanics and physical chemistry while simultaneously appealing to a rigorous sensibility by detailing the physical phenomena associated with each step of the photosensitization process (Sec. 2).
The addition of angular momentum between the two molecules will be employed in order to define the overall state of the system of molecules at each step. The larger discussion will be summarized at the end of the paper (Sec. 3).

QUANTUM TWO-ELECTRON MODEL

We will consider a basic quantum mechanical example of a generic PS interacting with molecular oxygen as part of the desired Type II pathway to photodamage achieved in PDT. A generic diagram of the photoactivation of the PS and its interactions with molecular oxygen is depicted in Fig. 1. In particular, this work is concerned with describing the interactions between the PS and molecular oxygen from the time the PS is excited via absorption of a photon through the transfer of energy to molecular oxygen via a Triplet-Triplet Exchange. As such, all other pathways will be ignored.
Both the PS and molecular oxygen can be approximated as two-electron molecules. For example, molecular oxygen forms via the covalent bond between two oxygen atoms, each needing a pair of 2p electrons in order to fill the 2p shell (Turrens 2003). This pair of shared 2p electrons will therefore be considered as those undergoing the transitions that follow during the PDT process. The same assumption will be made of the PS, considering that the exchange of energy between the PS and molecular oxygen comes in the form of electron exchange between a pair of two-electron systems. In quantum mechanics, we are concerned with eigenvalue problems where we can determine the given set of eigenstates corresponding to a given set of eigenvalues. The eigenstate of a system corresponds to the wavefunction of the system, or generically speaking, the state of the system. The eigenvalue corresponds to some physically measurable quantity, or characteristic of the system, such as its energy, spin or angular momentum.
As alluded to here, the characteristics of a quantum state can have spatial and spin dependencies, such that their corresponding wavefunctions must also incorporate spatial and spin states. We can separate the overall wavefunction, Ψ(r⃗, m_s), into the product of the two functional dependencies,

Ψ(r⃗, m_s) = Φ(r⃗)χ(m_s), (4)

where r⃗ represents the three-dimensional spatial dependence of the spatial wavefunction Φ(r⃗) and m_s is the spin quantum number, representing the spin dependence of the spin wavefunction χ(m_s). In this context of atomic and molecular physics, the wavefunction represents the overall state of an electron. Since electrons are Fermions, their overall wavefunctions must be antisymmetric. When looking specifically at the context of PDT, we are dealing with systems of two-electron molecules.
Therefore, the overall wavefunction (4) for both the PS and molecular oxygen must be modified to reflect a two-electron system,

Ψ(r⃗1, m_s1; r⃗2, m_s2) = Φ(r⃗1, r⃗2)χ(m_s1, m_s2), (5)

where the subscripts 1 and 2 represent the two separate electrons. From the requirement for electrons to have antisymmetric wavefunctions follows the definition of the singlet and triplet states, which refer specifically to the spin wavefunction, χ(m_s1, m_s2), of the two-electron system. A combination of these two electrons in the spin state leads to a set of three possible symmetric wavefunctions,

χ(m_s1, m_s2) ∈ { χ₊₊, (1/√2)(χ₊₋ + χ₋₊), χ₋₋ }, (6)

where the + and − refer to the different combinations of spin up (m_s = +1/2) and spin down (m_s = −1/2) states, respectively. This state is specifically called the (spin) triplet state because there is a set of three possible symmetric combinations for the two-electron system. Similarly, there is only a single antisymmetric combination of spins,

χ(m_s1, m_s2) = (1/√2)(χ₊₋ − χ₋₊), (7)

which is therefore referred to as the (spin) singlet state (Sakurai 1994).
One of the spin states from (6) or (7) can therefore be applied directly within the overall two-electron wavefunction (5) for either the PS or molecular oxygen. This leaves us to more thoroughly define the spatial state of the system (Sakurai 1994). Resolving the spatial wavefunction will be based upon the quantum mechanical rules for dealing with systems of identical particles and the assumption that we can start from the model of the simplest of two-electron systems: the helium atom. Under this premise, the spatial wavefunction can undergo a swap of electrons such that

Φ(r⃗1, r⃗2) = (1/√2)[ψ_gnd(r⃗1)ψ_exc(r⃗2) ± ψ_gnd(r⃗2)ψ_exc(r⃗1)], (8)

where the wavefunctions ψ_gnd and ψ_exc refer to electrons in the ground and possible excited states, respectively. The two states ψ_gnd(r⃗1)ψ_exc(r⃗2) and ψ_gnd(r⃗2)ψ_exc(r⃗1) account for a change of state via exchange of identical particles—changing the configuration of the system by exchanging the states of two electrons translates to a change of state.
However, the total spatial state (8) is the superposition of these two states, which can be gained either by the addition or

0. Then we can achieve

R(T) ≲ inf_{ǫ>0} [ ǫT + √(N(ǫ)T) ]. (5)

To achieve this, simply compute an ǫ-covering of H and let the leader play no-regret algorithms on the ǫ-covering set. Note that although the covering is constructed for pairs of actions (a, b) ∈ A_ǫ × B_ǫ, it suffices for the leader to run no-regret algorithms on the actions A_ǫ. The detailed algorithm and proof are given in Appendix A.2.

This upper bound is achieved when the leader does not even utilize the observations of the follower's responses. Indeed, in the worst case (e.g., in Example 3.2), the responses will not provide information.

As a corollary, in the linear regime with H_{Θ,φ}, the covering number is N(ǫ) = N(Θ, ǫ, ∥·∥) ≤ exp(O(d log(1/ǫ))) [Wainwright, 2019]. Choosing ǫ ≍ T^{−1/(d+2)}, Theorem 3.3 reduces to the following upper bound in the linearly parameterized case.

Corollary 3.4. In the linear case, we can achieve R(T) ≲ T^{(d+1)/(d+2)}.

In other words, the sample complexity for achieving average regret equal to ǫ is upper bounded by Õ((1/ǫ)^{d+2}).
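The covering strategy behind (5) can be sketched in code. The following toy discretizes the action set and runs a standard per-arm UCB on the finite cover; the one-dimensional action set [0, 1], the reward function `h_star`, and the noise level are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def play_covering_ucb(h_star, T, eps, sigma=0.1):
    """Run per-arm UCB on an eps-covering of A = [0, 1].

    Any per-arm no-regret algorithm on the cover yields regret on the
    order of eps*T + sqrt(N(eps)*T), matching the recipe behind (5).
    """
    arms = np.arange(0.0, 1.0 + eps, eps)   # eps-net of the action set
    n = np.zeros(len(arms))                 # pull counts
    s = np.zeros(len(arms))                 # observed-reward sums
    total = 0.0
    for t in range(1, T + 1):
        with np.errstate(divide="ignore", invalid="ignore"):
            idx = s / n + sigma * np.sqrt(2.0 * np.log(t) / n)
        idx[n == 0] = np.inf                # pull every arm at least once
        i = int(np.argmax(idx))
        n[i] += 1
        s[i] += h_star(arms[i]) + sigma * rng.standard_normal()
        total += h_star(arms[i])            # track expected (not noisy) reward
    return total / T

h_star = lambda a: 1.0 - (a - 0.37) ** 2    # hypothetical reward, optimum 1.0
T = 2000
eps = T ** (-1.0 / 3.0)   # d = 1: trades the eps*T term against sqrt(T/eps)
avg = play_covering_ucb(h_star, T, eps)
print(round(avg, 3))      # average reward approaches the optimum as T grows
```

The choice eps ≍ T^(−1/(d+2)) mirrors the tuning that produces Corollary 3.4.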
This upper bound is agnostic to any structural property of the feature function φ, such as smoothness or even continuity.

4 UCB with side observations

Although the worst-case sample complexity for linear Stackelberg games is exponential, it is possible to obtain a fine-grained analysis and an improved rate for the family H_{Θ,φ} when φ is better structured. A natural choice of algorithm for the leader is some variant of UCB that incorporates observations of the follower's actions. In this section, we describe a general recipe for a family of UCB algorithms that incorporate the side information, as well as the challenges in their design.

4.1 Algorithm description

We consider the following variant of UCB that uses the follower's responses as side information to improve the confidence set.

Algorithm 1: UCB with side information from expert
  Input: regression oracles Reg^(b) and Reg^(r) on responses and rewards, {α_t}_{t∈[T]}, {β_t}_{t∈[T]}
  for t = 1 to T do
    Compute h_t^(b) = Reg^(b)(b̂_1, ..., b̂_{t−1}) and h_t^(r) = Reg^(r)(r_1, ..., r_{t−1})
    Set H_t^(b) := {h : Σ_{i=1}^{t−1} ∥b*_h(a_i) − b*_{h_t^(b)}(a_i)∥² ≤ α_t²}
    Set H_t^(r) := {h : Σ_{i=1}^{t−1} (h(a_i) − h_t^(r)(a_i))² ≤ β_t²}
    Construct the confidence set H_t = H_t^(b) ∩ H_t^(r)
    Take action a_t ∈ argmax_{a∈A} sup_{h∈H_t} h(a)
    Observe (noisy) reward r_t and response b̂_t
  end for

Remark 4.1. The regression oracles and the sequences {α_t}_{t∈[T]}, {β_t}_{t∈[T]} must be chosen appropriately so that the following condition holds: given an error tolerance δ ∈ (0, 1), we require h⋆ ∈ ⋂_{t=1}^T H_t with probability at least 1 − δ.

Remark 4.2. A common choice for Reg^(b) and Reg^(r) is the least-squares regression oracle that computes

h_t^(b) ∈ argmin_{h∈H} Σ_{i=1}^{t−1} ∥b*_h(a_i) − b̂_i∥² (6)

and

h_t^(r) ∈ argmin_{h∈H} Σ_{i=1}^{t−1} (h(a_i) − r_i)². (7)

When the least-squares computation becomes infeasible under complex response-reward structures (this is common for (6)), custom oracles need to be designed.
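Algorithm 1 can be instantiated concretely. In the toy sketch below, every concrete choice — the four candidate parameters, the linear reward h_θ(a) = θ·a, the action-independent best response b*_θ = θ, and the crude radius schedules for α_t² and β_t² — is an illustrative assumption, not part of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

d, T = 2, 200
sigma_r = sigma_b = 0.1
thetas = [np.array(v) for v in [(1.0, 0.0), (0.0, 1.0), (0.6, 0.8), (0.8, 0.6)]]
actions = thetas                    # actions on the unit circle
star = 2                            # index of the (hidden) true hypothesis
reward = lambda j, a: float(thetas[j] @ a)   # h_theta(a)
resp = lambda j: thetas[j]                   # b*_theta, constant in a here

hist = []                           # (a_t, r_t, b_hat_t) triples
total = 0.0
for t in range(T):
    a2 = 4.0 * sigma_b**2 * d * (t + 1)      # alpha_t^2, generous radius
    b2 = 4.0 * sigma_r**2 * (t + 1)          # beta_t^2
    # H_t = H_t^(b) ∩ H_t^(r): keep hypotheses consistent with both streams
    H_t = [j for j in range(len(thetas))
           if sum(float(np.sum((resp(j) - b) ** 2)) for _, _, b in hist) <= a2
           and sum((reward(j, a) - r) ** 2 for a, r, _ in hist) <= b2] \
          or list(range(len(thetas)))        # fallback if the set empties
    # optimistic action: argmax_a sup_{h in H_t} h(a)
    a = max(actions, key=lambda a: max(reward(j, a) for j in H_t))
    r = reward(star, a) + sigma_r * rng.standard_normal()
    b_hat = resp(star) + sigma_b * rng.standard_normal(d)
    hist.append((a, r, b_hat))
    total += reward(star, a)

avg_regret = reward(star, thetas[star]) - total / T
print(round(avg_regret, 3))   # shrinks toward 0 as T grows
```

The response-based set H_t^(b) eliminates wrong hypotheses quickly here because their best responses differ on every round, illustrating why the intersection can be much smaller than the reward-only set.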
A more intricate approach may be to jointly construct the estimate using both {b̂_τ}_{τ∈[t−1]} and {r_τ}_{τ∈[t−1]}. We leave it for future research to study systematic designs of the oracles and the confidence sets.

Remark 4.3. When the responses are unobserved or ignored (e.g., by choosing α_t = ∞), Algorithm 1 reduces to the classic Eluder UCB using the least-squares (reward) oracle with H_t = H_t^(r) [Russo and Van Roy, 2013].

The choices of {α_t}_{t∈ℕ} and {β_t}_{t∈ℕ} can pose another challenge. A naive attempt to get a generic upper bound on α_t is to use a covering argument as in Russo and Van Roy [2013] using the following measurement between two functions h, h′ ∈ H: d^(b)(h, h′) = sup_a ∥b*_h(a) − b*_{h′}(a)∥. But note that this does not necessarily define a norm, and furthermore the covering number of H in this sense can be infinite when the best response is discontinuous in the leader's action a. Thus, such an approach is often not useful, and one may have to determine α_t on a per-instance basis.

4.2 Examples

While Theorem 3.1 shows that the involvement of the omniscient follower can lead to a "curse of expertise," a stark deterioration in the sample complexity, there are many scenarios where the leader's observation of the follower's responses can expedite learning significantly. In this section, we explore a few such examples.

4.2.1 An imitation-based example

Let us consider a setting where the leader achieves efficient learning through imitation. Heuristically, imitation arises when the optimal action for the leader is equal to the best response of the omniscient follower, or a function of it. This may capture, for instance, real-world robotics applications where the actions of the robot and the human expert are exchangeable and the true goal can be easily inferred from the expert's action.
A simple scenario is when the robot and the human expert are supposed to carry out the same task perfectly, in which case the robot should simply treat the expert as a role model and imitate. The following is a concrete example.

Example 4.4. Let A = B = Θ = S^{d−1} (or B^d equivalently)². Consider the linearly parameterized function class H_{Θ,φ} with feature function

φ(a, b) = a + b. (8)

Here, the optimal response b*_θ ≡ θ is independent of a, and h_θ(a) = θ · a + 1.

Construction of confidence sets. The (noisy) observations of the follower's best responses simplify the problem into an imitation learning task. A simple oracle for the best-response observations is to take the A-projected empirical average of responses, i.e., θ_t^(b) = Π_A((1/(t−1)) Σ_{i=1}^{t−1} b̂_i).³ The response-based confidence set reduces to

Θ_t^(b) = {θ ∈ Θ : ∥θ − θ_t^(b)∥ ≤ α_t/√(t−1)}.

Standard sub-Gaussian concentration results suggest that the (Euclidean) radius of this confidence set shrinks at a rate of t^{−1/2}.

Lemma 4.5. To ensure θ⋆ ∈ ⋂_{t∈[T]} Θ_t with probability at least 1 − δ, it suffices to choose

α_t = Θ(σ_b √(d + log(T/δ))).

UCB chooses actions on S^{d−1} increasingly close to the empirical estimate θ_t^(b).⁴ The regret bound follows from these choices of confidence sets.

Proposition 4.6. In Example 4.4, UCB achieves a regret bound

R_UCB(T) ≲ σ_b² log T · (d + log T). (9)

In other words, the average regret decays at a rate of Õ(σ_b² d/T). This has also been analyzed in the setting of imitation learning [Rajaraman et al., 2021], and the results are consistent.

²While it is customary to consider Θ = B^d, we will observe below that the imitation-based algorithm does not crucially rely on ∥θ⋆∥ and only incurs smaller regret if ∥θ⋆∥ < 1. This is because the algorithm asymptotically relies solely on the response observations, which are invariant under scaling of θ⋆. It is also without loss of generality to restrict all actions to the sphere.
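The projected-average oracle of Example 4.4 can be simulated directly. In the sketch below, the dimension, noise level, and θ⋆ are illustrative choices rather than values from the paper; it only demonstrates the t^{−1/2} shrinkage of the estimation error:

```python
import numpy as np

rng = np.random.default_rng(2)

d, sigma_b = 5, 0.5
theta_star = np.ones(d) / np.sqrt(d)      # a point on the sphere S^{d-1}

def project_sphere(y):
    # Pi_A(y) for A = S^{d-1}: normalize; break the tie at y = 0 arbitrarily
    n = np.linalg.norm(y)
    return y / n if n > 0 else np.eye(d)[0]

errs = []
for t in [10, 100, 1000]:
    # noisy best-response observations b_hat_i = theta_star + noise
    b_hat = theta_star + sigma_b * rng.standard_normal((t, d))
    theta_t = project_sphere(b_hat.mean(axis=0))   # theta_t^(b)
    errs.append(np.linalg.norm(theta_t - theta_star))

print([round(e, 3) for e in errs])   # error shrinks roughly like t**-0.5
```

The 10x growth in samples per step should shrink the error by roughly sqrt(10), matching the t^{−1/2} radius of the response-based confidence set.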
³Define the projection of y ∈ ℝ^d onto a closed set X ⊆ ℝ^d as Π_X(y) := argmin_{x∈X} ∥y − x∥, breaking ties arbitrarily when the minimizer is not unique.

⁴Even simpler, the leader can play the A-projected empirical average of responses. Under our choice of constant α, the analysis will be the same, with the results differing by at most a constant factor.

Remark 4.7. When the follower's responses are unobserved, this is simply a linear bandit, where the minimax regret is Ω(σ_b d√T) ≫ O(σ_b² d log² T). This indicates the value of the b_t observations. When the follower's response is noiseless, one can see that a single sample suffices to find the optimal response since one always observes b⋆_θ = θ.

Remark 4.8. Note the gap between the Θ(log T) regret when the response observations are used and the Θ(√T) regret when they are ignored or unavailable, showing the value of those response observations. In fact, it is easy to modify this example slightly (e.g., taking φ(a, b) = max{|θ^⊤a|, ∆}b for some ∆ ∈ (0, 1)) to create an even larger gap: when the leader uses the response observations, the regret is Õ(d log T) with sample complexity Õ(d log(1/ǫ)); when the response observations are unavailable, the sample complexity increases to Ω(ǫ^{−d}).

4.2.2 Expert-guided exploration

In many scenarios, the omniscient follower's actions may not directly reveal the exact state of the world but still provide crucial information. The next example illustrates a simple setting where the follower's response can significantly reduce the sample complexity.

Example 4.9. Let A = B = S^{d−1} and

Θ = {(θ_a, θ_b) ∈ S^{d−1} × S^{d−1} : θ_a · θ_b ≥ ζ}

for some ζ ∈ (0, 1). Consider the parameterized family of functions H_Θ = {h_θ : θ ∈ Θ} where

h_θ(a, b) = ReLU(θ_a · a − ∆) + θ_b · b,

for some ∆ ∈ (0, 1). For simplicity, we will assume that the response observations are noiseless (i.e., σ_b = 0), although the noisy case can be analyzed analogously.

Confidence sets.
The best response is b*_θ ≡ θ_b, again independent of the leader's action. Upon observing b_1 = θ_b, the leader should construct confidence sets Θ_t^(b) = {θ_a ∈ S^{d−1} : θ_a · b_1 ≥ ζ} × {b_1}, while Θ_t^(r) is chosen as in linear UCB. As a result, all subsequent actions the leader takes must fall into

A_1 := {a ∈ A : a · b_1 ≥ ζ}. (10)

This refinement of the action set will reduce the sample complexity, and depending on the size of ζ relative to ∆, the reduction can be significant.

Strong reduction. When 1 − ζ ≤ (1 − ∆)/4, the leader learns that θ_a · b_1 ≥ ζ. In particular, any action a ∈ A_1 must satisfy

θ_a · a = (2 − ∥θ_a − a∥²)/2 ≥ (2 − (∥θ_a − b_1∥ + ∥a − b_1∥)²)/2 ≥ (2 − (2√(2 − 2ζ))²)/2 = 1 − 4(1 − ζ) ≥ ∆, (11)

and thus h(a) = θ_a · a − ∆ + 1 behaves as a linear function within A_1. By playing UCB within A_1, the leader reduces the problem to a linear bandit instance and thus achieves the following regret bound.

Proposition 4.10. Assume 1 − ζ ≤ (1 − ∆)/4 in Example 4.9. UCB achieves

R_UCB(T) ≤ Õ(d√T). (12)

This leads to a sample complexity of Õ(d²/ǫ²), in contrast to the exponential sample complexity exp(O(d log(1/ǫ))) if the responses were unobserved. Information from the follower's response guides the leader's exploration to the well-conditioned part of the action space. Given the Ω(d√T) sample complexity of linear bandits, the upper bound (12) is tight (up to logarithmic terms).

Weak reduction. When ζ is small relative to ∆, the problem does not immediately reduce to a linear bandit, but we have the following improved upper bound.

Proposition 4.11. There exists an algorithm Alg that achieves

R_Alg(T) ≤ O((C_ζ^d T^{d+1})^{1/(d+2)}), (13)

where C_ζ := √(1 − ζ²) ∈ (0, 1).

This bound improves as ζ decreases. The sample complexity is therefore Õ(C_ζ^d ǫ^{−d−2}), a C_ζ^d reduction compared with the original complexity without observing the responses in Corollary 3.4.
Since the reduced problem is still a ReLU bandit, UCB will not be suitable. Instead, (13) can be achieved through discretization of A_1, as in the upper bound of Theorem 3.3.

5 Beyond UCB

Although the UCB algorithm gives a near-optimal rate in most of the above examples, we also provide two cases where UCB fails to achieve the optimal rate. This necessitates a tailored algorithm design in specific settings.

5.1 Nonlinear (polynomial) family

UCB is known to fail to achieve the optimal rate in the case of the polynomial bandit family [Huang et al., 2021], where the reward is a polynomial activation on top of a linear family. We construct an example which utilizes the structure of the polynomial bandit, formally defined below.

Example 5.1 (Polynomial bandit). Consider the convex function f(x) = x^{2k} for some k ∈ ℤ₊. Let

A = B^{d−1}, B = [−1, 1], Θ = B^{d−1} × {1}, (14)

and

φ(a, b) = (2kba, −f*(2kb)), (15)

where f* is the convex conjugate of f. Consider the nonlinearly parameterized family

H_Θ := {h_θ(a, b) = f(θ · φ(a, b)) : θ ∈ Θ}. (16)

By properties of the convex conjugate,

h_θ(a) = f(θ_{−d} · a) = (θ_{−d} · a)^{2k} (17)

with the best response

b*_θ(a) = argmax_{−1≤b≤1} [2kb θ_{−d} · a − f*(2kb)] = f′(θ_{−d} · a)/(2k) = (θ_{−d} · a)^{2k−1} ∈ [−1, 1].

This observation allows us to apply results on polynomial bandits [Huang et al., 2021].

Response-regret structure. Observe the following properties of the best-response function in Example 5.1.

1. The expected reward is a function of the best response, independent of the true parameter. Namely,

h_θ(a) = b*_θ(a)^{2k/(2k−1)}. (18)

This mapping is Lipschitz:

|h_θ(a) − h_θ(a′)| ≤ (2k/(2k−1)) |b*_θ(a) − b*_θ(a′)|, (19)

and furthermore

argmax_{a∈A} b*_θ(a) = θ ∈ argmax_{a∈A} h_θ(a), (20)

with both maxima being 1.

2. The response observation, as a degree-(2k − 1) polynomial, is more informative than the reward observation, a degree-2k polynomial, when the noise levels are the same and θ_{−d} · a is small.
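The closed forms in (17), (18), and the best-response expression can be sanity-checked numerically. The values of d and k below are arbitrary illustrative choices, and θ stands in for θ_{−d}:

```python
import numpy as np

rng = np.random.default_rng(3)

d, k = 4, 2                               # f(x) = x^{2k} = x^4
theta = rng.standard_normal(d)
theta /= 2.0 * np.linalg.norm(theta)      # keep theta inside the unit ball
a = rng.standard_normal(d)
a /= np.linalg.norm(a)                    # action on the unit sphere

z = float(theta @ a)
b_star = z ** (2 * k - 1)                 # closed-form best response
h = z ** (2 * k)                          # expected reward h_theta(a)

# (18): the reward is a fixed function of the response alone
assert abs(h - abs(b_star) ** (2 * k / (2 * k - 1))) < 1e-12

# Cross-check b* by brute force: f*(y) via grid search over x, then
# argmax over b in [-1, 1] of 2kbz - f*(2kb)
xs = np.linspace(-1.5, 1.5, 3001)
f_conj = lambda y: float(np.max(xs * y - xs ** (2 * k)))
bs = np.linspace(-1.0, 1.0, 2001)
b_grid = bs[int(np.argmax([2 * k * b * z - f_conj(2 * k * b) for b in bs]))]
assert abs(b_grid - b_star) < 1e-2
print("closed forms verified")
```

The brute-force maximization agrees with f′(θ·a)/(2k) = (θ·a)^{2k−1}, which is the conjugate-based derivation used in the example.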
Based on these two observations, the leader may view the response b_t as a proxy reward and aim to minimize the proxy regret

R̃(T) := Σ_{t=1}^T (1 − b*_θ(a_t)). (21)

This is consistent with minimizing the true regret R(T), which differs from the proxy regret R̃(T) by at most a constant factor by (19).

Regret bound. Using the response observations exclusively to minimize the proxy regret R̃(T) = Σ_{t=1}^T (1 − b*_θ(a_t)), the leader reduces her task to a polynomial bandit problem with a degree-(2k − 1) polynomial activation function. By (19), we may focus on bounding the proxy regret. Corollary 3.16 from Huang et al. [2021] suggests that

R̃(T) ≤ Õ(√(d^{2k−1} T)), (22)

or equivalently the sample complexity is Õ(d^{2k−1}/ǫ²) for achieving ǫ average proxy regret. The following bound on the true regret follows from (19) and (22).

Proposition 5.2. In Example 5.1, there exists an algorithm Alg, using the response observations exclusively, that achieves

R_Alg(T) ≤ O(√(d^{2k−1} T)). (23)

Proposition 5.2 suggests an Õ(d^{2k−1}/ǫ²) sample complexity. For instance, the leader can achieve this regret with the zeroth-order algorithm proposed in Huang et al. [2021, Algorithm 6].

Remark 5.3 (Lower bound). Since the reward observations have a higher signal-to-noise ratio, we should expect the sample complexity of Example 5.1 to be of the same order as the sample complexity of achieving ǫ average regret in a degree-(2k − 1) polynomial bandit. Huang et al. [2021] show that this is lower bounded by Ω(d^{2k−1}/ǫ²). Thus, (23) is essentially optimal.

Remark 5.4 (Benefit of observing responses). If the leader does not observe the responses, the problem is equivalent to a degree-2k polynomial bandit. The optimal regret without observing the expert's actions will lead to an Õ(d^{2k}/ǫ²) sample complexity. Thus, the response observations contribute to shaving off a factor of d, which can be significant when the dimensionality is high.

Remark 5.5 (Suboptimality of UCB).
Using the traditional eluder UCB algorithm leads to a suboptimal sample complexity of Õ(d^{2k}/ǫ²) when the leader solely uses the response observations. Still, this is a factor-d improvement over what she can achieve with UCB without the response observations.

5.2 Failure of the optimism principle

The next example is adapted from the ReLU bandit in Example 3.2 and shows that optimism-based methods can be dramatically suboptimal in certain problems.

Example 5.6. Let A = B^{d−1}, B = B^{d−1} × [0, 1], and
\[ \Theta = \{ (\theta_{-d}, \theta_d) \mid \theta_{-d} \in B^{d-1}, \ \theta_d = 1 - \Delta \} \tag{24} \]
for some ∆ ∈ (0, 1). Consider the linear family H_{Θ,φ} with
\[ \varphi(a, b) = \|a\| \big( (1 - b_d)\,a, \ b_d - \|b_{-d}\| \big) + \frac{1 - \|a\|}{2} (b_{-d}, 0). \tag{25} \]
For any θ ∈ Θ with θ_{−d} ∈ S^{d−1}, the optimal action for the leader is θ_{−d}, with the follower best responding with (0, 0) and achieving unit expected reward.

When ∥a∥ = 1, this function behaves exactly as in Example 3.2, where b^*_θ(a) = (0, 1) whenever θ_{−d} · a < 1 − ∆; when a = 0, the best response is b^*_θ(0) = (θ_{−d}, b_d). Thus, if the response observations are noiseless, the leader learns the true parameter, and hence the optimal action, in one round by playing a₁ = 0.

However, any optimism-based method such as UCB will not achieve such efficient learning, even when the responses are noiselessly observed. It is straightforward to verify that, for any action a with ∥a∥ < 1, the optimistic reward satisfies
\[ \sup_{\theta \in \Theta} h_\theta(a) < 1. \tag{26} \]
Thus, as long as the confidence set contains some θ with θ_{−d} ∈ S^{d−1}, which holds under our initial condition, optimism causes the leader to take only actions a ∈ S^{d−1}, reducing the problem to the worst-case Example 3.2.

6 Conclusions

We have studied a model of online learning in decentralized cooperative Stackelberg games. We showed that, even with an omniscient follower who always best responds (myopically), the worst-case sample complexity for a linear family can be as large as exp(Θ(d log(1/ǫ))).
This “curse of expertise” highlights the challenge caused by miscoordinated exploration. It also raises the question of how a non-myopic expert follower should respond to the leader's actions (without knowing the leader's exact algorithm) to expedite their learning and maximize their long-term reward.

We considered a UCB-type algorithm that incorporates response observations. A few examples of varying hardness were considered, ranging from efficient learning through imitation and guided exploration to the worst-case linear family example with an exponential sample complexity.

Beyond the examples considered in the paper, there are numerous scenarios where the roles of the leader and the follower are more complex to reason about. This poses unique challenges for both the learning process of the leader and the subsequent analysis of regret, indicating fertile ground for future research. Specifically, our current template of Algorithm 1 requires designing the confidence sets based on the specific response-reward structure of each problem. It remains open to find a general design (or prove the lack thereof) that systematically synthesizes the response and reward observations. A general framework of analysis that provides a unified yet sharp upper bound on the examples would also be valuable.

References

Yasin Abbasi-Yadkori, Dávid Pál, and Csaba Szepesvári. Improved algorithms for linear stochastic bandits. Advances in Neural Information Processing Systems, 24, 2011.

Peter Auer, Nicolò Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2):235–256, 2002.

Yu Bai, Chi Jin, Huan Wang, and Caiming Xiong. Sample-efficient learning of Stackelberg equilibria in general-sum games. Advances in Neural Information Processing Systems, 34:25799–25811, 2021.

Vincent Conitzer and Tuomas Sandholm. Computing the optimal strategy to commit to.
In Proceedings of the 7th ACM Conference on Electronic Commerce, pages 82–90, 2006.

Jinshuo Dong, Aaron Roth, Zachary Schutzman, Bo Waggoner, and Zhiwei Steven Wu. Strategic classification from revealed preferences. In Proceedings of the 2018 ACM Conference on Economics and Computation, pages 55–70, 2018.

Kefan Dong, Jiaqi Yang, and Tengyu Ma. Provable model-based nonlinear bandit and reinforcement learning: Shelve optimism, embrace virtual curvature. Advances in Neural Information Processing Systems, 34:26168–26182, 2021.

Jacques Ferber and Gerhard Weiss. Multi-Agent Systems: An Introduction to Distributed Artificial Intelligence, volume 1. Addison-Wesley, Reading, 1999.

Jerzy Filar and Koos Vrieze. Competitive Markov Decision Processes. Springer Science & Business Media, 2012.

Dylan J. Foster, Sham M. Kakade, Jian Qian, and Alexander Rakhlin. The statistical complexity of interactive decision making. arXiv preprint arXiv:2112.13487, 2021.

Matthias Gerstgrasser and David C. Parkes. Oracles & followers: Stackelberg equilibria in deep multi-agent reinforcement learning. arXiv preprint arXiv:2210.11942, 2022.

Michael A. Goodrich, Alan C. Schultz, et al. Human–robot interaction: A survey. Foundations and Trends® in Human–Computer Interaction, 1(3):203–275, 2008.

Moritz Hardt, Nimrod Megiddo, Christos Papadimitriou, and Mary Wootters. Strategic classification. In Proceedings of the 2016 ACM Conference on Innovations in Theoretical Computer Science, pages 111–122, 2016.

Chien-Ju Ho, Aleksandrs Slivkins, and Jennifer Wortman Vaughan. Adaptive contract design for crowdsourcing markets: Bandit algorithms for repeated principal-agent problems. In Proceedings of the Fifteenth ACM Conference on Economics and Computation, pages 359–376, 2014.

Baihe Huang, Kaixuan Huang, Sham Kakade, Jason D. Lee, Qi Lei, Runzhe Wang, and Jiaqi Yang. Optimal gradient-based algorithms for non-concave bandit optimization.
Advances in Neural Information Processing Systems, 34:29101–29115, 2021.

Hsu Kao, Chen-Yu Wei, and Vijay Subramanian. Decentralized cooperative reinforcement learning with hierarchical information structure. In International Conference on Algorithmic Learning Theory, pages 573–605. PMLR, 2022.

Robert Kleinberg and Tom Leighton. The value of knowing a demand curve: Bounds on regret for online posted-price auctions. In 44th Annual IEEE Symposium on Foundations of Computer Science, pages 594–605. IEEE, 2003.

Jens Kober, J. Andrew Bagnell, and Jan Peters. Reinforcement learning in robotics: A survey. The International Journal of Robotics Research, 32(11):1238–1274, 2013.

John Langford and Tong Zhang. The epoch-greedy algorithm for contextual multi-armed bandits. Advances in Neural Information Processing Systems, 20, 2007.

Niklas Lauffer, Mahsa Ghasemi, Abolfazl Hashemi, Yagiz Savas, and Ufuk Topcu. No-regret learning in dynamic Stackelberg games. arXiv preprint arXiv:2202.04786, 2022.

Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.

Yang Liu and Yiling Chen. A bandit framework for strategic regression. Advances in Neural Information Processing Systems, 29, 2016.

Janusz Marecki, Gerry Tesauro, and Richard Segal. Playing repeated Stackelberg games with unknown opponents. In Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems – Volume 2, pages 821–828, 2012.

Nived Rajaraman, Yanjun Han, Lin Yang, Jingbo Liu, Jiantao Jiao, and Kannan Ramchandran. On the value of interaction and function approximation in imitation learning. Advances in Neural Information Processing Systems, 34:1325–1336, 2021.

Daniel Russo and Benjamin Van Roy. Eluder dimension and the sample complexity of optimistic exploration.
Advances in Neural Information Processing Systems, 26, 2013.

Ahmad El Sallab, Mohammed Abdou, Etienne Perot, and Senthil Yogamani. Deep reinforcement learning framework for autonomous driving. Electronic Imaging, 2017(19):70–76, 2017.

Shai Shalev-Shwartz, Shaked Shammah, and Amnon Shashua. Safe, multi-agent, reinforcement learning for autonomous driving. arXiv preprint arXiv:1610.03295, 2016.

Milind Tambe. Security and Game Theory: Algorithms, Deployed Systems, Lessons Learned. Cambridge University Press, 2011.

Heinrich von Stackelberg. Market Structure and Equilibrium. Springer Science & Business Media, 2010.

Martin J. Wainwright. High-Dimensional Statistics: A Non-Asymptotic Viewpoint, volume 48. Cambridge University Press, 2019.

Chih-Chun Wang, Sanjeev R. Kulkarni, and H. Vincent Poor. Bandit problems with side observations. IEEE Transactions on Automatic Control, 50(3):338–355, 2005.

Michael Wooldridge. An Introduction to Multiagent Systems. John Wiley & Sons, 2009.

Annie Xie, Dylan Losey, Ryan Tolsma, Chelsea Finn, and Dorsa Sadigh. Learning latent representations to influence multi-agent interaction. In Conference on Robot Learning, pages 575–588. PMLR, 2021.

Boling Yang, Liyuan Zheng, Lillian J. Ratliff, Byron Boots, and Joshua R. Smith. Stackelberg MADDPG: Learning emergent behaviors via information asymmetry in competitive games. 2022.

Yaolong Yu, Haifeng Xu, and Haipeng Chen. Learning correlated Stackelberg equilibrium in general-sum multi-leader-single-follower games. arXiv preprint arXiv:2210.12470, 2022.

Kaiqing Zhang, Zhuoran Yang, and Tamer Başar. Multi-agent reinforcement learning: A selective overview of theories and algorithms. Handbook of Reinforcement Learning and Control, pages 321–384, 2021.

Han Zhong, Zhuoran Yang, Zhaoran Wang, and Michael I. Jordan. Can reinforcement learning find Stackelberg-Nash equilibria in general-sum Markov games with myopic followers? arXiv preprint arXiv:2112.13521, 2021.
Banghua Zhu, Stephen Bates, Zhuoran Yang, Yixin Wang, Jiantao Jiao, and Michael I. Jordan. The sample complexity of online contract design. arXiv preprint arXiv:2211.05732, 2022.

A Proofs in Section 3

A.1 Proof of Theorem 3.1

Proof. Consider Example 3.2. The expected reward is given by
\[ h_\theta(a, b) := \theta \cdot \varphi(a, b) = (1 - b)\,\theta_{-d} \cdot a + b(1 - \Delta). \tag{27} \]
Optimizing over b ∈ [0, 1] yields
\[ h_\theta(a) = \max\{ 1 - \Delta, \ \theta_{-d} \cdot a \}. \tag{28} \]
Note that for any a ∈ A such that θ_{−d} · a < 1 − ∆, the best response of the follower is b = 1, yielding an expected reward of 1 − ∆; for any a ∈ A such that θ_{−d} · a ≥ 1 − ∆, the best response of the follower is b = 0, yielding an expected reward of θ_{−d} · a. The optimal joint response a = θ_{−d} and b = 0 achieves the optimal expected reward of ∥θ_{−d}∥ = 1 > 1 − ∆. From the leader's perspective, this now reduces to the problem of a ReLU bandit considered in Dong et al. [2021], since the response provides no information until the average regret falls below ∆. Thus we have
\[ \inf_{\hat{\pi}} \sup_{\theta \in \Theta} R(T) \ge \Omega\big( T^{1 - \frac{1}{d-2}} \big). \]

A.2 Proof of Theorem 3.3

Proof. Let H(ǫ) be a minimal ǫ-covering of H under the metric ∥·∥_∞. Let
\[ A(\epsilon) = \Big\{ \arg\max_{a \in A} \max_{b \in B} h(a, b) \ \Big|\ h \in H(\epsilon) \Big\}, \]
where we break ties arbitrarily when the optimal action is non-unique. Note that we have |A(ǫ)| ≤ |H(ǫ)| ≤ N(ǫ). Let h⋆ be the true reward function. By the definition of a covering, there exists some h_ǫ ∈ H(ǫ) such that ∥h⋆ − h_ǫ∥_∞ ≤ ǫ. Thus we have
\[ R(T) = \sum_{t=1}^{T} E[h^\star(a^*) - h^\star(a_t)] \le \epsilon T + \sum_{t=1}^{T} E[h_\epsilon(a^*) - h_\epsilon(a_t)]. \]
We know that the optimal action for h_ǫ must be inside the set A(ǫ). Thus any worst-case optimal no-regret algorithm on the set A(ǫ) gives a regret of \(\sqrt{|A(\epsilon)| T} \le \sqrt{N(\epsilon) T}\). This gives
\[ R(T) \le \epsilon T + \sqrt{N(\epsilon) T}. \]
Taking the infimum over ǫ finishes the proof.

B Proofs in Section 4

B.1 Proof of Lemma 4.5

Proof. Recall the notation from Example 4.4: let θ^{(b)}_t = Π_A(θ̂_t) for t ≥ 2, with θ̂_t := (1/(t−1)) Σ_{i=1}^{t−1} b̂_i.
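The averaging estimator just recalled can be simulated directly. The following sketch is our illustration, not the paper's: the dimension, horizon, and noise level are arbitrary, the noise is taken Gaussian as a special case of sub-Gaussian, and the projection Π_A is omitted since, by the triangle-inequality step of the proof, it changes the error by at most a factor of two. The check is that the averaging error concentrates at the σ_b·sqrt(d/t) rate the proof establishes.

```python
import numpy as np

rng = np.random.default_rng(0)
d, t, sigma_b, trials = 16, 400, 0.5, 200   # arbitrary illustration sizes

theta = np.zeros(d)   # WLOG: only the zero-mean noise drives the error
errs = np.empty(trials)
for i in range(trials):
    # hat(b)_i = theta + w_i with i.i.d. Gaussian noise of scale sigma_b
    responses = theta + sigma_b * rng.standard_normal((t, d))
    theta_hat = responses.mean(axis=0)      # the averaging estimator
    errs[i] = np.linalg.norm(theta_hat - theta)

rate = sigma_b * np.sqrt(d / t)             # the sigma_b * sqrt(d/t) scale
print(errs.mean(), rate)                    # should be the same order
```

The empirical error tracks σ_b·sqrt(d/t) up to a dimension-free constant, matching the high-probability bound the covering argument below makes rigorous.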
The first round incurs at most a constant regret and can be ignored. It suffices to show that, with probability at least 1 − δ,
\[ \|\theta - \theta^{(b)}_t\| \le \frac{\alpha_t}{\sqrt{t}} \tag{29} \]
for α_t = Θ(σ_b \sqrt{d + \log(T/δ)}).

First, we bound the distance between θ̂_t and θ. By our assumption,
\[ \|\hat{\theta}_t - \theta\| = \Big\| \frac{1}{t-1} \sum_{i=1}^{t-1} w_i \Big\|, \]
where w₁, …, w_{t−1} are i.i.d. zero-mean σ_b-sub-Gaussian. We proceed using a covering argument. Construct U ⊆ S^{d−1} such that
\[ \inf_{v \in S^{d-1}} \sup_{u \in U} u \cdot v \ge \frac{1}{2}. \tag{30} \]
Note that ∥u − v∥ = \sqrt{2 − 2 u·v} for u, v ∈ S^{d−1}. Hence, equivalently, we may choose U as a minimal 1-covering of S^{d−1} in the Euclidean metric. Then
\[ \log |U| \le \log N^{\mathrm{int}}(S^{d-1}, 1, \|\cdot\|) \le \log M(B^d, 1, \|\cdot\|) = \Theta(d), \tag{31} \]
where N^{int} and M denote the internal covering number and the packing number of the space under a given metric. The choice of U ensures that
\[ \|w\| \le 2 \sup_{u \in U} u \cdot w \tag{32} \]
for all w ∈ R^d, and ignoring the constant factor, we may focus on upper bounding \(\sup_{u \in U} \sum_{i=1}^{t-1} u \cdot w_i\).

For each choice of u ∈ U, let Z_{u,i} = u · w_i, so that Z_{u,1}, …, Z_{u,t−1} are i.i.d. zero-mean σ_b-sub-Gaussian by the definition of sub-Gaussian random vectors. By Hoeffding's inequality for sub-Gaussian random variables, we have
\[ P\Big( \sum_{i=1}^{t} Z_{u,i} > x \Big) \le \exp\Big( -\frac{x^2}{2 t \sigma_b^2} \Big) \tag{33} \]
for all x > 0. Applying a union bound over U and using (32) gives
\[ P\Big( \Big\| \sum_{i=1}^{t} w_i \Big\| \ge 2x \Big) \le P\Big( \sup_{u \in U} \sum_{i=1}^{t} Z_{u,i} \ge x \Big) \le |U| \exp\Big( -\frac{x^2}{2 t \sigma_b^2} \Big). \tag{34} \]
Choosing x = σ_b \sqrt{2t \log(|U| T / δ)} ≲ σ_b \sqrt{t (d + \log(T/δ))} ensures that, by another union bound over t ∈ [T],
\[ \|\hat{\theta}_t - \theta\| \lesssim \sigma_b \sqrt{ t^{-1} \big( d + \log\tfrac{T}{\delta} \big) } \tag{35} \]
with probability at least 1 − δ. By the triangle inequality and the definition of the projection,
\[ \|\theta^{(b)}_t - \theta\| \le \|\theta^{(b)}_t - \hat{\theta}_t\| + \|\hat{\theta}_t - \theta\| \le 2 \|\hat{\theta}_t - \theta\| \lesssim \sigma_b \sqrt{ t^{-1} \big( d + \log\tfrac{T}{\delta} \big) } \tag{36} \]
with the same probability. This gives (29) and completes the proof.

B.2 Proof of Proposition 4.6

Proof.
We condition on the validity of the confidence sets, which holds with probability at least 1 − δ by our choice of {α_t}_{t∈[T]}.

UCB always chooses a_t in the confidence set Θ_t, whose radius is of order O(σ_b \sqrt{t^{-1}(d + \log(T/δ))}). When θ⋆ ∈ Θ_t, we have ∥a_t − θ⋆∥ ≲ σ_b \sqrt{t^{-1}(d + \log(T/δ))}. Since both a_t and θ⋆ are unit vectors, we have
\[ R_{\mathrm{UCB}}(T) \le 2\delta T + \sum_{t=1}^{T} \big( 1 - \theta^\star \cdot a_t \big) = 2\delta T + \frac{1}{2} \sum_{t=1}^{T} \|\theta^\star - a_t\|^2 \le 2\delta T + 2 + \sum_{t=2}^{T} \frac{\sigma_b^2}{t} \Big( d + \log\frac{T}{\delta} \Big) = O\Big( \delta T + \sigma_b^2 \log T \cdot \Big( d + \log\frac{T}{\delta} \Big) \Big), \]
where the term 2δT bounds the contribution of the event that the confidence sets fail to all be valid. Choosing δ = 1/T gives the desired bound.

B.3 Proof of Proposition 4.10

Proof. After the first round, the leader's task reduces to a linear bandit with action space A₁: only actions within A₁ will be played, and the reward is linear in this region. As is well known for linear bandits (e.g., Russo and Van Roy [2013]), with probability 1 − δ, the regret in this linear stage (i.e., excluding the first round) is upper bounded by
\[ 2\delta T + O\Big( \sqrt{ d \log T \cdot (d \log T + \log \delta^{-1}) \cdot T } \Big). \]
The first round adds at most a constant to this and can be ignored. By choosing δ = T^{−1}, we have
\[ R_{\mathrm{UCB}}(T) \le \tilde{O}(d \sqrt{T}). \tag{37} \]

B.4 Proof of Proposition 4.11

Proof. Let Θ₁ = {θ_a ∈ S^{d−1} | θ_a · b₁ ≥ ζ} × {b₁}, and denote the true parameter by θ⋆ = (θ⋆_a, θ⋆_b). By our assumption on the problem structure, we have θ⋆_a ∈ Θ^{(b)}.

As in the proof of Theorem 3.3, let Θ(ǫ) be a minimal ǫ-covering of Θ₁ in the Euclidean metric, with ǫ > 0 to be specified later. In particular, there is some θ̃_a ∈ Θ(ǫ) with ∥θ̃_a − θ⋆_a∥ ≤ ǫ. Let A(ǫ) = {arg max_{a∈A} ReLU(θ_a · a − ∆) | θ_a ∈ Θ(ǫ)}, where we break ties arbitrarily when the optimal action is non-unique. Note that |A(ǫ)| ≤ |Θ(ǫ)| = N(Θ₁, ǫ, ∥·∥).

Now, let the leader play UCB on the discrete action set A(ǫ) after the first round.
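The discretize-then-UCB step just described can be sketched with a generic finite-armed UCB routine over the covering points. This is our illustration, not the paper's algorithm: the candidate mean rewards, noise level, and confidence width are arbitrary choices standing in for the (unknown) rewards of the actions in A(ǫ).

```python
import numpy as np

def ucb_on_cover(means, T, sigma=0.1, seed=0):
    """Vanilla UCB over a finite candidate set, as in discretize-then-UCB.

    means[i] is the (unknown to the learner) expected reward of candidate i;
    the learner only observes noisy rewards and returns its cumulative regret.
    """
    rng = np.random.default_rng(seed)
    n = len(means)
    counts = np.zeros(n)
    sums = np.zeros(n)
    best = max(means)
    regret = 0.0
    for t in range(1, T + 1):
        if t <= n:                          # play each candidate once
            i = t - 1
        else:                               # then maximize mean + bonus
            bonus = sigma * np.sqrt(2 * np.log(T) / counts)
            i = int(np.argmax(sums / counts + bonus))
        r = means[i] + sigma * rng.standard_normal()
        counts[i] += 1
        sums[i] += r
        regret += best - means[i]
    return regret

means = np.linspace(0.2, 0.9, 8)   # stand-in rewards for 8 covering points
reg = ucb_on_cover(means, T=5000)
print(reg)                          # far below the ~1750 of uniform play
```

With N = N(Θ₁, ǫ, ∥·∥) candidates, the standard finite-armed analysis of such a routine yields the \(\sqrt{NT}\)-type term that the proof trades off against the ǫT discretization error.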
The regret satisfies
\[ R(T) \le 1 + \sum_{t=2}^{T} E\big[ h^\star(a^*) - h^\star(a_t) \big] \le 1 + T \cdot E\big[ h^\star(a^*) - h^\star(\tilde{a}^*) \big] + \sum_{t=1}^{T} E\big[ h^\star(\tilde{a}^*) - h^\star(a_t) \big], \tag{38} \]
where a* = θ⋆_a and ã* ∈ arg max_{a∈A(ǫ)} h⋆(a). Since h⋆(ã*) ≥ h⋆(θ̃_a) ≥ h⋆(a*) − ǫ by our choice of θ̃_a and A(ǫ), the second term in (38) is at most ǫT. The third term, the regret of UCB on A(ǫ), is bounded by O(\sqrt{N(Θ₁, ǫ, ∥·∥) · T}) in expectation.

It remains to bound N(Θ₁, ǫ, ∥·∥). Note that for any θ_a, θ′_a ∈ Θ₁, we have
\[ \theta_a \cdot \theta'_a = (\theta_a \cdot b_1)(\theta'_a \cdot b_1) + \big( \theta_a - (\theta_a \cdot b_1) b_1 \big) \cdot \big( \theta'_a - (\theta'_a \cdot b_1) b_1 \big) \ge \zeta^2 - \|\theta_a - (\theta_a \cdot b_1) b_1\| \, \|\theta'_a - (\theta'_a \cdot b_1) b_1\| \ge \zeta^2 - (1 - \zeta^2) = 2\zeta^2 - 1. \]
Equivalently, ∥θ_a − θ′_a∥ = \sqrt{2 − 2θ_a·θ′_a} ≤ 2\sqrt{1 − ζ²} = 2C_ζ. Thus, the covering number of Θ₁ is upper bounded by (K C_ζ / ǫ)^d for some absolute constant K, which yields a regret bound of 1 + ǫT + O(\sqrt{K^d C_ζ^d T / ǫ^d}). Choosing ǫ ≍ (K C_ζ)^{d/(d+2)} T^{−1/(d+2)} reduces this upper bound to O(C_ζ^{d/(d+2)} T^{(d+1)/(d+2)}), as desired.

C Proofs in Section 5

C.1 Proof of Proposition 5.2

Proof. Let the leader run the phased elimination algorithm of Huang et al. [2021, Algorithm 6] using the response b*_θ(a_t) as the proxy reward to maximize. This proxy reward, in expectation, is a homogeneous polynomial of degree 2k − 1. By Corollary 3.16 in Huang et al. [2021], the algorithm achieves
\[ \tilde{R}(T) \le \tilde{O}\big( \sqrt{d^{2k-1} T} \big), \tag{39} \]
where \tilde{R}(T) = Σ_{t=1}^{T} 1 − b*_θ(a_t) is the proxy regret measured with respect to the proxy reward (i.e., the absolute response). Note that the reward is maximized exactly when the proxy reward is maximized. Thus, the Lipschitz property (19) gives
\[ R(T) \le \frac{2k}{2k-1} \, \tilde{R}(T) \le \tilde{O}\big( \sqrt{d^{2k-1} T} \big). \]
(40)
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content=' Differing from the tradi- tional formulation of repeated Stackelberg games, we assume the follower is omniscient, with full knowledge of the true reward, and that they always best-respond to the leader’s actions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content=' We analyze the sample complexity of regret minimization in this repeated Stackelberg game.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content=' We show that depending on the reward structure, the existence of the omniscient follower may change the sample complexity drastically, from constant to exponential, even for linear cooperative Stackelberg games.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content=' This poses unique challenges for the learning process of the leader and the subsequent regret analysis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content=' 1 Introduction The multi-agent learning problem [Ferber and Weiss, 1999, Wooldridge, 2009, Filar and Vrieze, 2012, Zhang et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content=', 2021] has received significant attention reflecting its wide variety of real-world applications, including autonomous driving Shalev-Shwartz et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content=' [2016], Sallab et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content=' [2017] and human-robot interaction Kober et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content=' [2013], Lillicrap et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content=' [2015], Goodrich et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content=' [2008], Xie et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content=' [2021].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content=' In a multi-agent system, it is natural to assume that each agent possesses a dif- ferent set of information due to its different viewpoint and history of actions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content=' This phenomenon is commonly referred to as the property of information asymmetry Yang et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content=' [2022].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content=' Such information asymmetry poses challenges to the coordination and cooperation between learning agents.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content=' In this paper, we study how the information asymmetry affects the sample complexity of learning a two-player decentralized cooperative repeated Stackelberg game, with a focus on the setting when the follower is omniscient and myopic, and always best-responds to the leader’s actions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content=' Consider an illustrative example in human-robot interaction where a robot is required to collaborate with a human to achieve some shared objective.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content=' This can be formulated as a repeated Stackelberg game where the interactions between human and robot happen in multiple rounds, and the human is an omniscient expert who knows the exact target and how to achieve it.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content=' In each round, the robot, as the leader who hopes to learn the world model and human behavior from scratch, first takes some action.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content=' After seeing the robot’s action, the human, as an expert follower who possesses perfect information about the world, always best-responds to the robot’s action to maximize their reward.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content=' The robot hopes to use as few as possible interactions to learn the world model and human behavior, and eventually find the optimal action that maximizes a shared reward.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content=' ∗University of California, Berkeley.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content=' Email: {gengzhao,banghua}@berkeley.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content='edu,jiantao@eecs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content='berkeley.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content='edu, jordan@cs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content='berkeley.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content='edu †The two authors contributed equally to this work.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content=' 1 Concretely, during each round t of the interaction, the leader first plays an action at ∈ A, and the follower plays another action bt ∈ B upon (perfectly) observing at.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content=' We assume that the two players share a reward, rt = h⋆(at, bt) + zt, where zt ∈ R is some zero-mean sub- Gaussian noise, h⋆ belongs to a family H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content=' We also assume that the follower has full knowledge of the reward and always best responds with bt ∈ arg maxb∈B h⋆(at, b).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content=' However, the leader does not know h⋆ and can only explore via taking actions at and making inferences from past observations (a1, b1, r1), · · · , (at−1, bt−1, rt−1).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content='1 We are interested in providing tight bound for the Stackelberg regret, defined as R(T) = max a∈A E � T � t=1 � max b∈B h⋆(a, b) − max bt∈B h⋆(at, bt) �� .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content=' The Stackelberg regret characterizes the gap between the reward achieved from the optimal leader action and the reward from the actual leader action at.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content=' Compared with the traditional bandit problem, the extra observation of bt can be viewed as side information accompanying the usual action-reward pair.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content=' Depending on how the function family H and side information b are designed, the complexity of learning for the leader may vary.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content=' Here we briefly summarize several illustrative examples where the follower may help or harm the leader’s learning process.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content=' We will present a general formalization that encompasses these examples in the next section.' 
1. Curse of expertise. Imagine that in a driving system, the self-driving vehicle (leader) and the human driver (follower) work together to avoid collisions. For most of the aggressive actions the leader takes, the final reward for non-collision is high, since the human driver will consistently exert effort to evade the self-driving vehicle in order to prevent collisions. From the leader's point of view, aggressive actions lead to similar outcomes as safe actions. The expertise of the human prevents the leader from learning from failure cases.

2. Imitation learning. Consider an assembly robot (leader) that learns to move goods to a destination with a human expert (follower). This can be modeled by the robot choosing a drop-off location, from which the human expert continues to the correct destination. In this simple example, the robot and the human expert cooperate in a "linear" fashion: the expert can complete whatever the robot leaves undone, and upon observing the expert's move the robot should simply imitate the behavior of the human expert in the future. This corresponds to an "imitation-based" interaction that can greatly accelerate the learning process.

3. Expert-guided learning. In most cases, the self-driving vehicle may have some target that is similar but not exactly the same as the human driver's. For example, they both aim to avoid collisions while heading to different targets. In this case, purely imitation-based learning will fail. But the self-driving vehicle can still glean good driving standards from the human driver. With the extra observation of the human driver's behavior, the self-driving vehicle can learn much faster.

In this paper, we abstract and formalize these three scenarios into a simple linear Stackelberg game and analyze the sample complexity of this game. We briefly overview our main results in the next section.

[1] For simplicity we assume in the introduction that the leader can see b_1, ..., b_{t-1} without noise. Later we generalize to the case when the observed b_t is also noisy.

1.1 Main results

Contrary to the traditional literature on linear bandits, we show that the worst-case sample complexity for achieving ǫ-Stackelberg regret is at least exponential even when h⋆ belongs to the linear family H_φ = {θ · φ(a, b)}.
The hard instance corresponds to the "curse of expertise" example discussed above, where the follower's best response hurts the observation and thus harms the whole learning process.

Theorem 1.1 (Curse of expertise, informal). There exists some φ such that for any algorithm, we can find some h⋆ ∈ H_φ with the regret being Ω(T^{(d−3)/(d−2)}).

This shows that the leader needs an exponential number of samples to learn a good policy even when the reward is linear. We also present an upper bound O(T^{(d+1)/(d+2)}) for linear rewards in Theorem 3.3. On the other hand, the side information b_t can also greatly improve the sample complexity when the linear family is structured. We provide an Upper Confidence Bound (UCB) based algorithm [Auer et al., 2002] that leads to an improved bound in this setting. In particular, we recover the rate for imitation learning when the leader can simply mimic the behavior of the follower.

Theorem 1.2 (Imitation learning, informal). There exists some φ such that for any h⋆ ∈ H_φ, when b_t is observed, the leader can achieve regret O(log^2(T)) by imitating the follower's behavior. However, when b_t is not observed, the regret is Θ(√T).

Similarly, we can also design cases where observing b_t helps reduce the problem to a traditional linear bandit, while not observing b_t suffers from exponential sample complexity.

Theorem 1.3 (Expert-guided, informal). There exists some φ such that for any h⋆ ∈ H_φ, when b_t is observed, the leader can achieve regret O(√T). However, when b_t is not observed, the regret is Ω(T^{(d−3)/(d−2)}).

In addition to these three examples, we discuss more complicated scenarios where UCB fails, and we show that a careful analysis is necessary to achieve a near-optimal rate. In particular, we establish such a rate for polynomial bandits, where the best response corresponds to a lower-degree polynomial, which helps improve the rate when the noise levels for the reward and the observed follower behavior are similar.

Theorem 1.4 (Polynomial bandit, informal). There exists a family of 2k-degree polynomials such that the regret is Θ(√(d^{2k−1} T)) when b_t is observed, and Θ(√(d^{2k} T)) when b_t is not observed.
1.2 Related work

Decentralized Stackelberg games. The problem of repeated Stackelberg games has been studied extensively [von Stackelberg, 2010, Marecki et al., 2012, Lauffer et al., 2022, Kao et al., 2022], in a standard setting where the leader leads and the myopic follower follows with its best response for the current round. Kao et al. [2022] and Lauffer et al. [2022] study a similar setting to ours, in which a leader and a follower interact through a cooperative Stackelberg game that comprises a two-stage bandit problem. However, Kao et al. [2022] restrict their focus to the tabular case, where both A and B are finite and the reward h⋆ is uncorrelated across different actions (a, b). They also assume that both the leader and the agent run regret-minimization algorithms independently. They show that the classic upper confidence bound (UCB) algorithm for the multi-armed bandit problem can be used by the leader and the agent, respectively, to achieve asymptotically optimal performance (i.e., no regret). However, it is unclear whether such results generalize to bandits with function approximation and to the case of omniscient agents. Indeed, our results show that the general case (or even just the linear case) is not always statistically tractable. Note also that Lauffer et al. [2022] show that the regret can depend exponentially on the dimension of the agent's utility. Other examples of Stackelberg games include Stackelberg security games [Conitzer and Sandholm, 2006, Tambe, 2011], strategic learning [Hardt et al., 2016, Dong et al., 2018, Liu and Chen, 2016], dynamic task pricing [Kleinberg and Leighton, 2003], and online contract design [Ho et al., 2014, Zhu et al., 2022]. The problem of online learning in contract theory considers a decentralized general-sum Stackelberg game with omniscient agents. It focuses on a special case where the rewards for the leader and the agent are both linear. It is shown in Zhu et al. [2022] that one has to pay exponential sample complexity in this setting to achieve small regret in the worst case.

Centralized Stackelberg games. Centralized Stackelberg games are also well studied in the literature [Zhong et al., 2021, Bai et al., 2021, Gerstgrasser and Parkes, 2022, Yu et al., 2022], where the machine learning algorithm has control over both the leader and the follower. Bai et al. [2021] consider the repeated Stackelberg game where both the leader and the agent learn their optimal actions (a Stackelberg equilibrium) from samples. However, they assume a central controller that can determine the actions of both the leader and the agent. Moreover, they rely on an assumption of a bounded gap between the optimal response and an ǫ-approximate best response. In contrast, in our framework, we assume that the agent's utility is unknown and that the agent always takes the best response.

Bandits with side information. There has been significant effort in studying bandits with side information [Wang et al., 2005, Langford and Zhang, 2007, Foster et al., 2021]. Such side information is generally assumed to be available before a decision. Foster et al. [2021] also consider the case when an extra observation is available after taking the actions.
However, they mainly focus on the setting of reinforcement learning, where the extra observation is the trajectory. Although our observation of the follower's behavior can also be viewed as side information, it also alters the reward in the Stackelberg game, which changes the structure of the multi-agent problem.

2 Formulation

We consider a two-player cooperative Stackelberg bandit game with an omniscient follower. Let A ⊆ R^{d_1} and B ⊆ R^{d_2} be compact sets. Up to a scaling factor, we will assume that A and B reside inside the unit ball centered at the origin. During each round t ∈ [T] of interaction, the leader plays an action a_t ∈ A, and the follower plays b_t ∈ B upon (perfectly) observing a_t. The two players both receive a reward r_t = h⋆(a_t, b_t) + z_t, where z_t ∈ R is zero-mean σ_r-sub-Gaussian and independent of all past events. We will make the realizability assumption that h⋆ belongs to a (known) family H of real-valued functions on B^{d_1} × B^{d_2}. As is common in the study of bandits, we assume that the reward function is bounded, i.e., there exists C ∈ (0, ∞) such that 0 ≤ h ≤ C for all h ∈ H. We assume C = 1 throughout the paper unless stated otherwise. We will assume that the follower, modeled after an expert human player, has full knowledge of the game and can always best respond with an optimal action b_t ∈ argmax_{b∈B} h⋆(a_t, b). The leader then makes a noisy observation of b_t, given by b̂_t = b_t + w_t, where w_t ∈ R^{d_2} is zero-mean σ_b-sub-Gaussian (e.g., component-wise σ_b-sub-Gaussian with independent zero-mean coordinates) and independent of all past events.
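As a concrete illustration, one round of this interaction protocol can be sketched as follows. The reward function, the follower's finite action grid, and the noise levels below are illustrative assumptions, not part of the paper's model; only the protocol (leader acts, follower best responds, shared noisy reward, noisy observation of the response) follows the text above.

```python
import numpy as np

rng = np.random.default_rng(0)
d1, d2 = 3, 2
sigma_r, sigma_b = 0.1, 0.05   # noise levels for the reward and the observation

def h_star(a, b):
    # Illustrative bounded reward: the paper only assumes 0 <= h* <= 1
    # for some h* in a known family H.
    return 0.5 + 0.5 * np.tanh(a @ np.ones(d1) - b @ np.ones(d2))

def best_response(a, b_grid):
    # Omniscient follower: b_t in argmax_b h*(a_t, b), here over a finite grid.
    rewards = [h_star(a, b) for b in b_grid]
    return b_grid[int(np.argmax(rewards))]

def play_round(a, b_grid):
    b = best_response(a, b_grid)                     # follower best responds
    r = h_star(a, b) + sigma_r * rng.normal()        # shared noisy reward r_t
    b_hat = b + sigma_b * rng.normal(size=b.shape)   # leader's noisy view of b_t
    return r, b_hat

b_grid = [np.array(v) for v in ([0.0, 0.0], [1.0, 0.0], [0.0, 1.0])]
a = np.zeros(d1)                                     # leader's action a_t
r, b_hat = play_round(a, b_grid)
```

Note that the leader sees only r and b̂_t, never h⋆ itself, which is what makes the learning problem nontrivial.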
For convenience, we denote the set of best responses to the leader's action a, when the ground-truth reward function is h, by b∗_h(a). Denote h̄(a) := max_{b∈B} h(a, b). The optimal action, unbeknownst to the leader, is denoted a∗ := argmax_{a∈A} h̄⋆(a). The leader's objective is to minimize the regret during T rounds of interactions, defined as

R(T) = max_{a∈A} E[ Σ_{t=1}^T ( h̄⋆(a) − h̄⋆(a_t) ) ].    (1)

We will also focus on the sample complexity of achieving low (average) regret; that is, for some ǫ, δ ∈ [0, 1], the minimal T ∈ N such that R(T) ≤ ǫT.

Notations. We use calligraphic letters for sets and operators, e.g., A. Given a set A, we write |A| for the cardinality of A. B^d and S^{d−1} denote the unit ball and the unit sphere, both centered at the origin, in d-dimensional Euclidean space. Vectors are assumed to be column vectors, except for probability and measure vectors. For a vector v ∈ R^d and an integer i ∈ N, we use v_i to denote the i-th element of v, and v_{−i} to denote the vector of all elements in v except for v_i. For two n-dimensional vectors x and y, we use x · y = x⊤y to denote their inner product. We write f(x) = O(g(x)) or f(x) ≲ g(x) if there exist some positive real number M and some x_0 such that |f(x)| ≤ M g(x) for all x ≥ x_0. We use Õ(·) to denote the big-O notation ignoring logarithmic factors. We write f(x) = Ω(g(x)) or f(x) ≳ g(x) if there exist some positive real number M and some x_0 such that |f(x)| ≥ M g(x) for all x ≥ x_0. We write f(x) = Θ(g(x)) if we have both f(x) = O(g(x)) and f(x) = Ω(g(x)). We use ∥·∥_p to denote the ℓ_p norm for p ∈ (0, ∞], with ∥·∥ denoting the Euclidean (ℓ_2) norm ∥·∥_2.

Parameterized family. In subsequent discussions, we will consider the parameterized case when H admits a parameterization over a compact parameter space Θ. The class is denoted by H_Θ = {h_θ | θ ∈ Θ}. When the parameterization is linear, that is,

h_θ(a, b) = θ · φ(a, b)    (2)

for some feature function φ : A × B → B^d, we will denote the class by H_{Θ,φ}. We denote the true parameter by θ⋆. For instance, when A and B are the sets of standard basis vectors in R^{|A|} and R^{|B|} with φ(a, b) = ab⊤ and θ is bounded in R^{|A|×|B|}, we recover the tabular-case model of Kao et al. [2022] with finite action sets. In general, however, we will focus on cases with infinite action sets.

3 Linear Stackelberg games: Curse of expertise

In this section, we study the sample complexity of learning in linear Stackelberg games, where the family of rewards is restricted to H_{Θ,φ} for some given Θ and φ.

3.1 An exponential lower bound

It is well known that the regret for traditional linear bandits grows as Θ(d√T) [Abbasi-Yadkori et al., 2011]. In the case of a linear Stackelberg game, we present a worst-case lower bound on the regret that is exponential in the dimensionality for the linear family. This suggests that the leader cannot learn the task well unless in possession of an exponential number of samples, even when we restrict to linear Stackelberg games.
Assume the leader makes perfect observations of the follower's responses (i.e., σ_b = 0). We have the following lower bound.

Theorem 3.1. For any d ≥ 3, there exists some φ such that, for any algorithm that the leader runs, one can find some instance with h_θ ∈ H_{Θ,φ} such that

R(T) ≳ T^{(d−3)/(d−2)}.    (3)

In other words, the sample complexity for achieving ǫ (average) regret is at least Ω((1/ǫ)^{d−2}). The proof is detailed in Appendix A.1.
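The step from the regret bound (3) to the stated sample complexity is a one-line computation: the average regret T^{(d−3)/(d−2)}/T = T^{−1/(d−2)} drops to ǫ only once T ≥ (1/ǫ)^{d−2}. A minimal numerical check of that algebra (the specific values of d and ǫ are arbitrary):

```python
# Lower bound R(T) >~ T^{(d-3)/(d-2)} gives average regret T^{-1/(d-2)},
# so average regret <= eps forces T >= (1/eps)^{d-2} rounds.
def min_rounds(eps, d):
    # Smallest T (up to constants) at which the lower-bound average
    # regret T^{-1/(d-2)} falls to eps.
    return (1.0 / eps) ** (d - 2)

d, eps = 5, 0.1
T = min_rounds(eps, d)
avg_regret = T ** (-1.0 / (d - 2))
assert abs(avg_regret - eps) < 1e-9  # at T = (1/eps)^{d-2}, average regret = eps
```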
The worst-case instance, presented below, can be reduced to a ReLU bandit problem, which is known to suffer from exponential sample complexity [Dong et al., 2021].

Example 3.2. Let A = B^{d−1}, B = [0, 1], and Θ = {θ | θ_{−d} ∈ S^{d−2}, θ_d = 1 − ∆} for some ∆ ∈ (0, 1). Let the feature function be φ(a, b) = ((1 − b)a, b). One can verify that in this case, one has

h̄_θ(a) = max{1 − ∆, θ_{−d} · a}.    (4)

Thus when a is chosen far from θ_{−d}, the reward will remain constant.

Theorem 3.1 is no mystery mathematically: the best response may destroy linearity for the leader's observations, imposing a toll.
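The closed form (4) can be checked numerically: with φ(a, b) = ((1 − b)a, b), the reward h_θ(a, b) = (1 − b)(θ_{−d} · a) + b(1 − ∆) is linear in b, so the follower's best response sits at an endpoint of [0, 1], and maximizing over b recovers max{1 − ∆, θ_{−d} · a}. The dimension, ∆, and the sampled actions below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
delta = 0.3
theta_rest = rng.normal(size=d - 1)
theta_rest /= np.linalg.norm(theta_rest)   # theta_{-d} on the unit sphere S^{d-2}
theta = np.append(theta_rest, 1.0 - delta) # theta_d = 1 - Delta

def h(a, b):
    # h_theta(a, b) = theta . phi(a, b) with phi(a, b) = ((1 - b) a, b).
    return theta @ np.append((1.0 - b) * a, b)

b_grid = np.linspace(0.0, 1.0, 1001)       # includes both endpoints of B = [0, 1]
for _ in range(100):
    a = rng.normal(size=d - 1)
    a /= max(1.0, np.linalg.norm(a))       # keep a inside the unit ball A
    h_bar = max(h(a, b) for b in b_grid)   # follower's best response over b
    # Eq. (4): the maximized reward equals max{1 - Delta, theta_{-d} . a}.
    assert np.isclose(h_bar, max(1.0 - delta, theta_rest @ a))
```

Whenever θ_{−d} · a < 1 − ∆, the follower plays b = 1 and the leader sees the constant reward 1 − ∆, which is exactly the flat region that hides information.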
Conceptually, however, the message from the theorem is striking: it highlights a "curse of expertise", i.e., the potential difficulty of learning with an expert on a decentralized bandit learning task with a large action space. From the classic single-agent bandit learning perspective, the task the two agents aim to solve is straightforward: a linear bandit on the action space φ(A, B). In other words, if the expert follower lets the novice leader control the choice of b, the average regret would steadily decrease at a rate of Õ(d√T). On the other hand, with a myopic focus, the follower's expertise in best responding ironically results in a significantly higher regret, as it deprives the learner of the ability to explore. In the context of autonomous driving, for example, this can manifest in scenarios where the autonomous vehicle takes a poor action (e.g., an aggressive lane change) yet other vehicles or pedestrians immediately respond by slowing down or steering away to avoid a possible collision, thereby hiding the potential negative consequences of the action. The lack of coordination and the constant best response from the follower, both common in practice, make it hard for the leader to efficiently learn the reward landscape or improve their current policy.

3.2 An exponential upper bound

For any class H of reward functions on a pair of actions (a, b), an upper bound on the sample complexity (and regret) can be obtained using a covering argument.

Theorem 3.3. Let N(ε) = N(H, ε, ∥·∥∞) denote the ℓ∞ covering number of H with radius ε > 0.
Then we can achieve

    R(T) ≲ inf_{ε>0} { εT + √(N(ε)T) }.    (5)

To achieve this, simply compute an ε-covering of H and let the leader play no-regret algorithms on the ε-covering set. Note that although the covering is constructed for pairs of actions (a, b) ∈ A_ε × B_ε, it suffices for the leader to run no-regret algorithms on the actions A_ε. The detailed algorithm and proof are given in Appendix A.2. This upper bound is achieved when the leader does not even utilize the observations of the follower's responses. Indeed, in the worst case (e.g., in Example 3.2), the responses will not provide information.
As a corollary, in the linear regime with H_{Θ,φ}, the covering number is N(ε) = N(Θ, ε, ∥·∥) ≤ exp(O(d log(1/ε))) Wainwright [2019]. Choosing ε ≍ T^{−1/(d+2)}, Theorem 3.3 reduces to the following upper bound in the linearly parameterized case.

Corollary 3.4. In the linear case, we can achieve R(T) ≲ T^{(d+1)/(d+2)}.

In other words, the sample complexity for achieving average regret equal to ε is upper bounded by Õ((1/ε)^{d+2}). This upper bound is agnostic to any structural property of the feature function φ, such as smoothness or even continuity.

4 UCB with side observations

Although the worst-case sample complexity for linear Stackelberg games is exponential, it is possible to obtain a fine-grained analysis and an improved rate for the family H_{Θ,φ} when φ is better structured.
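To see where the exponent (d+1)/(d+2) comes from: with N(ε) ≈ ε^{−d}, the two terms of (5) are εT and √(ε^{−d}T), and ε ≍ T^{−1/(d+2)} equates their exponents in T. A quick exact-arithmetic check of the exponents:

```python
from fractions import Fraction

# Exponents in T of the two terms of (5) under eps = T^{-1/(d+2)} and N(eps) = eps^{-d}.
for d in range(1, 10):
    eps_exp = Fraction(-1, d + 2)          # eps = T^{eps_exp}
    lin_term = 1 + eps_exp                 # exponent of eps * T
    cov_term = (-d * eps_exp + 1) / 2      # exponent of sqrt(eps^{-d} * T)
    assert lin_term == cov_term == Fraction(d + 1, d + 2)
print("both terms scale as T^{(d+1)/(d+2)}, matching Corollary 3.4")
```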
A natural choice of algorithm for the leader is some variant of UCB that incorporates observations of the follower's actions. In this section, we will describe a general recipe for a family of UCB algorithms to incorporate the side information, as well as the challenge in their design.

4.1 Algorithm description

We consider the following variant of UCB that uses the follower's responses as side information to improve the confidence set.

Algorithm 1 UCB with side information from expert
Input: Regression oracles Reg^(b) and Reg^(r) on reward and response, {α_t}_{t∈[T]}, {β_t}_{t∈[T]}
for t = 1 to T do
    Compute h^(b)_t = Reg^(b)(b̂_1, ..., b̂_{t−1}) and h^(r)_t = Reg^(r)(r_1, ..., r_{t−1})
    Set H^(b)_t := {h : Σ_{i=1}^{t−1} ∥b*_h(a_i) − b*_{h^(b)_t}(a_i)∥² ≤ α_t²}
    Set H^(r)_t := {h : Σ_{i=1}^{t−1} (h(a_i) − h^(r)_t(a_i))² ≤ β_t²}
    Construct confidence set H_t = H^(b)_t ∩ H^(r)_t
    Take action a_t ∈ arg max_{a∈A} sup_{h∈H_t} h(a)
    Observe (noisy) reward r_t and response b̂_t
end for

Remark 4.1. The regression oracles and the sequences {α_t}_{t∈[T]}, {β_t}_{t∈[T]} must be chosen appropriately so that the following condition holds: given an error tolerance δ ∈ (0, 1), we require h⋆ ∈ ∩_{t=1}^{T} H_t with probability at least 1 − δ.

Remark 4.2. A common choice for Reg^(b) and Reg^(r) is the least-squares regression oracle that computes

    h^(b)_t ∈ arg min_{h∈H} Σ_{i=1}^{t−1} ∥b*_h(a_i) − b̂_i∥²    (6)

and

    h^(r)_t ∈ arg min_{h∈H} Σ_{i=1}^{t−1} (h(a_i) − r_i)².    (7)

When the least-squares computation becomes infeasible under complex response-reward structures (this is common for (6)), custom oracles need to be designed.
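A minimal sketch of Algorithm 1 on a toy instance of our own choosing (a finite hypothesis class with h_θ(a) = θ · a and best response b*_θ ≡ θ, in the spirit of Example 4.4 below; the radius constants are ad hoc assumptions, not the paper's calibrated choices):

```python
import numpy as np

rng = np.random.default_rng(1)
d, T = 3, 200
sigma_r, sigma_b = 0.1, 0.1

# Finite hypothesis class over unit vectors; h_theta(a) = theta . a, b*_theta = theta.
thetas = [v / np.linalg.norm(v) for v in rng.normal(size=(20, d))]
theta_star = thetas[7]
actions = [v / np.linalg.norm(v) for v in rng.normal(size=(50, d))]

alpha2 = 4 * sigma_b**2 * (d + np.log(T))  # per-step radii (ad hoc constants)
beta2 = 4 * sigma_r**2 * (d + np.log(T))

hist_a, hist_r, hist_b = [], [], []
regret = 0.0
for t in range(1, T + 1):
    # Confidence set H_t = H_t^(b) ∩ H_t^(r), computed by enumeration
    H_t = [th for th in thetas
           if sum(float(np.sum((th - bh) ** 2)) for bh in hist_b) <= alpha2 * t
           and sum((th @ a - r) ** 2 for a, r in zip(hist_a, hist_r)) <= beta2 * t]
    H_t = H_t or thetas  # never let the set go empty in this sketch
    # Optimistic action: argmax_a sup_{h in H_t} h(a)
    a_t = max(actions, key=lambda a: max(th @ a for th in H_t))
    r_t = theta_star @ a_t + sigma_r * rng.normal()    # noisy reward
    b_t = theta_star + sigma_b * rng.normal(size=d)    # noisy best response
    hist_a.append(a_t); hist_r.append(r_t); hist_b.append(b_t)
    regret += max(theta_star @ a for a in actions) - theta_star @ a_t
print(f"average regret after {T} rounds: {regret / T:.3f}")
```

Because the response observations concentrate around θ⋆, the set H^(b)_t discards far-away hypotheses after only a few rounds, which is the mechanism the examples below exploit.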
A more intricate approach may be to jointly construct the estimate using both {b̂_τ}_{τ∈[t−1]} and {r_τ}_{τ∈[t−1]}. We leave it for future research to study systematic designs of the oracles and the confidence sets.

Remark 4.3. When the responses are unobserved or ignored (e.g., by choosing α_t = ∞), Algorithm 1 reduces to the classic Eluder UCB using the least-squares (reward) oracle with H_t = H^(r)_t Russo and Van Roy [2013].

The choices of {α_t}_{t∈N} and {β_t}_{t∈N} can pose another challenge. A naive attempt to get a generic upper bound on α_t is to use a covering argument as in Russo and Van Roy [2013] using the following measurement between two functions h, h′ ∈ H: d^(b)(h, h′) = sup_a ∥b*_h(a) − b*_{h′}(a)∥.
But note that this does not necessarily define a norm, and further the covering number of H in this sense can be infinite when the best response is discontinuous in the leader's action a. Thus, such an approach is often not useful, and one may have to determine α_t on a per-instance basis.

4.2 Examples

While Theorem 3.1 shows that the involvement of the omniscient follower can lead to a "curse of expertise," a stark deterioration in the sample complexity, there are many scenarios where the leader's observation of the follower's responses can expedite learning significantly. In this section, we will explore a few such examples.

4.2.1 An imitation-based example

Let us consider a setting where the leader achieves efficient learning through imitation.
Heuristically, imitation arises when the optimal action for the leader is equal to the best response for the omniscient follower or a function of it. This may capture, for instance, real-world robotics applications where the actions of the robot and the human expert are interchangeable and the true goal can be easily inferred from the expert's action. A simple scenario is when the robot and the human expert are supposed to carry out the same task perfectly, in which case the robot should simply treat the expert as a role model and imitate. The following is a concrete example.

Example 4.4. Let A = B = Θ = S^{d−1} (or B^d equivalently)². Consider the linearly parameterized function class H_{Θ,φ} with feature function

    φ(a, b) = a + b.    (8)

Here, the optimal response b*_θ ≡ θ is independent of a, and hθ(a) = θ · a + 1.

Construction of confidence sets. The (noisy) observations of the follower's best responses simplify the problem into an imitation learning task. A simple oracle for the best-response observations is to take the A-projected empirical average of responses, i.e., θ^(b)_t = Π_A((1/(t−1)) Σ_{i=1}^{t−1} b̂_i).³ The response-based confidence set reduces to

    Θ^(b)_t = {θ ∈ Θ : ∥θ − θ^(b)_t∥ ≤ α_t / √(t−1)}.

Standard sub-Gaussian concentration results suggest that the (Euclidean) radius of this confidence set shrinks at a rate of t^{−1/2}.

Lemma 4.5. To ensure θ⋆ ∈ ∩_{t∈[T]} Θ_t with probability at least 1 − δ, it suffices to choose α_t = Θ(σ_b √(d + log(T/δ))).

UCB then chooses actions on S^{d−1} increasingly close to the empirical estimate θ^(b)_t.⁴ The regret bound follows from these choices of confidence sets.

Proposition 4.6. In Example 4.4, UCB achieves a regret bound

    R_UCB(T) ≲ σ_b² log T · (d + log T).    (9)

In other words, the average regret decays at a rate of Õ(σ_b² d / T). This has also been analyzed in the setting of imitation learning [Rajaraman et al., 2021], and the results are consistent.
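The t^{−1/2} shrinkage behind Lemma 4.5 is easy to observe in simulation: averaging noisy best responses b̂_i = θ + noise and projecting back onto the sphere recovers θ at the expected rate (the dimension and noise level below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
d, sigma_b = 8, 0.5  # illustrative values
theta = rng.normal(size=d)
theta /= np.linalg.norm(theta)

def proj_sphere(y):
    # Pi_A for A = S^{d-1}: radial projection onto the unit sphere
    return y / np.linalg.norm(y)

errs = {}
for t in [10, 100, 1000, 10000]:
    # theta_t^(b) = Pi_A of the empirical average of t noisy responses
    trials = [np.linalg.norm(
                  proj_sphere(np.mean(theta + sigma_b * rng.normal(size=(t, d)), axis=0)) - theta)
              for _ in range(50)]
    errs[t] = float(np.mean(trials))
    print(f"t = {t:5d}   mean estimation error {errs[t]:.4f}")

# consistent with a t^{-1/2} rate: more samples keep cutting the error
assert errs[10000] < errs[1000] < errs[100] < errs[10]
```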
²While it is customary to consider Θ = B^d, we will observe below that the imitation-based algorithm does not crucially rely on ∥θ⋆∥ and only incurs smaller regret if ∥θ⋆∥ < 1. This is because the algorithm asymptotically relies solely on the response observations, which are invariant under scaling of θ⋆. It is also without loss of generality to restrict all actions to the sphere.
³Define the projection of y ∈ R^d onto a closed set X ⊆ R^d as Π_X(y) := arg min_{x∈X} ∥y − x∥, breaking ties arbitrarily when the minimizer is not unique.
⁴Even simpler, the leader can play the A-projected empirical average of responses. Under our choice of constant α, the analysis will be the same, with the results differing by at most a constant factor.

Remark 4.7. When the follower's responses are unobserved, this is simply a linear bandit, where the minimax regret is Ω(σ_b d√T) ≫ O(σ_b² d log² T). This indicates the value of the b_t observations. When the follower's response is noiseless, one can see that a single sample suffices to find the optimal response, since one always observes b⋆_θ = θ.

Remark 4.8. Note the gap between the Θ(log T) regret when the response observations are used and the Θ(√T) regret when they are ignored or unavailable, showing the value of those response observations. In fact, it is easy to modify this example slightly (e.g., taking φ(a, b) = max{|θ⊤a|, ∆}b for some ∆ ∈ (0, 1)) to create an even larger gap: when the leader uses the response observations, the regret is Õ(d log T) with sample complexity Õ(d log(1/ε)); when the response observations are unavailable, the sample complexity increases to Ω(ε^{−d}).

4.2.2 Expert-guided exploration

In many scenarios, the omniscient follower's actions may not directly reveal the exact state of the world but still provide crucial information. The next example illustrates a simple setting where the follower's response can significantly reduce the sample complexity.

Example 4.9. Let A = B = S^{d−1} and Θ = {(θ_a, θ_b) ∈ S^{d−1} × S^{d−1} : θ_a · θ_b ≥ ζ} for some ζ ∈ (0, 1).
Consider the parameterized family of functions HΘ = {hθ | θ ∈ Θ} where

    hθ(a, b) = ReLU(θ_a · a − ∆) + θ_b · b,

for some ∆ ∈ (0, 1). For simplicity, we will assume that the response observations are noiseless (i.e., σ_b = 0), although the noisy case can be analyzed analogously.

Confidence sets. The best response is b*_θ ≡ θ_b, again independent of the leader's action. Upon observing b_1 = θ_b, the leader should construct the confidence sets Θ^(b)_t = {θ_a ∈ S^{d−1} : θ_a · b_1 ≥ ζ} × {b_1}, while Θ^(r)_t is chosen as in linear UCB. As a result, all subsequent actions the leader takes must fall into

    A_1 := {a ∈ A : a · b_1 ≥ ζ}.    (10)

This refinement of the action set will reduce the sample complexity, and depending on the size of ζ relative to ∆, the reduction can be significant.
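To preview why the refinement (10) can help so much: when ζ is large enough (the strong-reduction regime treated next), every action in A_1 keeps the ReLU term active, so the reward is effectively linear there. A numerical spot check under illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(3)
d, delta = 5, 0.6            # illustrative values
zeta = 1 - (1 - delta) / 4   # boundary of the strong-reduction regime

def sample_cap(center, c):
    # rejection-sample a unit vector v with v . center >= c
    while True:
        v = rng.normal(size=d)
        v /= np.linalg.norm(v)
        if v @ center >= c:
            return v

b1 = rng.normal(size=d)
b1 /= np.linalg.norm(b1)     # the observed response b_1 = theta_b
for _ in range(100):
    theta_a = sample_cap(b1, zeta)  # any parameter consistent with theta_a . b_1 >= zeta
    a = sample_cap(b1, zeta)        # any action in the refined set A_1 of (10)
    # by the triangle inequality, theta_a . a >= 1 - 4(1 - zeta) = delta here
    assert theta_a @ a >= delta - 1e-9
print("ReLU(theta_a . a - delta) stays active on A_1: a linear bandit regime")
```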
Strong reduction. When 1 − ζ ≤ (1 − ∆)/4, the leader learns that θ_a · b_1 ≥ ζ. In particular, any action a ∈ A_1 must satisfy

    θ_a · a = (2 − ∥θ_a − a∥²)/2 ≥ (2 − (∥θ_a − b_1∥ + ∥a − b_1∥)²)/2 ≥ (2 − (2√(2 − 2ζ))²)/2 = 1 − 4(1 − ζ) ≥ ∆,    (11)

and thus h(a) = θ_a · a − ∆ + 1 behaves as a linear function within A_1. By playing UCB within A_1, the leader reduces the problem to a linear bandit instance and thus achieves the following regret bound.

Proposition 4.10. Assume 1 − ζ ≤ (1 − ∆)/4 in Example 4.9. UCB achieves

    R_UCB(T) ≤ Õ(d√T).    (12)

This leads to a sample complexity of Õ(d²/ε²), in contrast to the exponential sample complexity exp(O(d log(1/ε))) if the responses were unobserved. Information from the follower's response guides the leader's exploration to the well-conditioned part of the action space. Given the Ω(d√T) regret lower bound for linear bandits, the upper bound (12) is tight (up to logarithmic terms).

Weak reduction. When ζ is small relative to ∆, the problem does not immediately reduce to a linear bandit, but we have the following improved upper bound.

Proposition 4.11. There exists an algorithm Alg that achieves

    R_Alg(T) ≤ O((C_ζ^d T^{d+1})^{1/(d+2)}),    (13)

where C_ζ := √(1 − ζ²) ∈ (0, 1). This bound improves as ζ decreases.
The sample complexity is therefore $\tilde{O}(C_\zeta^d \epsilon^{-d-2})$, a $C_\zeta^d$ reduction compared with the original complexity without observing the responses in Corollary 3.4. Since the reduced problem is still a ReLU bandit, UCB will not be suitable. Instead, (13) can be achieved through discretization of $\mathcal{A}_1$, as in the upper bound of Theorem 3.3.

5 Beyond UCB

Although the UCB algorithm gives a near-optimal rate in most of the above examples, we also provide two cases where UCB fails to achieve the optimal rate. This necessitates a tailored algorithm design in specific settings.

5.1 Nonlinear (polynomial) family

UCB is known to fail to achieve the optimal rate in the case of the polynomial bandit family Huang et al. [2021], where the reward is a polynomial activation on top of a linear family. We construct an example which utilizes the structure of the polynomial bandit, formally defined below.

Example 5.1 (Polynomial bandit). Consider the convex function $f(x) = x^{2k}$ for some $k \in \mathbb{Z}_+$. Let
$$\mathcal{A} = \mathbb{B}^{d-1}, \quad \mathcal{B} = [-1, 1], \quad \Theta = \mathbb{B}^{d-1} \times \{1\}, \quad (14)$$
and
$$\phi(a, b) = (2kba, \, -f^*(2kb)), \quad (15)$$
where $f^*$ is the convex conjugate of $f$. Consider the nonlinearly parameterized family
$$\mathcal{H}_\Theta := \{h_\theta(a, b) = f(\theta \cdot \phi(a, b)) \mid \theta \in \Theta\}. \quad (16)$$
By properties of the convex conjugate,
$$h_\theta(a) = f(\theta_{-d} \cdot a) = (\theta_{-d} \cdot a)^{2k} \quad (17)$$
with the best response
$$b^*_\theta(a) = \arg\max_{-1 \le b \le 1} \; 2kb\,\theta_{-d} \cdot a - f^*(2kb) = \frac{f'(\theta_{-d} \cdot a)}{2k} = (\theta_{-d} \cdot a)^{2k-1} \in [-1, 1].$$
This observation allows us to apply results on polynomial bandits Huang et al. [2021].

Response-regret structure. Observe the following properties of the best response function in Example 5.1.

1. The expected reward is a function of the best response, independent of the true parameter. Namely,
$$h_\theta(a) = b^*_\theta(a)^{\frac{2k}{2k-1}}. \quad (18)$$
This mapping is Lipschitz:
$$\left| h_\theta(a) - h_\theta(a') \right| \le \frac{2k}{2k-1} \left| b^*_\theta(a) - b^*_\theta(a') \right|, \quad (19)$$
and further
$$\arg\max_{a \in \mathcal{A}} b^*_\theta(a) = \theta \in \arg\max_{a \in \mathcal{A}} h_\theta(a), \quad (20)$$
with both maxima being 1.

2. The response observation, as a degree $2k-1$ polynomial, is more informative than the reward observation, a degree $2k$ polynomial, when the noise levels are the same and $\theta_{-d} \cdot a$ is small.

Based on these two observations, the leader may view the response $b_t$ as a proxy reward and aim to minimize the proxy regret
$$\tilde{R}(T) := \sum_{t=1}^T 1 - b^*_\theta(a_t). \quad (21)$$
This is consistent with minimizing the true regret $R(T)$, which differs from the proxy regret $\tilde{R}(T)$ by at most a constant factor by (19).

Regret bound. Using the response observations exclusively to minimize the proxy regret $\tilde{R}(T) = \sum_{t=1}^T 1 - b^*_\theta(a_t)$, the leader reduces her task to a polynomial bandit problem with a degree $2k-1$ polynomial activation function.
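Observation 2 above, that the degree-$(2k-1)$ response is a more informative observation than the degree-$2k$ reward when $\theta_{-d} \cdot a$ is small, can be checked numerically. The instance below is a hypothetical one-dimensional stand-in (scalar parameter, fixed probe action, chosen noise level), not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
k, theta, a, sigma, n = 2, 0.5, 0.6, 0.01, 4000
s = theta * a                          # signal s = 0.3, small in magnitude

# n noisy observations of the response (degree 2k-1) and reward (degree 2k),
# with the same noise level sigma.
resp = s ** (2 * k - 1) + sigma * rng.standard_normal(n)
rew = s ** (2 * k) + sigma * rng.standard_normal(n)

# Invert each averaged observation back to an estimate of the signal s.
s_from_resp = np.mean(resp) ** (1.0 / (2 * k - 1))
s_from_rew = np.mean(rew) ** (1.0 / (2 * k))
```

Inverting the lower-degree polynomial is better conditioned near zero, so the response-based estimate of the signal is typically tighter than the reward-based one at the same noise level.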
By (19), we may focus on bounding the proxy regret. Corollary 3.16 from Huang et al. [2021] suggests that
$$\tilde{R}(T) \le \tilde{O}(\sqrt{d^{2k-1} T}), \quad (22)$$
or equivalently the sample complexity is $\tilde{O}(d^{2k-1}/\epsilon^2)$ for achieving $\epsilon$ average proxy regret. The following bound on the true regret follows from (19) and (22).

Proposition 5.2. In Example 5.1, there exists an algorithm Alg, using the response observations exclusively, that achieves
$$R_{\mathrm{Alg}}(T) \le O(\sqrt{d^{2k-1} T}). \quad (23)$$
Proposition 5.2 suggests an $\tilde{O}(d^{2k-1}/\epsilon^2)$ sample complexity. For instance, the leader can achieve this regret with the zeroth-order algorithm proposed in Huang et al. [2021, Algorithm 6].

Remark 5.3 (Lower bound). Since the reward observations have a higher signal-to-noise ratio, we should expect the sample complexity of Example 5.1 to be of the same order as the sample complexity of achieving $\epsilon$ average regret in a degree $2k-1$ polynomial bandit. Huang et al. [2021] show that this is lower bounded by $\Omega(d^{2k-1}/\epsilon^2)$. Thus, (23) is essentially optimal.

Remark 5.4 (Benefit of observing responses). If the leader does not observe the responses, the problem is equivalent to a degree $2k$ polynomial bandit. The optimal regret without observing the expert's actions leads to an $\tilde{O}(d^{2k}/\epsilon^2)$ sample complexity. Thus, the response observations shave off a factor of $d$, which can be significant when the dimensionality is high.

Remark 5.5 (Suboptimality of UCB). Using the traditional Eluder UCB algorithm leads to a suboptimal sample complexity of $\tilde{O}(d^{2k}/\epsilon^2)$ when the leader solely uses the response observations. Still, this is a factor-$d$ improvement compared to what she can achieve with UCB without the response observations.

5.2 Failure of the optimism principle

The next example is adapted from the ReLU bandit in Example 3.2 and shows that optimism-based methods can have dramatic suboptimality in certain problems.

Example 5.6. Let $\mathcal{A} = \mathbb{B}^{d-1}$, $\mathcal{B} = \mathbb{B}^{d-1} \times [0, 1]$, and
$$\Theta = \{(\theta_{-d}, \theta_d) \mid \theta_{-d} \in \mathbb{B}^d, \; \theta_d = 1 - \Delta\} \quad (24)$$
for some $\Delta \in (0, 1)$. Consider the linear family $\mathcal{H}_{\Theta,\phi}$ with
$$\phi(a, b) = \|a\| \left( (1 - b_d) a, \; b_d - \|b_{-d}\| \right) + \frac{1 - \|a\|}{2} (b_{-d}, 0). \quad (25)$$
For any $\theta \in \Theta$ with $\theta_{-d} \in \mathbb{S}^{d-1}$, the optimal action for the leader is $\theta_{-d}$, with the follower best responding $(0, 0)$ and achieving unit expected reward. When $\|a\| = 1$, this function behaves exactly as in Example 3.2, where $b^*_\theta(a) = (0, 1)$ whenever $\theta_{-d} \cdot a < 1 - \Delta$; when $a = 0$, the best response is $b^*_\theta(0) = (\theta_{-d}, b_d)$. Thus, if the response observations are noiseless, the leader learns the true parameter, and hence the optimal action, in one round by playing $a_1 = 0$.

However, any optimism-based method such as UCB will not achieve such efficient learning, even when the responses are noiselessly observed. It is straightforward to verify that, for any action $a$ with $\|a\| < 1$, the optimistic reward satisfies
$$\sup_{\theta \in \Theta} h_\theta(a) < 1. \quad (26)$$
Thus, as long as the confidence set contains some $\theta$ with $\theta_{-d} \in \mathbb{S}^{d-1}$, which holds under our initial condition, optimism causes the leader to take only actions $a \in \mathbb{S}^{d-1}$, reducing the problem to the worst-case Example 3.2.

6 Conclusions

We have studied a model of online learning in decentralized cooperative Stackelberg games.
We showed that, even with an omniscient follower who always best responds (myopically), the worst-case sample complexity for a linear family can be as large as $\exp(\Theta(d \log \frac{1}{\epsilon}))$. This "curse of expertise" highlights the challenge caused by miscoordinated exploration. It also raises the question of how a non-myopic expert follower should respond to the leader's actions (without knowing the leader's exact algorithm) to expedite their learning and maximize their long-term reward.

We considered the UCB-type algorithm that incorporates response observations. A few examples of various hardness were considered, ranging from efficient learning through imitation and guided exploration to the worst-case linear family example with an exponential sample complexity. Beyond the examples considered in the paper, there are numerous scenarios where the roles of the leader and the follower are more complex to reason about. This poses unique challenges for both the learning process of the leader and the subsequent analysis of regret, indicating a fertile ground for future research. Specifically, our current template of Algorithm 1 requires designing the confidence sets based on the specific response-reward structure of each problem. It remains open to find a general design (or prove the lack thereof) that systematically synthesizes the response and reward observations. A general framework of analysis that can provide a unified yet sharp upper bound on the examples is also valuable.

References

Yasin Abbasi-Yadkori, Dávid Pál, and Csaba Szepesvári. Improved algorithms for linear stochastic bandits. Advances in Neural Information Processing Systems, 24, 2011.

Peter Auer, Nicolo Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2):235–256, 2002.

Yu Bai, Chi Jin, Huan Wang, and Caiming Xiong. Sample-efficient learning of Stackelberg equilibria in general-sum games. Advances in Neural Information Processing Systems, 34:25799–25811, 2021.

Vincent Conitzer and Tuomas Sandholm. Computing the optimal strategy to commit to. In Proceedings of the 7th ACM Conference on Electronic Commerce, pages 82–90, 2006.

Jinshuo Dong, Aaron Roth, Zachary Schutzman, Bo Waggoner, and Zhiwei Steven Wu. Strategic classification from revealed preferences. In Proceedings of the 2018 ACM Conference on Economics and Computation, pages 55–70, 2018.

Kefan Dong, Jiaqi Yang, and Tengyu Ma. Provable model-based nonlinear bandit and reinforcement learning: Shelve optimism, embrace virtual curvature. Advances in Neural Information Processing Systems, 34:26168–26182, 2021.

Jacques Ferber and Gerhard Weiss. Multi-agent systems: an introduction to distributed artificial intelligence, volume 1. Addison-Wesley, 1999.

Jerzy Filar and Koos Vrieze. Competitive Markov decision processes. Springer Science & Business Media, 2012.

Dylan J Foster, Sham M Kakade, Jian Qian, and Alexander Rakhlin. The statistical complexity of interactive decision making. arXiv preprint arXiv:2112.13487, 2021.

Matthias Gerstgrasser and David C Parkes. Oracles & followers: Stackelberg equilibria in deep multi-agent reinforcement learning. arXiv preprint arXiv:2210.11942, 2022.

Michael A Goodrich, Alan C Schultz, et al. Human–robot interaction: a survey. Foundations and Trends in Human–Computer Interaction, 1(3):203–275, 2008.

Moritz Hardt, Nimrod Megiddo, Christos Papadimitriou, and Mary Wootters. Strategic classification. In Proceedings of the 2016 ACM Conference on Innovations in Theoretical Computer Science, pages 111–122, 2016.

Chien-Ju Ho, Aleksandrs Slivkins, and Jennifer Wortman Vaughan. Adaptive contract design for crowdsourcing markets: Bandit algorithms for repeated principal-agent problems. In Proceedings of the Fifteenth ACM Conference on Economics and Computation, pages 359–376, 2014.

Baihe Huang, Kaixuan Huang, Sham Kakade, Jason D Lee, Qi Lei, Runzhe Wang, and Jiaqi Yang. Optimal gradient-based algorithms for non-concave bandit optimization. Advances in Neural Information Processing Systems, 34:29101–29115, 2021.

Hsu Kao, Chen-Yu Wei, and Vijay Subramanian. Decentralized cooperative reinforcement learning with hierarchical information structure. In International Conference on Algorithmic Learning Theory, pages 573–605. PMLR, 2022.

Robert Kleinberg and Tom Leighton. The value of knowing a demand curve: Bounds on regret for online posted-price auctions. In 44th Annual IEEE Symposium on Foundations of Computer Science, pages 594–605. IEEE, 2003.

Jens Kober, J Andrew Bagnell, and Jan Peters. Reinforcement learning in robotics: A survey. The International Journal of Robotics Research, 32(11):1238–1274, 2013.

John Langford and Tong Zhang. The epoch-greedy algorithm for contextual multi-armed bandits. Advances in Neural Information Processing Systems, 20(1):96–1, 2007.

Niklas Lauffer, Mahsa Ghasemi, Abolfazl Hashemi, Yagiz Savas, and Ufuk Topcu. No-regret learning in dynamic Stackelberg games. arXiv preprint arXiv:2202.04786, 2022.

Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.

Yang Liu and Yiling Chen. A bandit framework for strategic regression. Advances in Neural Information Processing Systems, 29, 2016.

Janusz Marecki, Gerry Tesauro, and Richard Segal. Playing repeated Stackelberg games with unknown opponents. In Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems, pages 821–828, 2012.

Nived Rajaraman, Yanjun Han, Lin Yang, Jingbo Liu, Jiantao Jiao, and Kannan Ramchandran. On the value of interaction and function approximation in imitation learning. Advances in Neural Information Processing Systems, 34:1325–1336, 2021.

Daniel Russo and Benjamin Van Roy. Eluder dimension and the sample complexity of optimistic exploration. Advances in Neural Information Processing Systems, 26, 2013.

Ahmad El Sallab, Mohammed Abdou, Etienne Perot, and Senthil Yogamani. Deep reinforcement learning framework for autonomous driving. Electronic Imaging, 2017(19):70–76, 2017.

Shai Shalev-Shwartz, Shaked Shammah, and Amnon Shashua. Safe, multi-agent, reinforcement learning for autonomous driving. arXiv preprint arXiv:1610.03295, 2016.

Milind Tambe. Security and Game Theory: Algorithms, Deployed Systems, Lessons Learned. Cambridge University Press, 2011.

Heinrich von Stackelberg. Market Structure and Equilibrium. Springer Science & Business Media, 2010.

Martin J Wainwright. High-Dimensional Statistics: A Non-Asymptotic Viewpoint, volume 48. Cambridge University Press, 2019.

Chih-Chun Wang, Sanjeev R Kulkarni, and H Vincent Poor. Bandit problems with side observations. IEEE Transactions on Automatic Control, 50(3):338–355, 2005.

Michael Wooldridge. An introduction to multiagent systems. John Wiley & Sons, 2009.

Annie Xie, Dylan Losey, Ryan Tolsma, Chelsea Finn, and Dorsa Sadigh. Learning latent representations to influence multi-agent interaction. In Conference on Robot Learning, pages 575–588. PMLR, 2021.

Boling Yang, Liyuan Zheng, Lillian J Ratliff, Byron Boots, and Joshua R Smith.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content=' Stackelberg maddpg: Learning emergent behaviors via information asymmetry in competitive games.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content=' 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content=' Yaolong Yu, Haifeng Xu, and Haipeng Chen.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content=' Learning correlated Stackelberg equilibrium in general-sum multi-leader-single-follower games.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content=' arXiv preprint arXiv:2210.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content='12470, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content=' 14 Kaiqing Zhang, Zhuoran Yang, and Tamer Ba¸sar.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content=' Multi-agent reinforcement learning: A selec- tive overview of theories and algorithms.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content=' Handbook of Reinforcement Learning and Control, pages 321–384, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content=' Han Zhong, Zhuoran Yang, Zhaoran Wang, and Michael I Jordan.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content=' Can reinforcement learning find Stackelberg-Nash equilibria in general-sum Markov games with myopic followers?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content=' arXiv preprint arXiv:2112.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content='13521, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content=' Banghua Zhu, Stephen Bates, Zhuoran Yang, Yixin Wang, Jiantao Jiao, and Michael I Jordan.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content=' The sample complexity of online contract design.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content=' arXiv preprint arXiv:2211.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content='05732, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content=' 15 A Proofs in Section 3 A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content='1 Proof of Theorem 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content='1 Proof.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content=' Consider Example 3.' 
The expected reward is given by

    h_θ(a, b) := θ · φ(a, b) = (1 − b) θ_{−d} · a + b(1 − ∆).   (27)

Optimizing over b ∈ [0, 1] yields

    h_θ(a) = max{1 − ∆, θ_{−d} · a}.   (28)

Note that for any a ∈ A such that θ_{−d} · a < 1 − ∆, the best response of the follower is b = 1, yielding an expected reward of 1 − ∆; for any a ∈ A such that θ_{−d} · a ≥ 1 − ∆, the best response of the follower is b = 0, yielding an expected reward of θ_{−d} · a. The optimal joint response a = θ_{−d} and b = 0 achieves the optimal expected reward of ∥θ_{−d}∥ = 1 > 1 − ∆. From the leader's perspective, this now reduces to the problem of a ReLU bandit considered in Dong et al. [2021], since the response provides no information until the average regret falls below ∆. Thus we have

    inf_{π̂} sup_{θ∈Θ} R(T) ≥ Ω(T^{1 − 1/(d−2)}).
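As an illustrative aside (not part of the original proof), the endpoint structure of the follower's best response in (27)-(28) is easy to check numerically: the reward is linear in b, so the optimum over b ∈ [0, 1] sits at an endpoint. The values of ∆, θ_{−d}, and the candidate actions below are arbitrary stand-ins chosen for the demo.

```python
import numpy as np

# Numeric sanity check of (27)-(28): h_theta(a, b) = (1 - b)*(theta @ a) + b*(1 - Delta)
# is linear in b, so the follower's optimum over b in [0, 1] is attained at an
# endpoint, and the induced reward is h_theta(a) = max(1 - Delta, theta @ a).
Delta = 0.3
theta = np.array([0.6, 0.8])               # stand-in for the unit vector theta_{-d}

def h(a, b):
    return (1 - b) * float(theta @ a) + b * (1 - Delta)

grid = np.linspace(0.0, 1.0, 1001)
for a in (np.array([1.0, 0.0]), theta):    # a "poor" action, and the optimal a = theta_{-d}
    best_over_b = max(h(a, b) for b in grid)
    assert np.isclose(best_over_b, max(1 - Delta, float(theta @ a)))
```

For the misaligned action the maximum is 1 − ∆ (attained at b = 1), while for a = θ_{−d} it is θ_{−d} · θ_{−d} = 1 (attained at b = 0), matching the case split in the proof.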
A.2 Proof of Theorem 3.3

Proof. Let H(ε) be a minimal ε-covering of H under the metric ∥·∥_∞. Let

    A(ε) = { argmax_{a∈A} max_{b∈B} h(a, b) | h ∈ H(ε) },

where we break ties arbitrarily when the optimal action is non-unique. Note that we have |A(ε)| ≤ |H(ε)| ≤ N(ε). Let h⋆ be the true reward function. By the definition of a covering, there exists some h_ε ∈ H(ε) such that ∥h⋆ − h_ε∥_∞ ≤ ε. Thus we have

    R(T) = Σ_{t=1}^{T} E[h⋆(a∗) − h⋆(a_t)] ≤ εT + Σ_{t=1}^{T} E[h⋆_ε(a∗) − h⋆_ε(a_t)].

We know that the optimal action for h_ε must be inside the set A(ε). Thus any worst-case optimal no-regret algorithm on the set A(ε) gives a regret of √(|A(ε)| T) ≤ √(N(ε) T). This gives that

    R(T) ≤ εT + √(N(ε) T).

Taking the infimum over ε finishes the proof.

B Proofs in Section 4

B.1 Proof of Lemma 4.5

Proof. Recall the notation from Example 4.4: let θ_t^{(b)} = Π_A(θ̂_t) for t ≥ 2, with θ̂_t := (1/(t−1)) Σ_{i=1}^{t−1} b̂_i. The first round incurs at most a constant regret and can be ignored. It suffices to show that, with probability at least 1 − δ,

    ∥θ − θ_t^{(b)}∥ ≤ α_t / √t   (29)

for α_t = Θ( σ_b √(d + log(T/δ)) ). First, we bound the distance between θ̂_t and θ.
By our assumption,

    ∥θ̂_t − θ∥ = ∥ (1/(t−1)) Σ_{i=1}^{t−1} w_i ∥,

where w_1, …, w_{t−1} are i.i.d. zero-mean σ_b-sub-Gaussian. We proceed using a covering argument. Construct U ⊆ S^{d−1} such that

    inf_{v∈S^{d−1}} sup_{u∈U} u · v ≥ 1/2.   (30)

Note that ∥u − v∥ = √(2 − 2 u · v) for u, v ∈ S^{d−1}. Hence, equivalently, we may choose U as a minimal 1-covering of S^{d−1} in the Euclidean metric. Then

    log |U| ≤ log N^int(S^{d−1}, 1, ∥·∥) ≤ log M(B^d, 1, ∥·∥) = Θ(d),   (31)

where N^int and M denote the internal covering number and the packing number of the space under a given metric. The choice of U ensures that

    ∥w∥ ≤ 2 sup_{u∈U} u · w   (32)

for all w ∈ R^d, and ignoring the constant factor, we may focus on upper bounding sup_{u∈U} Σ_{i=1}^{t−1} u · w_i. For each choice of u ∈ U, let Z_{u,i} = u · w_i, so that Z_{u,1}, …, Z_{u,t−1} are i.i.d. zero-mean σ_b-sub-Gaussian by the definition of sub-Gaussian random vectors. By Hoeffding's inequality for sub-Gaussian random variables, we have

    P( Σ_{i=1}^{t} Z_{u,i} > x ) ≤ exp( −x² / (2tσ_b²) )   (33)

for all x > 0. Applying a union bound over U and using (32) gives

    P( ∥Σ_{i=1}^{t} w_i∥ ≥ 2x ) ≤ P( sup_{u∈U} Σ_{i=1}^{t} Z_{u,i} ≥ x ) ≤ |U| exp( −x² / (2tσ_b²) ).   (34)

Choosing x = σ_b √(2t log(|U| T)) ≲ σ_b √( t(d + log(T/δ)) ) ensures that, by another union bound over t ∈ [T],

    ∥θ̂_t − θ∥ ≲ σ_b √( t^{−1} (d + log(T/δ)) )   (35)

with probability at least 1 − δ. By the triangle inequality and the definition of projection,

    ∥θ_t^{(b)} − θ∥ ≤ ∥θ_t^{(b)} − θ̂_t∥ + ∥θ̂_t − θ∥ ≤ 2∥θ̂_t − θ∥ ≲ σ_b √( t^{−1} (d + log(T/δ)) )   (36)

with the same probability. This gives (29) and completes the proof.
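The 1/√t concentration rate in (35)-(36) is easy to reproduce in simulation. The sketch below is illustrative only: it uses Gaussian noise as the simplest sub-Gaussian example, and the choices of d, σ_b, the number of trials, and the sample sizes are arbitrary.

```python
import numpy as np

# Monte Carlo illustration of (35): for i.i.d. zero-mean sigma_b-sub-Gaussian
# noise vectors w_i in R^d, the norm of the running mean deviates from zero by
# roughly sigma_b * sqrt(d / t), so quadrupling t should about halve the error.
rng = np.random.default_rng(0)
d, sigma_b, trials = 8, 0.5, 500

def mean_error(t):
    # Average, over independent trials, of || (1/t) * sum_i w_i ||.
    w = rng.normal(0.0, sigma_b, size=(trials, t, d))
    return float(np.mean(np.linalg.norm(w.mean(axis=1), axis=1)))

errs = {t: mean_error(t) for t in (100, 400, 1600)}
assert errs[400] < errs[100] and errs[1600] < errs[400]
```

With these parameters the measured errors track the σ_b √(d/t) prediction closely, which is the rate that the confidence-set radii in B.2 below inherit.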
B.2 Proof of Proposition 4.6

Proof. We will condition upon the validity of the confidence sets, which happens with probability at least 1 − δ per our choice of {α_t}_{t∈[T]}. UCB always chooses a_t in the confidence set Θ_t, with radius of order O( σ_b √( t^{−1}(d + log(T/δ)) ) ). When θ⋆ ∈ Θ_t, we have ∥a_t − θ⋆∥ ≲ σ_b √( t^{−1}(d + log(T/δ)) ). Since both a_t and θ⋆ are unit vectors, we have

    R_UCB(T) ≤ 2δT + Σ_{t=1}^{T} (1 − θ⋆ · a_t)
             ≤ 2δT + 2 + (1/2) Σ_{t=2}^{T} ∥θ⋆ − a_t∥²
             ≲ 2δT + Σ_{t=2}^{T} (σ_b²/t) (d + log(T/δ))
             = O( δT + σ_b² log T · (d + log(T/δ)) ),

where the term 2δT bounds the contribution of the event that the confidence sets fail to be all valid. Choosing δ = 1/T gives our desired bound.

B.3 Proof of Proposition 4.10

Proof. After the first round, the leader's task reduces to a linear bandit with action space A_1: only actions within A_1 will be played, and the reward is linear in this region. As is well known for linear bandits
(e.g., Russo and Van Roy [2013]), with probability 1 − δ, the regret in this linear stage (i.e., excluding the first round) is upper bounded by

    2δT + O( √( d log T · (d log T + log δ^{−1}) · T ) ).

The first round adds at most a constant to this and can be ignored. By choosing δ = T^{−1}, we have

    R_UCB(T) ≤ Õ(d√T).   (37)

B.4 Proof of Proposition 4.11

Proof. Let Θ_1 = {θ_a ∈ S^{d−1} | θ_a · b_1 ≥ ζ} × {b_1}, and denote the true parameter by θ⋆ = (θ⋆_a, θ⋆_b). By our assumption on the problem structure, we have θ⋆_a ∈ Θ^{(b)}. As in the proof of Theorem 3.3, let Θ(ε) be a minimal ε-covering of Θ_1 in the Euclidean metric, with ε > 0 to be specified later. In particular, there is some θ̃_a ∈ Θ_1 with ∥θ̃_a − θ⋆_a∥ ≤ ε. Let A(ε) = { argmax_{a∈A} ReLU(θ_a · a − ∆) | θ_a ∈ Θ(ε) }, where we break ties arbitrarily when the optimal action is non-unique. Note that |A(ε)| ≤ |Θ(ε)| = N(Θ_1, ε, ∥·∥). Now, let the leader play UCB on the discrete action set A(ε) after the first round. The regret satisfies

    R(T) ≤ 1 + Σ_{t=2}^{T} E[ h⋆(a∗) − h⋆(a_t) ]
         ≤ 1 + T · E[ h⋆(a∗) − h⋆(ã∗) ] + Σ_{t=1}^{T} E[ h⋆(ã∗) − h⋆(a_t) ],   (38)

where a∗ = θ⋆_a and ã∗ ∈ argmax_{a∈A(ε)} h⋆(a). Since h⋆(ã∗) ≥ h⋆(θ̃_a) ≥ h⋆(a∗) − ε by our choice of θ̃_a and A(ε), the second term in (38) is at most εT. The third term, the regret of UCB on A(ε), is bounded by O( √( N(Θ_1, ε, ∥·∥) · T ) ) in expectation.

It remains to bound N(Θ_1, ε, ∥·∥). Note that for any θ_a, θ′_a ∈ Θ_1, we have

    θ_a · θ′_a = (θ_a · b_1)(θ′_a · b_1) + (θ_a − (θ_a · b_1) b_1) · (θ′_a − (θ′_a · b_1) b_1)
               ≥ ζ² − ∥θ_a − (θ_a · b_1) b_1∥ ∥θ′_a − (θ′_a · b_1) b_1∥
               ≥ ζ² − (1 − ζ²) = 2ζ² − 1.

Equivalently, ∥θ_a − θ′_a∥ = √(2 − 2 θ_a · θ′_a) ≤ 2√(1 − ζ²) = 2C_ζ. Thus, the covering number of Θ_1 is upper bounded by (K C_ζ / ε)^d for some absolute constant K, which yields a regret bound of 1 + εT + O( √( K^d C_ζ^d T / ε^d ) ). Choosing ε ≍ (K C_ζ)^{d/(d+2)} T^{−1/(d+2)} reduces this upper bound to O( C_ζ^{d/(d+2)} T^{(d+1)/(d+2)} ), as desired.
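The exponent in the final bound of B.4 comes from balancing εT against √((KC_ζ/ε)^d · T). As a hedged numerical aside (the constants K and C_ζ are folded into one arbitrary value C for the demo), the T^{(d+1)/(d+2)} scaling can be verified by minimizing the bound over ε on a grid:

```python
import numpy as np

# Minimizing eps*T + sqrt((C/eps)^d * T) over eps should yield a value growing
# like T^{(d+1)/(d+2)}; we estimate the empirical growth exponent from two T's.
d, C = 3, 2.0

def best_bound(T):
    eps = np.logspace(-6, 0, 20001)    # grid wide enough to contain the optimizer
    return float(np.min(eps * T + np.sqrt((C / eps) ** d * T)))

T1, T2 = 1e6, 1e8
rate = np.log(best_bound(T2) / best_bound(T1)) / np.log(T2 / T1)
assert abs(rate - (d + 1) / (d + 2)) < 0.02    # empirical exponent ~ 0.8 for d = 3
```

The same discretize-then-balance pattern also underlies the εT + √(N(ε)T) bound in the proof of Theorem 3.3.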
C Proofs in Section 5

C.1 Proof of Proposition 5.2

Proof. Let the leader run the phased elimination algorithm of Huang et al. [2021, Algorithm 6] using the response b⋆_θ(a_t) as the proxy reward to maximize. This proxy reward, in expectation, is a homogeneous polynomial of degree 2k − 1. By Corollary 3.16 in Huang et al. [2021], the algorithm achieves

    R̃(T) ≤ Õ( √( d^{2k−1} T ) ),   (39)

where R̃(T) = Σ_{t=1}^{T} (1 − b⋆_θ(a_t)) is the proxy regret measured based on the proxy reward (i.e., the absolute response). Note that the reward is maximized exactly when the proxy reward is maximized. Thus, the Lipschitz property (19) suggests that

    R(T) ≤ (2k/(2k−1)) · R̃(T) ≤ Õ( √( d^{2k−1} T ) ).
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} +page_content=' (40) 19' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/idFJT4oBgHgl3EQfWizT/content/2301.11518v1.pdf'} diff --git a/itFKT4oBgHgl3EQfvy5I/content/2301.11896v1.pdf b/itFKT4oBgHgl3EQfvy5I/content/2301.11896v1.pdf new file mode 100644 index 0000000000000000000000000000000000000000..71dbd6b7944926f2ba83f0193a4ac94cd26eaa46 --- /dev/null +++ b/itFKT4oBgHgl3EQfvy5I/content/2301.11896v1.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:acd2906e7fcc756e85ba6d9dad6c924612afb42ba51a7a23080ad73694ca6128 +size 886881 diff --git a/itFKT4oBgHgl3EQfvy5I/vector_store/index.faiss b/itFKT4oBgHgl3EQfvy5I/vector_store/index.faiss new file mode 100644 index 0000000000000000000000000000000000000000..3aca23907af9ce27f23c4d5d544ba7d832a007ea --- /dev/null +++ b/itFKT4oBgHgl3EQfvy5I/vector_store/index.faiss @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ec9d71f9b087c06e75a420ebe3db000ef4c92a54102a2360a0375bc5ac8a2ee6 +size 3604525 diff --git a/j9AyT4oBgHgl3EQfyPkN/content/2301.00679v1.pdf b/j9AyT4oBgHgl3EQfyPkN/content/2301.00679v1.pdf new file mode 100644 index 0000000000000000000000000000000000000000..0d96e778b30f8308a7e655a81bd7cdde90f74d3a --- /dev/null +++ b/j9AyT4oBgHgl3EQfyPkN/content/2301.00679v1.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:803b66bb29a9d1b4d273bd6d798ecaeb2f35ea06c58d73a89efe9e34e1e7b136 +size 154990 diff --git a/j9AyT4oBgHgl3EQfyPkN/vector_store/index.faiss b/j9AyT4oBgHgl3EQfyPkN/vector_store/index.faiss new file mode 100644 index 0000000000000000000000000000000000000000..be99b86f8b2ea415712adbcc5b0734ef62800758 --- /dev/null +++ b/j9AyT4oBgHgl3EQfyPkN/vector_store/index.faiss @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:43c43e6cceab14c15b454dbaf53512c43d4d54698091c0a9a76d0e5e80a8c7dd +size 1048621 diff 
--git a/j9AyT4oBgHgl3EQfyPkN/vector_store/index.pkl b/j9AyT4oBgHgl3EQfyPkN/vector_store/index.pkl new file mode 100644 index 0000000000000000000000000000000000000000..07121638590d7acc6e67a6099b6ed8871419cc68 --- /dev/null +++ b/j9AyT4oBgHgl3EQfyPkN/vector_store/index.pkl @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:34ccf52bfec96c6e15fd06eb3755da4ef473928664039d58a2ce16fb399d6163 +size 40817 diff --git a/j9FQT4oBgHgl3EQfmDaC/content/2301.13364v1.pdf b/j9FQT4oBgHgl3EQfmDaC/content/2301.13364v1.pdf new file mode 100644 index 0000000000000000000000000000000000000000..f444c0a8eb571b41abe7e69393cc8b14a07d8ecf --- /dev/null +++ b/j9FQT4oBgHgl3EQfmDaC/content/2301.13364v1.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c1e1315be3a3ca4fad6f8383965c2767c975a3ecca7b861b5af2860a0e657c5d +size 930892 diff --git a/j9FQT4oBgHgl3EQfmDaC/vector_store/index.faiss b/j9FQT4oBgHgl3EQfmDaC/vector_store/index.faiss new file mode 100644 index 0000000000000000000000000000000000000000..82052b7922274798ea3c36e901fea0d0d9d3bda1 --- /dev/null +++ b/j9FQT4oBgHgl3EQfmDaC/vector_store/index.faiss @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ab6d14902e4da23f1ccaea09664f61818444a93ac1db02e4485b7c9b6a650b22 +size 3735597 diff --git a/j9FQT4oBgHgl3EQfmDaC/vector_store/index.pkl b/j9FQT4oBgHgl3EQfmDaC/vector_store/index.pkl new file mode 100644 index 0000000000000000000000000000000000000000..fd9ea55556ab82b3cf15cca809133345acef6fbf --- /dev/null +++ b/j9FQT4oBgHgl3EQfmDaC/vector_store/index.pkl @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5c5a1e77566f9e7d6fff2e0a06b58e2dc996c35e8cb6e706c308336ed90d375a +size 149226 diff --git a/kdFQT4oBgHgl3EQfmjaf/content/2301.13366v1.pdf b/kdFQT4oBgHgl3EQfmjaf/content/2301.13366v1.pdf new file mode 100644 index 0000000000000000000000000000000000000000..225eb6920f88ea7e21a280785ebe3c9866aa1260 --- /dev/null +++ b/kdFQT4oBgHgl3EQfmjaf/content/2301.13366v1.pdf 
Turing pattern or system heterogeneity? A numerical continuation approach to assessing the role of Turing instabilities in heterogeneous reaction-diffusion systems

Jacob C. Vandenberg∗
Mark B. Flegg†
January 23, 2023

Abstract

Turing patterns in reaction-diffusion (RD) systems have classically been studied only in RD systems which do not explicitly depend on independent variables such as space. In practice, many systems for which Turing patterning is important are not homogeneous with ideal boundary conditions. In heterogeneous systems with stable steady states, the steady states are also necessarily heterogeneous, which is problematic for applying the classical analysis.
Whilst there has been some work done to extend Turing analysis to some heterogeneous systems, for many systems it is still difficult to determine if a stable patterned state is driven purely by system heterogeneity or if a Turing instability is playing a role. In this work, we try to define a framework which uses numerical continuation to map heterogeneous RD systems onto a sensible nearby homogeneous system. This framework may be used for discussing the role of Turing instabilities in establishing patterns in heterogeneous RD systems. We study the Schnakenberg and Gierer-Meinhardt models with spatially heterogeneous production as test problems. It is shown that for sufficiently large system heterogeneity (large-amplitude spatial variations in morphogen production) it is possible that Turing-patterned and base states become coincident and therefore impossible to distinguish. Other exotic behaviour is also shown to be possible. We also study a novel scenario in which morphogen is produced locally at levels that could support Turing patterning but on intervals/patches which are on the scale of classical critical domain lengths. Without classical domain boundaries, Turing patterns are allowed to bleed through; an effect noted by other authors. In this case, this phenomenon effectively changes the critical domain length. Indeed, we even note that this phenomenon may also effectively couple local patches together and drive instability in this way.

∗School of Mathematical Sciences, Monash University, Clayton, Victoria 3800, Australia.
†School of Mathematical Sciences, Monash University, Clayton, Victoria 3800, Australia.

arXiv:2301.08373v1 [math.AP] 20 Jan 2023

1 Introduction

The reaction-diffusion (RD) equation is a nonlinear partial differential equation which exhibits extraordinarily diverse behaviour observed particularly in the life sciences [13, 12, 10].
It models the concentration of different species in time as they interact whilst diffusing in space relative to each other. The species of the system could refer to a chemical species, biological species or ecological species, amongst other possibilities [8].

Under certain conditions, solutions to the RD equation can have an instability which is "driven by diffusion". This is called a Turing instability, which is usually defined as follows. Turing instabilities occur when an RD system has a spatially-uniform steady state which is unstable in the presence of diffusion, but stable in the absence of diffusion. Alan Turing's seminal paper analyses Turing instabilities as a mechanism for explaining the emergence of spatial heterogeneity in diffuse biological chemical systems [14]. The reason Turing instabilities can explain this onset of heterogeneity is because they typically produce Turing patterns. Turing patterns are stable solutions to the RD equation which have large spatial oscillations, and are stationary in time. Usually diffusion has the effect of "flattening" the solution. In this case, however, diffusion is what causes the system to deviate away from uniformity.

Often, RD models are spatially homogeneous in the sense that the RD PDE does not explicitly contain the spatial variable x (or t). Typically, RD models which exhibit Turing patterning are studied as homogeneous systems to simplify the analysis of the PDE (finding steady states, performing linear stability analysis, demonstrating the potential for patterning, etc.). At the same time, most real-world applications almost certainly contain spatial variation in model parameters. Consider, for example, the patterning and development of digits, kidneys and lungs, where homogeneous models are analysed for the presence of Turing instabilities despite there being obvious spatial heterogeneity in morphogen production rates [7, 11].
Turing patterning in the presence of spatially heterogeneous RD PDEs is not well understood and has surprisingly received very little attention in the literature. Perhaps one of the reasons for this is that Turing analysis of spatially heterogeneous RD PDEs is challenging, as it is not even apparent how Turing instabilities should be defined. To begin, the unstable uniform steady state required for defining the Turing instability does not, by definition, exist for spatially heterogeneous RD PDEs.

The analysis by Krause et al. presents a general stability theory for a heterogeneous RD PDE. This paper is, however, limited to cases where heterogeneity varies slowly almost everywhere relative to the domain size [6]. In the paper, Krause et al. define a 'base state' solution which replaces the notion of the uniform steady state which has been 'flattened' by diffusion. The base state, which must be a stationary solution to the PDE, has certain properties. Importantly, the base state does not have spatial oscillations with periods much smaller than the inhomogeneity in the PDE (it is nice and 'diffused'). Aside from this definition being vague, it is not clear that it should be the case if the PDE contains
Using a +linear reaction term is common [9, 5], but nonlinear reaction terms can also be +considered [1]. Truncated Galerkin expansions of the solution have been used +to study the stability of heterogeneous problems [4, 5]. These too use specific +examples to find base states analytically. +No insight is given as to why the +solutions that were found should be analogous to the uniform base state in the +homogeneous case. +In this manuscript, our aim is to investigate a method which may be used +to find base states for heterogeneous reaction-diffusion PDEs. The stability of +these base states may be used to define Turing patterns. We propose a method +for describing base states and apply this method to the canonical Schnaken- +berg (substrate depletion) system as well as the Gierer-Meinhardt (activator- +inhibitor) system. In both of these systems we allow the production of species +to vary in space. We focus on two main curiosities. The first deals with critical +phenomena which place limitations on when a base state may be defined and +the second deals with the onset of critical domain lengths for Turing instabilities +in the presence of heterogeneous production. +2 +Methods +The classical spatially-homogeneous dimensionless reaction-diffusion system is +∂u +∂t = D∇2u + γF(u), on Ω, +(1) +∇u · n = 0, on ∂Ω. +(2) +Here u is a vector containing the concentration of model species/chemicals, D +is a diagonal matrix of diffusion constants (with D11 = 1 providing a charac- +teristic timescale for nondimensionalisation), and F is a nonlinear vector-valued +function describing the possible sources and sinks of, and reactions between, the +species. The domain Ω (which has an outward normal vector n) has been scaled +through non-dimensionalisation so that the spatial scale of the system relative +to that of diffusion is described by the magnitude of γ. +A Turing analysis of this system begins by finding the uniform steady state +solution u⋆ such that F(u⋆) = 0. 
Indeed, this uniform state is a solution to the model because the derivatives of u⋆ (a constant) are zero. Subsequently, a Turing pattern is formed when the solution u⋆, which is stable if D = 0, is unstable. The uniform solution to the model u⋆ will be called the base state and in heterogeneous problems loses its uniformity. This is the natural, diffusion-flattened state of the system.

We can extend the RD model to account for explicit spatial variation:

    ∂u/∂t = div(D(x)∇u) + γF(u, x), on Ω,   (3)
    ∇u · n = 0, on ∂Ω.   (4)

If we were to proceed as before, we could take u⋆(x) which satisfies F(u⋆(x), x) = 0 for all x ∈ Ω. The diffusion term div(D(x)∇u⋆(x)) is not zero in general, which would mean u⋆(x) is not a steady state solution of Equation (3). Thus, it does not make sense to analyse its stability. So, in order to extend the definition of a Turing instability, we need to find a different base state u⋆(x) which satisfies the steady state problem for Equations (3) and (4) but also should not be called a Turing pattern. Whilst a 'pattern' is often defined as any stable stationary heterogeneous solution, we reserve the definition of pattern in this manuscript to describe any stationary heterogeneous state separate to the base state.

As it stands, there is no conventional way of finding or defining more generally what this base state is. The only thing that can be said about the base state u⋆(x) is that it should be somehow sensibly analogous to the uniform base state described for the homogeneous system.

We will narrow the scope of our efforts to investigate this system to the case where heterogeneity is in the reaction term only. Specifically, we look at systems with heterogeneous production rates of each species, as we believe that this situation is ubiquitous in biological applications where morphogen is differentially expressed in space but reactions between morphogens are autonomous, as one might expect.
Thus, the form of the RD equation that we will be analysing is as follows, and splits F up into autonomous, homogeneous ˆF and heterogeneous G components. How this partition should be done appropriately and uniquely we will discuss here, outlining the approach that we have taken, but we will justify this approach in Section 2.1.

    ∂u/∂t = D∇²u + γ(ˆF(u) + G(u, x)), on Ω,   (5)
    ∇u · n = 0, on ∂Ω.   (6)

To analyse this system, we will find it useful to 'grow' the heterogeneous components by means of a parameter θ by defining the parameterised problem

    ∂u/∂t = D∇²u + γ(ˆF(u) + θG(u, x)), on Ω,   (7)
    ∇u · n = 0, on ∂Ω.   (8)

Importantly, the parameter θ in these models describes the amplitude of the heterogeneity in the system: when θ → 0 a classical system is recovered, and when θ → 1 the full heterogeneous problem is recovered. As θ may be thought of as the amplitude of the heterogeneity and easily absorbed into G, it is possible to also think of θ growing beyond 1 and simply forming part of a growing G in Equations (5) and (6).

Whilst there is freedom in the choice of the partition of F in Equation (3) into G and ˆF in Equation (5), we find it appropriate to uniquely define G and ˆF for a given F in the following way:

    ˆF = (1/|Ω|) ∫_Ω F(u, x) dx,   (9)
    G = F − ˆF.   (10)

This is a convenient choice when the reaction term can be decomposed into a spatially-independent coupling term and a spatially-dependent source term, resulting in the following:

    F(u, x) = ˆF(u) + G(x),

where the average value of G is 0. Furthermore, by using this decomposition for F, we ensure that for each θ the parameterised system (Equation (7)) adheres to the same decomposition rules whilst at the same time capturing the autonomous reactions in F within ˆF; often it is these terms which are the characteristically important ingredients in the Turing behaviour of the system (noting that F → ˆF as θ → 0).
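The decomposition of Equations (9) and (10) can be carried out numerically with simple quadrature. The following Python sketch (ours; the paper itself does not use Python, and all names here are our own) applies it to a Schnakenberg-type reaction with the heterogeneous substrate production β(x) = β0(1 + cos(nπx)) on Ω = (0, 1), i.e. the θ-absorbed version of the profile specified later in Section 2.2.1:

```python
import numpy as np

def trapezoid(vals, x):
    """Trapezoidal quadrature along the first axis of `vals`."""
    dx = np.diff(x)
    return 0.5 * np.sum((vals[1:] + vals[:-1]) * dx[:, None], axis=0)

def decompose_reaction(F, x_grid):
    """Split F(u, x) into the spatial average F_hat(u) (Equation (9))
    and the zero-mean remainder G(u, x) = F - F_hat (Equation (10))."""
    length = x_grid[-1] - x_grid[0]

    def F_hat(u):
        vals = np.array([F(u, x) for x in x_grid])
        return trapezoid(vals, x_grid) / length

    def G(u, x):
        return F(u, x) - F_hat(u)

    return F_hat, G

# Schnakenberg-type reaction on Omega = (0, 1) with heterogeneous
# production beta(x) = beta0 * (1 + cos(n*pi*x)) (theta absorbed into G).
beta0, n = 0.8, 1
def F(u, x):
    beta = beta0 * (1.0 + np.cos(n * np.pi * x))
    return np.array([-u[0] * u[1] ** 2 + beta,
                      u[0] * u[1] ** 2 - u[1] + (1.0 - beta)])

x_grid = np.linspace(0.0, 1.0, 2001)
F_hat, G = decompose_reaction(F, x_grid)

u_star = np.array([beta0, 1.0])   # uniform steady state: F_hat(u_star) ~ 0
residual = F_hat(u_star)
mean_G = trapezoid(np.array([G(u_star, x) for x in x_grid]), x_grid)  # ~ 0
```

By construction G averages to zero over the domain, and the uniform steady state of ˆF (here (β0, 1)) supplies the starting point u⋆0 for the continuation described in Section 2.1.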
2.1 Base states

In this section, we attempt to redefine the base state of a heterogeneous reaction-diffusion system as a parameterised continuation of a nearby homogeneous system. A necessary condition on the base state of a reaction-diffusion system (Equations (7) and (8)) is that it must be a stationary solution, against which stability can later be checked.

The base state of Equations (7) and (8) shall be labelled as u⋆θ(x) (and sometimes as u⋆θ(x; θ) to highlight dependence on the parameter θ). We have that u⋆θ(x) is a solution to

    D∇²u + γ(ˆF(u) + θG(u, x)) = 0, on Ω,   (11)
    ∇u · n = 0, on ∂Ω.   (12)

Since the base state should become the uniform steady state as θ → 0, we have that u⋆0 ∈ R^Ns (where Ns is the number of species in the model) is constant in x and ˆF(u⋆0) = 0.

It makes sense to represent Equations (11) and (12) as the single equation

    Φ(u, x, x̄; θ) = (D∇²u(x) + γ(ˆF(u(x)) + θG(u(x), x)), ∇u(x̄) · n)⊤ = 0,   (13)

where x ∈ Ω and x̄ ∈ ∂Ω.

In order to label a solution to Equation (13) as a base solution, we will further require that it varies continuously with respect to θ. In this way, the base states of the system are tied, via continuation of the parameter θ, to the base state u⋆0 (uniform steady state) of an associated homogeneous system (as θ → 0).

To ensure the existence of u⋆θ for some θ ≠ 0, we can find some η > 0 and u⋆θ : (−η, η) → C²(Ω, R^Ns) such that u⋆θ uniquely solves Equation (13) and u⋆0 solves ˆF(u⋆0) = 0. The value η provides a region where any −η ≤ θ ≤ η is guaranteed to have a base state solution. Outside of (−η, η), the amplitude of the heterogeneity may become so large that it is not possible to draw a continuation from u⋆0.

We define the Jacobian

    ¯Jθ(u, x) = ∂Φ/∂u = (Jθ(u, x), n · ∇)⊤   (14)
             = (D∇² + γ(jˆF(u) + θjG(u, x)), n · ∇)⊤.   (15)

Here jˆF(u) and jG(u, x) are the Jacobians of ˆF and G respectively.
For continuity and uniqueness of u⋆θ in θ at θ = 0 by the Implicit Function Theorem (IFT) [3], we require ¯J0 to be invertible at u⋆0, and therefore we require that jˆF(u⋆0) is nonsingular.

Singularity in ¯Jθ allows for the possibility that θ may become too large in magnitude for there to exist a defined base state u⋆θ. It is unclear in general how large a heterogeneity (θ) can become before the base state either stops existing or is not unique, or even whether the base state is bound in this way at all. Defining the base state outside of some potential maximum range θ− < θ < θ+ is problematic and in our framework not (yet) possible. The values of θ− and θ+ coincide with folds in the solution to Equations (11) and (12), characterised by singularities in ¯Jθ− and ¯Jθ+.

Definition 1 (Spatially-dependent Turing base state). For each u0 ∈ R^Ns such that ˆF(u0) = 0, we define the associated spatially-dependent Turing base state (or just base state) for Equation (5) as follows. Suppose there exists u⋆θ(x; θ) ∈ C¹(Ω × (0, 1], R^Ns) which is a steady state solution to Equation (7) for all θ ∈ (0, 1] and where u⋆0(x; 0) = u0. Then u⋆1(x) is a Turing base state to the spatially-dependent RD system (Equations (5) and (6)) associated with the uniform base state u⋆0(x).

Defining the base state in this way is a natural extension of the classical homogeneous case, since the heterogeneous base state should not deviate too far from the uniform one in the situation where the amplitude of the heterogeneity in the system is small. In other words, if the heterogeneity in the system is small, we would expect the base state to be almost 'flat' from diffusion.

As an important note, we have chosen to define ˆF and G using Equations (9) and (10); in doing so we ensure that all autonomous terms in F (for example, reaction kinetics between species which drive Turing instabilities) are encapsulated in ˆF.
Clearly, it is possible to simply define G = F and ˆF = 0. With this choice, we immediately see that jˆF(u⋆0) is singular and continuation to the heterogeneous base state is impossible.

In the case where ˆF ≠ 0, we have

    ¯J0(u⋆0, x) = ∂Φ/∂u |_{u=u⋆0, θ=0} = (D∇² + γjˆF(u⋆0), n · ∇)⊤.

We apply this to cj ˆwm, where cj ∈ R^Ns is the jth eigenvector of Am = −Dk²m + γjˆF(u⋆0) and ˆwm is the eigenfunction solving ∇²ˆwm = −k²m ˆwm on Ω with ∇ˆwm · n = 0 on ∂Ω. This gives us the following:

    (D∇²cj ˆwm + γjˆF cj ˆwm, n · ∇cj ˆwm)⊤ = (Am cj ˆwm, 0)⊤ = (λj(Am), 0)⊤ cj ˆwm,

where λj(Am) is the eigenvalue associated with the eigenvector cj. This eigenvalue determines the stability of the eigenvector cj ˆwm. So if any eigenvector cj has a corresponding λj(Am) = 0, the operator ¯J0 will not be invertible and the conditions for the IFT would not be satisfied.

The continuation of base states from θ = 0 cannot proceed unless G(u⋆0, x) is orthogonal to every eigenvector in the null space of the adjoint operator (∂Φ/∂u)*. That is,

    ∫_Ω G(u⋆0, x)⊤ v dx = 0,  ∀v ∈ null(D∇² + γjˆF⊤).

This is a result of Fredholm's alternative [2]. This solvability condition is not guaranteed. So for any chosen parameterisation, there may still be cases where continuation is impossible about θ = 0.

We have chosen to multiply the heterogeneity G by a parameter θ. Of course, this parameterisation of heterogeneity (θ = 1) from the associated homogeneous system (θ = 0) is not unique. In Equations (7) and (8) we increase the size of the heterogeneity linearly with the parameter θ. A more general parameterisation could be

    ∂u/∂t = D∇²u + γˆF(u) + γG(u, x; θ),   (16)

provided that ˆF(u) + G(u, x; 1) ≡ F(u, x) and G(u, x; 0) ≡ 0.

The IFT only provides information about the existence and uniqueness of the base state solution branch locally. The existence and uniqueness of the base state solution at θ = 1 is unknown a priori.
In particular, it is unknown whether changing the parameterisation of G will lead to a change in the base state or in the existence of the base state. For this, a global homotopy result would be required.

The analysis by Krause et al. gives a general stability theory for a large perturbation in the limit as γ approaches ∞ [6]. However, little attention is given to redefining the base state for the Turing instability. The analysis assumes that a steady state solution to the full RD equation (Equations (3) and (4)) exists, and that this solution has certain properties. The first property is that the solution does not have spatial oscillations on the scale O(1/ϵ). This is an a posteriori assumption, since no method is provided for determining whether the base state u⋆(x) has O(1/ϵ) oscillations without first finding u⋆(x). Since the heterogeneous RD equation is nonlinear in general, finding such a solution is non-trivial. Finally, it is assumed that F satisfies the boundary conditions ∂u/∂x = 0 at x = 0, 1.

2.2 Case studies

In our numerical investigation, we focus attention on two popular models: the Schnakenberg model and the Gierer-Meinhardt model. In their standard homogeneous forms, the Schnakenberg model is widely studied as a substrate-depletion Turing system whilst the Gierer-Meinhardt model is a typical activator-inhibitor Turing system. In both of these cases we consider only one-dimensional domains Ω = (0, 1) on which to solve the PDEs, and on the boundaries each of the species has no-flux conditions.

2.2.1 Schnakenberg model

The parameterised heterogeneous Schnakenberg model we will be using is as follows:

    ∂u/∂t = ∇²u + γ(−uv² + β(x)),   (17)
    ∂v/∂t = d∇²v + γ(uv² − v + η(x)).   (18)

Here d represents the relative diffusion of the activator v compared to that of the substrate u, whilst β and η are spatially dependent production rates.
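In the homogeneous (θ = 0) limit of these kinetics, the classical Turing conditions can be checked directly from the dispersion relation: the eigenvalues of −k²D + γJ, where J is the kinetics Jacobian at the uniform steady state. The Python sketch below (our own illustration; the paper's computations use pde2path) does this for constant production β ≡ β0 = 0.8, η ≡ 1 − β0 (the θ = 0 case of the production profiles specified below) and parameter values quoted later in Section 3.1:

```python
import numpy as np

# Homogeneous (theta = 0) Schnakenberg kinetics:
#   F_hat(u, v) = (-u v^2 + beta0, u v^2 - v + 1 - beta0).
beta0, d, gamma = 0.8, 1.0 / 40.0, 900.0   # values quoted in Section 3.1

# Uniform steady state solving F_hat = 0: u* = beta0, v* = 1.
u_s, v_s = beta0, 1.0

# Jacobian of the kinetics at (u*, v*).
J = np.array([[-v_s ** 2,         -2.0 * u_s * v_s],
              [ v_s ** 2,    2.0 * u_s * v_s - 1.0]])
D = np.diag([1.0, d])

def growth_rate(k):
    """Largest real part of the eigenvalues of -k^2 D + gamma J."""
    A = -k ** 2 * D + gamma * J
    return np.max(np.linalg.eigvals(A).real)

stable_without_diffusion = growth_rate(0.0) < 0.0
# Neumann modes on Omega = (0, 1) have wavenumbers k_m = m * pi.
unstable_with_diffusion = max(
    growth_rate(m * np.pi) for m in range(1, 80)) > 0.0
# Both True: the diffusion-driven (Turing) instability signature.
```

With these values the fastest-growing modes sit at wavenumbers of roughly k ≈ 100, so many pattern wavelengths fit in the unit domain, consistent with the "large domain" description of the γ = 900 case in Section 3.1.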
We will focus on a particular form of β and η in which we parameterise the scale of both the amplitude and frequency of the production heterogeneity:

    β(x) = β0(1 + θ cos(nπx)),   (19)
    η(x) = 1 − β(x).   (20)

In this way, at each position a combined dimensionless activator/substrate production of 1 is assumed. The parameter 0 ≤ β0 ≤ 1 describes the average proportion of this production specific to the substrate, and the parameter 0 ≤ θ ≤ 1 describes the degree of redistribution of the relative production into n periods of peaks and troughs on the domain Ω.

2.2.2 Gierer-Meinhardt model

The parameterised heterogeneous Gierer-Meinhardt model is given as follows:

    ∂u/∂t = ∇²u + γ(u²/v − bu + a(x)),   (21)
    ∂v/∂t = d∇²v + γ(u² − v).   (22)

This model is controlled by the heterogeneous production rate a(x) of the activator u. We will use a periodic heterogeneity of the form

    a(x) = a0(1 + θ cos(nπx)),

where a0 ∈ R is the average production rate.

2.3 Numerical methods

To generate numerical results we use the numerical continuation method presented by Uecker [15] to find solutions of Equations (11) and (12); starting at u⋆0, we find base states for the heterogeneous problem. We begin with the statement that Φ(u, x, x̄; θ) = 0 (u must be a solution to Equations (11) and (12)). Differentiating with respect to θ,

    0 = (∂Φ/∂u)(∂u/∂θ) + ∂Φ/∂θ.

So long as ∂Φ/∂u is nonsingular, ∂θu can be estimated. As such, the base states (and other steady states of the reaction-diffusion system) can easily be found by starting at θ = 0 and incrementing θ using a forward Euler approach:

    uθ+∆θ = uθ + (∂uθ/∂θ)∆θ   (23)
          = uθ − (∂Φθ/∂u)⁻¹(∂Φθ/∂θ)∆θ,   (24)

where subscripts indicate the value of θ. The solution generated by Equation (24) is then corrected to reduce error. This is done by setting uθ+∆θ as the initial seed of a Newton solver for the problem Φ = 0.
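The predictor-corrector loop of Equations (23) and (24) can be sketched generically. The Python fragment below is a minimal illustration on a scalar toy problem with a known solution branch; it is not the PDE discretisation and not the pde2path implementation, and all names are our own:

```python
import numpy as np

def continue_branch(Phi, jac_u, dPhi_dtheta, u0, thetas):
    """Natural continuation in theta: Euler predictor (Equation (24))
    followed by a Newton corrector on Phi(u, theta) = 0."""
    u = np.asarray(u0, dtype=float)
    branch = [u.copy()]
    for th0, th1 in zip(thetas[:-1], thetas[1:]):
        dth = th1 - th0
        # Predictor: u <- u - (dPhi/du)^{-1} (dPhi/dtheta) * dtheta.
        u = u - np.linalg.solve(jac_u(u, th0), dPhi_dtheta(u, th0)) * dth
        # Corrector: Newton iterations at the new theta value.
        for _ in range(25):
            r = Phi(u, th1)
            if np.max(np.abs(r)) < 1e-13:
                break
            u = u - np.linalg.solve(jac_u(u, th1), r)
        branch.append(u.copy())
    return np.array(branch)

# Toy problem (not the RD system): Phi(u; theta) = u^2 - 1 - theta,
# whose branch through u(0) = 1 is u(theta) = sqrt(1 + theta).
Phi = lambda u, th: u ** 2 - 1.0 - th
jac_u = lambda u, th: np.diag(2.0 * u)
dPhi_dtheta = lambda u, th: -np.ones_like(u)

branch = continue_branch(Phi, jac_u, dPhi_dtheta, [1.0],
                         np.linspace(0.0, 1.0, 11))
```

The predictor keeps the Newton corrector inside its basin of attraction; skipping it (i.e. pure Newton from the previous solution) risks converging to a different branch, as the text notes.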
We did not find it necessary to use more advanced techniques for increasing θ.

It is possible to skip the approximate update of Equation (24) and simply use a nonlinear solver on Φ = 0 in the vicinity of uθ. This is, however, not a good idea, since it significantly increases computational time in the nonlinear solver and can sometimes even result in the nonlinear solver finding a different steady state solution (of which there may be many). In any case, we make use of the pde2path package, which implements this routine.

Finally, pde2path determines stability by looking at the sign of the largest real component of the eigenvalues of the linearised right-hand side of the PDE.

In the next section we explore numerical results which give insight into the behaviour of Turing systems with heterogeneous production rates. We first look at the characteristic behaviour of base states (Section 3.1). Noting that base states often terminate at a fold bifurcation for a sufficiently large value of θ, it is clear that for some problems, if a heterogeneity is large enough, a base state is not defined under our definition. We therefore investigate more thoroughly what determines whether a base state exists, i.e. how large θ can become before a fold bifurcation is reached (Section 3.2). Lastly, we examine how heterogeneous production can affect the critical domain lengths required for Turing patterning (Section 3.3).

3 Numerical results and discussion

3.1 Continuation of steady states

The first numerical results illustrate the behaviour of the Schnakenberg Turing system described in Section 2.2.1 as the heterogeneous production term is increased in amplitude, by tracing the base state and patterned states through numerical continuation of the amplitude parameter θ. We will first look at some example cases to illustrate the types of branches that can be found.
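Before turning to the examples, the stability test described in Section 2.3 (the sign of the largest real eigenvalue of the linearised system) can be mimicked outside pde2path with a small dense-matrix sketch. The finite-difference discretisation and grid size below are our own illustrative choices, applied to the homogeneous Schnakenberg linearisation about (β0, 1); the criterion flags instability with diffusion but stability without it:

```python
import numpy as np

def neumann_laplacian(N, h):
    """Second-difference matrix with ghost-point no-flux boundary rows."""
    L = (np.diag(-2.0 * np.ones(N))
         + np.diag(np.ones(N - 1), 1)
         + np.diag(np.ones(N - 1), -1))
    L[0, 1] = 2.0
    L[-1, -2] = 2.0
    return L / h ** 2

beta0, d, gamma, N = 0.8, 1.0 / 40.0, 900.0, 200
h = 1.0 / (N - 1)
Lap = neumann_laplacian(N, h)
I = np.eye(N)

# Kinetics Jacobian of the homogeneous Schnakenberg system at (beta0, 1).
J = np.array([[-1.0, -2.0 * beta0],
              [ 1.0,  2.0 * beta0 - 1.0]])

def largest_growth_rate(with_diffusion):
    """Max real eigenvalue of the linearised right-hand side."""
    Duu = Lap if with_diffusion else np.zeros((N, N))
    Dvv = d * Lap if with_diffusion else np.zeros((N, N))
    A = np.block([[Duu + gamma * J[0, 0] * I, gamma * J[0, 1] * I],
                  [gamma * J[1, 0] * I,       Dvv + gamma * J[1, 1] * I]])
    return np.max(np.linalg.eigvals(A).real)

stable_without_diffusion = largest_growth_rate(False) < 0.0
unstable_with_diffusion = largest_growth_rate(True) > 0.0
```

In pde2path the same test is applied to the linearisation about each computed steady state along a branch, which is how the stable (blue) and unstable (red) branches in the figures below are distinguished.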
For all the following results we will use the parameters d = 1/40, β0 = 0.8 and n = 1 unless otherwise stated. Later we will show results for the Gierer-Meinhardt model of Section 2.2.2, where we will use the default parameters d = 20, b = 1 and a0 = 0.1 unless otherwise stated. When θ = 0 these parameters are known to give a Turing instability in the base state. The parameter γ, which encodes the domain length amongst other things, will be varied between examples to show how the base state behaves as it varies. In order to visualise the steady state solution branches, we will plot the maximum value on the domain of only the variable u against the parameter θ. This metric has been chosen arbitrarily in order to distinguish between solutions. It is important to remember when interpreting these bifurcation plots that the branches are only a projection of the infinite-dimensional function space onto a single scalar value for plotting purposes. Importantly, this means that when branches intersect at non-smooth intersections, it is not possible that this is a continuation. Instead, at the point of intersection each branch corresponds to completely unrelated functions (other than the fact that they share a common maximal value of u).

In many cases, we observe that the continuation in θ can generate base states indefinitely. We can also observe two main bifurcation events on the branch containing the base state. The first of these is a fold at which the base state and the stable patterned state emerge. The second is an example of a fold terminating the base state, but where the Turing patterned state never bifurcates from the base state (they are, instead, perfectly disconnected). By saying 'patterned state' we are implying that there is a branch corresponding to a non-homogeneous but also stable steady state (indicated in blue in each figure).
Finally, we demonstrate some exotic behaviour of the steady states under some conditions.

Base state with no limitation

In the simplest case, starting with u⋆0 and growing the heterogeneous term of Section 2.2.1 by increasing θ, no folds were found in increasing θ from 0 to 1. It is important to note that this does not mean that the base states will extend for an arbitrarily large θ. For the Schnakenberg system in Section 2.2.1, we find that this often occurs for large γ, and in Fig. 1 we use the value γ = 900. This corresponds to a very large domain in relation to the expected wavelength of any Turing patterns. Our value of γ corresponds to a value of ϵ ≈ 1.1 × 10⁻³ in the paper by Krause et al. [6]. We find that in this case the base state exists by numerical continuation and, furthermore, that it is approximately equal to the steady state obtained by neglecting diffusion. This is expected because it is clear from Equations (11) and (12) that, unless θ is large on the order of γ, for large γ we simply have to leading order that u⋆θ solves ˆF(u) + θG(u, x) = 0.

Base state fold connected to a patterned state

We observe different behaviour in the base state for non-large γ. If γ is small, but not so small that Turing patterns cannot be observed in the homogeneous Schnakenberg system (due to the domain size being less than the necessary critical domain length), then we observe a critical fold in the base state solution. In Fig. 2, we use the value γ = 1. When θ = 0, this corresponds to the case where there is just one unstable wavenumber, corresponding to a Turing pattern with just a half period on the full domain. In this case, the branch for a patterned state merges with the branch of the base state, undergoing a fold bifurcation as seen in Fig. 2. This means that the base state becomes closer and closer to a patterned state until both states are indistinguishable from each other at the fold bifurcation.
For heterogeneities with an amplitude θ beyond this fold (shown with a green dot in Fig. 2), we are unable to objectively define a suitable base state, and therefore it becomes ambiguous as to whether or not a 'Turing' pattern is observed in the solution of the reaction-diffusion problem. Indeed, whilst a steady state solution to the reaction-diffusion equation is expected beyond the fold, we do not know where this solution is by numerical continuation from θ = 0 without significant work. That is, there are other missing branches here, and it remains unclear if any of these are reasonable candidates to be defined as a 'base state' at this stage; further work here is needed. In Fig. 2, one can see the stable patterned state but also an unstable patterned state. For θ = 0 there are at least two patterned states. These states appear in the bifurcation diagram as mirrored functions. Interestingly, if the heterogeneity is inverted in sign (θ ∈ [−1, 0]), continuation shows a mirror image of the bifurcation diagram in Fig. 2.

[Figure 1 appears here: bifurcation diagram plotting ∥u∥∞ against θ, titled "Branch of solutions for γ = 900", with legend entries Initial Solution, Unstable Branch, Stable Branch.]

Figure 1: Schnakenberg system bifurcation diagram for growing heterogeneity θ ∈ [0, 1]. Parameters used are characteristic of large domains relative to the Turing pattern wavelength (γ = 900), with β0 = 0.8 and d = 1/40. When θ = 0, the system solves a classical Turing system where the base state is homogeneous and indicated with an ×. As the heterogeneity θ grows, so does the base state. A number of examples of the spatial distribution of u along the (red) unstable base state u⋆θ are displayed. In this case, the base state is allowed to grow continuously without a fold. On the other hand, a (blue) stable Turing 'patterned' state branch is also shown with some displayed distributions of u.
This is found by solving the full reaction-diffusion equation at θ = 0 and applying the numerical continuation.

Figure 2: Schnakenberg system bifurcation diagram (∥u∥∞ against θ) for growing heterogeneity θ ∈ [0, 1]. Parameters used are characteristic of small domains relative to the Turing pattern wavelength (γ = 1), with also β0 = 0.8 and d = 1/40. When θ = 0, the system solves a classical Turing system where the base state is homogeneous and indicated with an ×. As the heterogeneity θ grows, so does the base state. A number of examples of the spatial distribution of u along the (red) unstable base state u⋆θ are displayed. In this case, the base state merges with the stable patterned state at around θ = 0.09. The blue branches are stable patterned states, but only the solid branch can be obtained by continuing through the fold. The dot-dash branch can be found through continuation of a fold in the base state when decreasing θ from the θ = 0 base state.

Base state fold not connected to a patterned state
At intermediate values of γ, more curious behaviour is possible. This is in part because these values permit multi-wavelength heterogeneous steady states. In Fig. 3 we now display the bifurcation diagram for γ = 9 (analogous to a three-fold increase in domain length over the example in Fig. 2). The key observation in Fig. 3 is that, whilst the base state branch also undergoes a fold bifurcation, the solution branch with which it merges is an unstable heterogeneous steady state (not a stable pattern). This illustrates that the base state branch can merge with another branch which is not a branch of patterned states. In considering Fig.
1, where the base state seemingly continues indefinitely without folds, it is possible that a fold is present in a similar way to how it appears in Fig. 3, but at sufficiently large values of θ. If this is the case, our observations might suggest that as γ gets very large, so too do the values of θ at which base state folding first occurs.

Exotic behaviour
While the previous examples show two branches originating at θ = 0 converging, this does not capture all possibilities. In a more bizarre scenario, we can consider the case where γ = 3.61. As shown in Fig. 4, the system undergoes many folds before merging with another solution branch which contains θ = 0. Furthermore, there are stable steady states which are only present for a discrete range of θ values. To demonstrate this behaviour and the way the branch closes on itself, it was necessary to continue in both the positive and negative θ directions from u⋆0.

3.2 Base state existence
In order to have a discussion about Turing patterns, it is important for a base state to exist. It is therefore critical to explore what determines θ+, the maximum size that θ can take before a critical point such as a fold is encountered. To accomplish this we performed parameter scans on both the Schnakenberg and Gierer-Meinhardt models from Sections 2.2.1 and 2.2.2. Our immediate observation from these scans is that fold bifurcations are very common. In particular, we observed more folds when the spatially-dependent source term G(u, x) varies explicitly in space with frequencies similar to those of unstable eigenvectors in the dispersion relation.

In Fig. 5 we look at θ+ for the Schnakenberg model (a) and the Gierer-Meinhardt model (b). In Fig. 5 (a) we plot θ+ as the scale parameter γ and the parameter β0 in the Schnakenberg model are varied, whilst in (b) we instead vary the parameter α0 in the Gierer-Meinhardt model.
In both cases, we have plotted, in red, the curves that relate to the eigenvalue condition Λm = maxj ℜ(λj(Am)) = 0 for m = 1, 2, 3 (for curves left to right). We note that in our test problems we do not have strictly imaginary eigenvalues, so along these curves J̄0 is singular and we expect that θ+ is not finite. For each constant β0 (or α0) we see that Λm = 0 at most twice, because solving Λm = 0 requires solving a quadratic. Between the

Figure 3: Schnakenberg system bifurcation diagram (∥u∥∞ against θ) for growing heterogeneity θ ∈ [0, 1]. Parameters used are characteristic of intermediate domains relative to the Turing pattern wavelength (γ = 9), with also β0 = 0.8 and d = 1/40. When θ = 0, the system solves a classical Turing system where the base state is homogeneous and indicated with an ×. As the heterogeneity θ grows, so does the base state. A number of examples of the spatial distribution of u along the (red) unstable base state u⋆θ are displayed. In this case, the base state merges with an unstable heterogeneous steady state at around θ = 0.12. The blue branch is a stable patterned state, but the dot-dash nature of this branch indicates that it is not obtained by continuation past a fold from the steady state, but instead by solving the reaction-diffusion equation with θ = 0 until steady state and using continuation from there.

Figure 4: Schnakenberg system bifurcation diagram (∥u∥∞ against θ) for growing heterogeneity θ ∈ [−1, 1]. Parameters used are characteristic of narrowly defined domains relative to the Turing pattern wavelength (γ = 3.61), with also β0 = 0.8 and d = 1/40.
When θ = 0, the system solves a classical Turing system where the base state is homogeneous and indicated with an ×. As the heterogeneity θ grows, so does the base state. A number of examples of the spatial distribution of u along the (red) unstable base state u⋆θ are displayed. Note that here the base state would only be defined between approximately −0.05 and 0.05. By continuing through each fold, we end up back at u⋆0. Interestingly, this closed loop contains three different patterned branches (blue), but not a patterned branch on approximately ±(0.03, 0.04). It is expected that the patterned state obtained by solving the reaction-diffusion equation in this regime is not connected here.

Figure 5: Size of continuation before a fold, θ+, for (a) the Schnakenberg model and (b) the Gierer-Meinhardt model as γ is varied along with (a) β0 and (b) a0, respectively. The size of the continuation is presented in color on a log scale. All of these results are given for n = 1 in the heterogeneous term in the respective models. Red curves are drawn on the figures to correspond with Λm = maxj ℜ(λj(Am)) = 0 for m = 1, 2, 3 (for curves left to right on both subfigures), where λj(Am) are eigenvalues defined in Section 2.1. A background color of white indicates that no fold was found for these parameter sets and θ was allowed to grow to 1.
Figure 6: Size of continuation before a fold, θ+, for (a) the Schnakenberg model and (b) the Gierer-Meinhardt model as γ and n are varied for each model. The size of the continuation is presented in color. Setting (a) β0 = 0.8 and (b) a0 = 0.1 in each model respectively, Λn = maxj ℜ(λj(An)) = 0, where λj(An) are eigenvalues defined in Section 2.1, has two solutions. The solution with smallest γ is shown on the blue line and the other is shown on the red line. The background color of white indicates that no fold was found for these parameter sets and θ was allowed to grow to 1. In (b) the green dashed line is an overlay of the red line with half of the value of n for each γ. This curve surprisingly traces a pattern of small θ+. In (a) a red × indicates a continuation that runs into numerical difficulties.

two values, we find that Λm > 0 and thus the mth mode of the homogeneous problem is unstable. On these curves, J̄0 is singular. As previously established, we expect that continuation is not possible on these curves. In the region shown in white, we found no upper bound on θ+. This region also corresponds to the subset of the parameter space where the associated homogeneous system is devoid of Turing patterning. The red curves furthest to the left correspond to m = 1 (corresponding to the onset of Turing instability in the eigenfunction cos(πx) at θ = 0). Note that our growing heterogeneity is also of this form, cos(nπx) with n = 1 (see Equations (19) and (23)). We find that, because of this, a fold is very quick to form in the numerical continuation near the red curve corresponding to m = 1, but not near the onset of instability for the higher modes. Small θ+ is shown by darker colors in the plot.
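The curves Λm = 0 referenced in Figs. 5 and 6 come from the dispersion relation of the homogeneous (θ = 0) problem and can be reproduced with a few lines of linear algebra. The sketch below assumes classical Schnakenberg kinetics f = a − u + u²v, g = b − u²v with diffusion matrix D = diag(1, 1/d); the paper's β0-parameterised production and exact diffusion scaling may differ, so treat this as an illustrative stand-in.

```python
import numpy as np

def schnakenberg_dispersion(gamma, a, b, d, modes=(1, 2, 3)):
    """Lambda_m = max_j Re(lambda_j(A_m)) for A_m = gamma*J - (m*pi)^2 * D,
    assuming classical Schnakenberg kinetics f = a - u + u^2*v,
    g = b - u^2*v and diffusion matrix D = diag(1, 1/d)."""
    u, v = a + b, b / (a + b) ** 2                 # homogeneous steady state
    J = np.array([[-1.0 + 2.0 * u * v, u ** 2],
                  [-2.0 * u * v, -u ** 2]])        # kinetic Jacobian at (u, v)
    D = np.diag([1.0, 1.0 / d])
    return {m: np.linalg.eigvals(gamma * J - (m * np.pi) ** 2 * D).real.max()
            for m in modes}
```

Scanning γ for fixed kinetics and locating the sign changes of each Λm traces out curves analogous to the red curves in Fig. 5.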
To investigate specifically whether small θ+ is associated with m = 1 because n = 1, we varied n in the Schnakenberg model from 1 to 10. In Fig. 6, for each n, holding β0 = 0.8 (a) and α0 = 0.1 (b), we plot the size of the continuation θ+ as γ is increased. We indicate the minimum value of γ (blue line) and the maximum value of γ (red line) for which Λn = 0. That is, for n = 1 the blue and red curves correspond to the first and second intersections of β0 = 0.8 (a) and α0 = 0.1 (b) with the respective red curves in Fig. 5. We see that for each n, the size of θ+ is very small at both zeros of Λn. What is also surprising is that, for n larger than 1, if γ is smaller than that required to make the nth mode unstable in the homogeneous problem, the continuation did not fold. That is, we may have a Turing instability in the homogeneous problem because of an instability in the m = 1 mode, but if the heterogeneity has a higher spatial frequency, say n = 2, the base state may not encounter a fold readily. As the scale parameter γ is increased beyond the red line, we find what appears to be noise in θ+, but within this noise there appear to be patterns. Looking specifically at the Gierer-Meinhardt model in Fig. 6 (b), we see small θ+ near the value of the maximum γ for which Λ2n = 0. We have indicated that this is the case by tracing the green dashed line over the expanse of small θ+. This effect can also be seen in Fig. 5 (a) for n = 1 by looking at the left branch of the m = 2 red curve, where there is a noticeable dark shade. As γ increases, the magnitude to which θ can be continued before reaching a fold tends to increase, before no fold is reached at all. However, numerical instabilities are prevalent in this region, as shown specifically by the red × in Fig. 6 (a), so the accuracy of these results remains questionable. We shall look specifically at the continuation described by this red × in the next section.
The numerical results seem to become more accurate as the spatial grid becomes finer and the maximum step size in θ becomes smaller. Due to the computational cost of producing parameter scan results, the accuracy of the results is here limited.

Numerical Issues
The inconsistent numerical issues that occur occasionally in our parameter sweeping experiments in the previous section are investigated here. In particular, we investigate the red × continuation in Fig. 6 (a). In this continuation

Figure 7: Plot of branches for the numerically inconsistent case highlighted in Fig. 6 (a) with varying maximum step size. In purple, the base state branch and continuation through the fold point (green dot) with very small step sizes is shown. In yellow, a different branch is shown, and the × symbols show the updates in the continuation algorithm if the step size is too coarse. Plot (a) shows the full bifurcation diagram whilst plot (b) displays a zoomed version of the region enclosed in the red box to show detail near the fold point.

a maximum step size of 10−1 was used. This is a relatively large step size, but since the pde2path package adaptively adjusts the step size as needed, it can usually resolve the finer details without much increase in computational cost. However, in this case, the larger step size causes the solution to jump from one branch to another. This can be seen in the bifurcation diagram in Fig. 7, where for a small step size a fold is encountered early in the continuation, but for a large step size the continuation jumps to a different branch. Clearly the results in this region are unreliable. It is not clear how small the step size must be made in order to avoid this occurring.
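The step-size sensitivity can be illustrated on a toy problem. The sketch below is not pde2path (which uses pseudo-arclength continuation and can follow a branch through a fold); it is a minimal natural-parameter continuation on a hypothetical scalar problem F(u, θ) = u² − (θ+ − θ), whose solution branch folds at θ = θ+, with Newton's method as the corrector seeded from the previous solution. Here the fold is only flagged when Newton fails to converge, and an overly large step in θ can likewise carry the corrector into the basin of attraction of a different branch.

```python
import numpy as np

def continue_branch(theta_plus=0.53, dtheta=0.02):
    """Natural-parameter continuation of F(u, theta) = u^2 - (theta_plus - theta):
    the branch u = sqrt(theta_plus - theta) folds at theta = theta_plus. Each
    step seeds Newton's method with the previous solution; the fold is only
    detected when Newton fails to converge back onto the branch."""
    theta = 0.0
    u = np.sqrt(theta_plus)            # start on the branch at theta = 0
    branch = [(theta, u)]
    while True:
        theta += dtheta                # predictor: step the parameter
        for _ in range(60):            # corrector: plain Newton iteration
            residual = u * u - (theta_plus - theta)
            jacobian = 2.0 * u
            if abs(jacobian) < 1e-14:  # singular Jacobian: at/past the fold
                break
            step = residual / jacobian
            u -= step
            if abs(step) < 1e-12:
                break
        if abs(u * u - (theta_plus - theta)) > 1e-8:
            return branch, theta       # Newton failed: fold flagged here
        branch.append((theta, float(u)))
```

Pseudo-arclength continuation avoids the singular Jacobian at the fold by treating θ as an unknown alongside u, which is why pde2path can continue through folds at all; the step-size pathology above is what remains when steps are too coarse.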
It does raise an interesting question though. In this example, it is pretty clear that the (yellow) branch that the coarse numerical algorithm found does not technically satisfy the numerical continuation criteria for a base state. That being said, looking at the distributions on either side of the singularity, it is possible that the yellow branch perhaps should be considered a base state. It remains unclear whether such a suitable branch can be found for other cases. However, this case hints at the possibility that there may be a better definition for a base state than the one presented in this manuscript (one which can potentially always describe a unique state for all problems).

3.3 Critical domain length
The extension of the Turing instability to spatially-dependent RD systems allows us to distinguish between patterned states and base states. Previously these solution states were often indistinguishable. This meant that analysing certain phenomena, such as the critical domain length, was very challenging or impossible. Now that the Turing instability has a spatially-dependent analogue, we can study such phenomena. As a proof of concept, we will study how the critical domain length changes as the size of the heterogeneity in a spatially-dependent RD system increases. The critical domain length has important physical implications, especially in developmental scenarios. In a scenario where the domain is slowly growing, Turing patterns will arise only if the size of the domain is above the critical domain length. Therefore, assessing the impact of a spatially-dependent term on the critical domain length could have key implications for these developmental scenarios. We will attempt to investigate the change in the critical domain length with respect to the size of the heterogeneity for two different reaction terms.

The critical domain length is encoded in a critical γ value which we will call γc.
Denote by γc,0 ∈ R+ the critical γ value for the classical RD system, and by γc,θ ∈ R+ the critical γ value for the heterogeneous RD system with parameter θ. Further, define Lc,0 := √γc,0 and Lc,θ := √γc,θ as the respective critical domain lengths. Here we are accepting Lc = √γc as a non-dimensional equivalent of the critical domain length.

The value of γc,θ is defined as the γ value such that the base state of Equations (7) and (8) is stable for all γ < γc,θ, but exhibits Turing instabilities for some γ > γc,θ. It is infeasible to check all γ values less than some candidate value for γc,θ. Instead, we can rely on the fact that when γ = γc,0, Λm = 0, which can be calculated exactly for both the Schnakenberg model and the Gierer-Meinhardt model.

Instead of parameterising the base state branch with the size of the heterogeneity θ only, we will also parameterise with respect to γ. In doing so, we are assuming that a path independence result holds. That is, the base state solution for some γ0 > 0 can be found by first finding the base state solution for another γ1 > 0, and then continuing from that base state solution with respect to γ to find the solution at γ0. Initially we will use γ = γc,0 to perform the continuation, as this is known exactly and we will assume that it is close to γc,θ. After finding a base state solution with this initial γ value, we perform numerical continuation with respect to γ, increasing or decreasing γ until finding γc,θ for the given θ. We reach the critical value γc,θ when the base state (with respect to γ but constant θ) undergoes a change of stability. If the base state found for γ = γc,0 is stable, then we will increase γ in the second-stage continuation. Likewise, we will decrease γ if the base state is unstable. Determining whether a steady state solution is stable can be done using inbuilt methods in pde2path [15].

We are relying on using γ = γc,0 as an initial condition for the continuation.
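For the homogeneous case θ = 0, the starting value γc,0 can be computed directly from the dispersion relation, since it is the γ at which Λ1 first crosses zero. The sketch below does this by bisection for classical Schnakenberg kinetics f = a − u + u²v, g = b − u²v with D = diag(1, 1/d); these kinetics, and the bracket values, are assumptions standing in for the paper's β0-parameterised model, and for θ ≠ 0 this computation is replaced by the two-stage numerical continuation described above.

```python
import numpy as np

def critical_gamma(a, b, d, lo=0.1, hi=100.0, tol=1e-8):
    """Bisection for gamma_{c,0} at theta = 0: the gamma at which the first
    mode m = 1 loses stability, i.e. Lambda_1(gamma) = 0. Assumes classical
    Schnakenberg kinetics f = a - u + u^2*v, g = b - u^2*v with diffusion
    matrix D = diag(1, 1/d). The critical domain length is then
    L_{c,0} = sqrt(gamma_{c,0})."""
    u, v = a + b, b / (a + b) ** 2                 # homogeneous steady state
    J = np.array([[-1.0 + 2.0 * u * v, u ** 2],
                  [-2.0 * u * v, -u ** 2]])        # kinetic Jacobian
    D = np.diag([1.0, 1.0 / d])

    def Lam1(gamma):                               # growth rate of mode m = 1
        return np.linalg.eigvals(gamma * J - np.pi ** 2 * D).real.max()

    assert Lam1(lo) < 0.0 < Lam1(hi), "bracket must straddle the instability"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if Lam1(mid) < 0.0:                        # still stable: move up
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The non-dimensional critical domain length then follows as Lc,0 = √γc,0.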
However, based on recent analysis of heterogeneous RD systems, there are points where the system with θ = 0 is outside of the Turing region, yet we still expect to see Turing instabilities for a sufficiently large γ [6]. If the homogeneous system defined by θ = 0 is outside of the Turing region, it is unclear what the initial γ value should be. A further investigation into a method for finding the critical domain length in this case should be considered.

Fig. 8 shows the critical domain length Lc for the Schnakenberg system for a range of θ and β0 values. The length Lc appears to be decreasing with respect

Figure 8: Critical domain lengths Lc,θ of the Schnakenberg system described in Section 2.2.1. The critical domain length (and its percentage change) is plotted for a range of heterogeneity sizes θ ∈ {−1/2, −1/3, −1/6, 0, 1/6, 1/3, 1/2} as a function of the parameter β0.

Figure 9: Critical domain lengths Lc,θ of the Gierer-Meinhardt system described in Section 2.2.2. The critical domain length (and its percentage change) is plotted for a range of heterogeneity sizes θ as a function of the parameter a0.

Figure 10: Production rates for the first chemical, u, and the second chemical, v, for the Schnakenberg model of Section 2.2.1. Plots (a) and (b) describe the model with β0 = 0.8 and β0 = 0.9 respectively.
Each figure also shows the regions where the system is locally within the classical Turing pattern-generating parameter space. These plots are made for θ = 1/3, meaning that we found a critical domain length for the system shown in (b), but not in (a). In (a), the regions that are driving the Turing instability in the whole domain are further apart, and it is possible that these are effectively decoupled. In this case, we would still expect to find a critical domain length, but a significantly larger one (where Turing patterns can be associated with the sub-domains which locally drive Turing patterns).

to β0 and increasing with respect to θ. On the other hand, Fig. 9 shows that the critical domain lengths for the Gierer-Meinhardt system appear to have the reverse dependence on the parameter a0.

For a given production rate, if the θ = 0 system is within the Turing region, then we expect to have a critical domain length for every other θ value. This is because the cosine heterogeneity will cause at least one interval of the domain to be within the Turing region locally. Thus, for sufficiently large γ, we expect to see Turing patterns [6]. However, our method for finding the critical domain length fails in many of these cases. Most notably, the critical domain length could not be found for any β0 value when θ = 1/2, as seen in Fig. 8. This is potentially because there is a decoupling effect between two intervals which are locally within the Turing region. Fig. 10 shows the regions where the systems with θ = 1/3 and β0 = 0.8, 0.9 are locally within the Turing region. As seen in Fig. 8, a critical domain length could be found for β0 = 0.9, but not for β0 = 0.8. Although the Turing regions are larger in the case where β0 = 0.8, the region between the two Turing regions is also larger.
This gap between the Turing regions could have a decoupling effect where, if the two regions are close enough together, they can act as one region for the purposes of forming a Turing instability. That is, there is enough bleed-through from one region to the other to support a Turing pattern, despite there being a region in between where no Turing pattern can be supported. So in this case there would be a critical θ value after which γ must be significantly larger before observing Turing instabilities which are local to the respective Turing regions.

4 Conclusions
Despite being widely applicable to various problems in science, Turing instabilities in spatially-dependent reaction-diffusion systems have received very little attention in the literature. One of the roadblocks to understanding the behaviour of these systems is the lack of a definition for Turing instabilities when the problem depends on the spatial coordinate. The classical definition relies on the existence of a uniform steady state solution; however, no such steady state exists for spatially-dependent problems in general. In reformulating the definition, the problem arises of distinguishing between patterned states and the base state. The base state in the classical case is the uniform steady state. Since the steady state solutions of most spatially-dependent reaction-diffusion systems are non-uniform, it is unclear which states we should label as 'patterned', and which as the 'base state'. In order to link the spatially-dependent case with the classical case, we utilise tools from continuation to gradually increase the size of the heterogeneity. That is, the spatially-dependent term (or heterogeneity) is parameterised such that the heterogeneity vanishes initially, and grows to full amplitude as the introduced parameter increases.
Once at full amplitude, the base state solution to the reaction-diffusion equation is the solution found through continuation with a full-amplitude heterogeneity. This grounds the spatially-dependent base state to the classical base state, and allows us to distinguish between patterned and non-patterned states. Defining the base state through continuation also provides a method for finding the base solution using numerical continuation.

While we have extended the definition of the Turing base state, this does not directly extend the definition of the Turing instability. Traditionally, a Turing instability requires the base state to be stable to constant perturbations, and unstable overall. The stability-to-constant-perturbations condition is not relevant for a spatially-dependent base state. As such, the extension of the first Turing condition is not trivial even after defining the base state. We therefore discussed a few possibilities for how this condition could be extended, and the benefits of each possibility. Much more research can be done to analyse the properties of each of these definitions.

After defining the base state for heterogeneous Turing systems, it remains to determine whether such base states exist. We provided a variety of case studies showing that the existence of heterogeneous base states is not guaranteed. Further, we could not determine, a priori, whether base states exist for a finite-size heterogeneity. To investigate this further, two parameter scans were performed. The first varied the average production rate of the first chemical and the length of the domain. The second varied the form of the heterogeneity and the length of the domain. Both parameter scans were tested with both the Schnakenberg and the Gierer-Meinhardt reactions. For each set of parameters chosen, we measured how far the branch of solutions could be continued before reaching a fold bifurcation.
This measures how large the heterogeneity can be before the Turing base state ceases to exist. The results of the parameter scans reveal strong correlations with existing, fundamental theory from the dispersion relation. Further research into a clear link between these theories is needed.

For small domain lengths, it becomes even more difficult to distinguish between patterned and non-patterned states. This is because the wavelength of some patterns is often similar to the length scale of the heterogeneity. The new definition allows for this distinction to be made, so systems with a small domain length can be analysed. This new distinction allowed us to analyse how the critical domain length changes for heterogeneous RD systems. As a proof of concept of how the new definition could be applied to a new problem, we numerically determined the critical domain length for a range of heterogeneity sizes and average production rates, for both the Schnakenberg system and a Gierer-Meinhardt system. In some cases, however, the method we used to find the critical domain length failed. It is possible that there are discontinuities in the critical domain length caused by a decoupling in the domain. The method should be further developed to account for this, in an attempt to resolve these issues.

References
[1] J. F. G. Auchmuty and G. Nicolis, Bifurcation analysis of nonlinear reaction-diffusion equations—I. Evolution equations and the steady state solutions, Bulletin of Mathematical Biology, 37 (1975), pp. 323–365, https://doi.org/10.1007/bf02459519.
[2] H. Brezis, Functional Analysis, Sobolev Spaces and Partial Differential Equations, Springer New York, 2011, https://doi.org/10.1007/978-0-387-70914-7.
[3] S. N. Chow and J. K.
Hale, Methods of bifurcation theory, Grundlehren der mathematischen Wissenschaften, Springer, New York, NY, Nov. 2011.
[4] R. A. V. Gorder, Pattern formation from spatially heterogeneous reaction–diffusion systems, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 379 (2021), https://doi.org/10.1098/rsta.2021.0001.
[5] M. Kozák, E. A. Gaffney, and V. Klika, Pattern formation in reaction-diffusion systems with piecewise kinetic modulation: An example study of heterogeneous kinetics, Phys. Rev. E, 100 (2019), p. 042220, https://doi.org/10.1103/PhysRevE.100.042220.
[6] A. L. Krause, V. Klika, T. E. Woolley, and E. A. Gaffney, From one pattern into another: analysis of Turing patterns in heterogeneous domains via WKBJ, Journal of The Royal Society Interface, 17 (2020), p. 20190621, https://doi.org/10.1098/rsif.2019.0621.
[7] B. A. Lawson and M. B. Flegg, A mathematical model for the induction of the mammalian ureteric bud, Journal of Theoretical Biology, 394 (2016), pp. 43–56, https://doi.org/10.1016/j.jtbi.2015.12.025.
[8] V. Méndez, S. Fedotov, and W. Horsthemke, Reaction-transport systems: Mesoscopic foundations, fronts, and spatial instabilities, 2010.
[9] K. Page, P. K. Maini, and N. A. Monk, Pattern formation in spatially heterogeneous Turing reaction–diffusion models, Physica D: Nonlinear Phenomena, 181 (2003), pp. 80–101, https://doi.org/10.1016/S0167-2789(03)00068-X.
[10] S. T. A. Pickett and M. L. Cadenasso, Landscape ecology: Spatial heterogeneity in ecological systems, Science, 269 (1995), pp. 331–334, https://doi.org/10.1126/science.269.5222.331.
[11] R. Sheth, L. Marcon, M. F. Bastida, M. Junco, L. Quintana, R. Dahn, M. Kmita, J. Sharpe, and M. A.
Ros, Hox genes regulate digit patterning by controlling the wavelength of a Turing-type mechanism, Science, 338 (2012), pp. 1476–1480, https://doi.org/10.1126/science.1226804.
[12] G.-Q. Sun, M. Jusup, Z. Jin, Y. Wang, and Z. Wang, Pattern transitions in spatial epidemics: Mechanisms and emergent properties, Physics of Life Reviews, 19 (2016), pp. 43–73, https://doi.org/10.1016/j.plrev.2016.08.002.
[13] U. Timm and A. Okubo, Diffusion-driven instability in a predator-prey system with time-varying diffusivities, Journal of Mathematical Biology, 30 (1992), pp. 307–320, https://doi.org/10.1007/bf00176153.
[14] A. M. Turing, The chemical basis of morphogenesis, Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 237 (1952), pp. 37–72, https://doi.org/10.1098/rstb.1952.0012.
[15] H. Uecker, Numerical Continuation and Bifurcation in Nonlinear PDEs, Society for Industrial and Applied Mathematics, Jan. 2021, https://doi.org/10.1137/1.9781611976618.

diff --git a/n9E_T4oBgHgl3EQf8Byb/content/tmp_files/load_file.txt b/n9E_T4oBgHgl3EQf8Byb/content/tmp_files/load_file.txt
new file mode 100644
index 0000000000000000000000000000000000000000..632049379c2fd028c1c3062c52d8db053c1aedf7
--- /dev/null
+++ b/n9E_T4oBgHgl3EQf8Byb/content/tmp_files/load_file.txt
@@ -0,0 +1,843 @@
filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf,len=842
Turing pattern or system heterogeneity? A numerical continuation approach to assessing the role of Turing instabilities in heterogeneous reaction-diffusion systems
Jacob C.
Vandenberg∗ and Mark B. Flegg†

January 23, 2023

Abstract
Turing patterns in reaction-diffusion (RD) systems have classically been studied only in RD systems which do not explicitly depend on independent variables such as space. In practice, many systems for which Turing patterning is important are not homogeneous with ideal boundary conditions. In heterogeneous systems with stable steady states, the steady states are also necessarily heterogeneous, which is problematic for applying the classical analysis. Whilst there has been some work done to extend Turing analysis to some heterogeneous systems, for many systems it is still difficult to determine if a stable patterned state is driven purely by system heterogeneity or if a Turing instability is playing a role. In this work, we try to define a framework which uses numerical continuation to map heterogeneous RD systems onto a sensible nearby homogeneous system. This framework may be used for discussing the role of Turing instabilities in establishing patterns in heterogeneous RD systems.
We study the Schnakenberg and Gierer-Meinhardt models with spatially heterogeneous production as test problems. It is shown that for sufficiently large system heterogeneity (large-amplitude spatial variations in morphogen production) it is possible that Turing-patterned and base states become coincident and therefore impossible to distinguish. Other exotic behaviour is also shown to be possible. We also study a novel scenario in which morphogen is produced locally at levels that could support Turing patterning, but on intervals/patches which are on the scale of classical critical domain lengths. Without classical domain boundaries, Turing patterns are allowed to bleed through; an effect noted by other authors. In this case, this phenomenon effectively changes the critical domain length.
Indeed, we even note that this phenomenon may also effectively couple local patches together and drive instability in this way.

∗School of Mathematical Sciences, Monash University, Clayton, Victoria 3800, Australia.
†School of Mathematical Sciences, Monash University, Clayton, Victoria 3800, Australia.

arXiv:2301.08373v1 [math.AP] 20 Jan 2023

1 Introduction

The reaction-diffusion (RD) equation is a nonlinear partial differential equation which exhibits extraordinarily diverse behaviour, observed particularly in the life sciences [13, 12, 10]. It models the concentrations of different species in time as they interact whilst diffusing in space relative to each other. The species of the system could refer to a chemical species, biological species or ecological species, amongst other possibilities [8].
Under certain conditions, solutions to the RD equation can have an instability which is "driven by diffusion". This is called a Turing instability, which is usually defined as follows. Turing instabilities occur when an RD system has a spatially-uniform steady state which is unstable in the presence of diffusion, but stable in the absence of diffusion. Alan Turing's seminal paper analyses Turing instabilities as a mechanism for explaining the emergence of spatial heterogeneity in diffuse biological chemical systems [14]. The reason Turing instabilities can explain this onset of heterogeneity is that they typically produce Turing patterns. Turing patterns are stable solutions to the RD equation which have large spatial oscillations, and are stationary in time. Usually diffusion has the effect of "flattening" the solution.
In this case, however, diffusion is what causes the system to deviate away from uniformity. Often, RD models are spatially homogeneous in the sense that the RD PDE does not explicitly contain the spatial variable x (or t). Typically, RD models which exhibit Turing patterning are studied as homogeneous systems to simplify the analysis of the PDE (finding steady states, performing linear stability analysis, demonstrating the potential for patterning, etc.). At the same time, most real-world applications almost certainly contain spatial variation in model parameters. Consider, for example, the patterning and development of digits, kidneys and lungs, where homogeneous models are analysed for the presence of Turing instabilities despite there being obvious spatial heterogeneity in morphogen production rates [7, 11]. Turing patterning in the presence of spatially heterogeneous RD PDEs is not well understood and, surprisingly, has received very little attention in the literature.
Perhaps one of the reasons for this is that Turing analysis of spatially heterogeneous RD PDEs is challenging: it is not even necessarily apparent how Turing instabilities should be defined. To begin, the unstable uniform steady state required for defining the Turing instability does not, by definition, exist for spatially heterogeneous RD PDEs. The analysis by Krause et al. presents a general stability theory for a heterogeneous RD PDE. That paper is, however, limited to cases where heterogeneity varies slowly almost everywhere relative to the domain size [6]. In the paper, Krause et al. define a 'base state' solution which replaces the notion of the uniform steady state which has been 'flattened' by diffusion. The base state, which must be a stationary solution to the PDE, has certain properties.
Importantly, the base state does not have spatial oscillations with periods much smaller than the inhomogeneity in the PDE (it is nice and 'diffused'). Aside from this definition being vague, it is not clear that it should be the case if the PDE contains heterogeneities which vary on the same spatial scale as the Turing patterns for the system. This is because it is not easy to distinguish between patterned and base states if oscillations in the patterned state are on the same spatial scale as the base state. We shall also be adopting the term 'base state', but attempting to find a more general approach to finding it. Another method which has been widely used in the literature is to limit the scope of the study to more specific examples. This includes choosing specific reaction terms such that an exact solution can be computed [9, 1]. At this point, a stability analysis similar to the classical analysis can be performed.
Using a linear reaction term is common [9, 5], but nonlinear reaction terms can also be considered [1]. Truncated Galerkin expansions of the solution have been used to study the stability of heterogeneous problems [4, 5]. These too use specific examples to find base states analytically. No insight is given as to why the solutions that were found should be analogous to the uniform base state in the homogeneous case. In this manuscript, our aim is to investigate a method which may be used to find base states for heterogeneous reaction-diffusion PDEs. The stability of these base states may be used to define Turing patterns. We propose a method for describing base states and apply this method to the canonical Schnakenberg (substrate depletion) system as well as the Gierer-Meinhardt (activator-inhibitor) system. In both of these systems, we allow the production of species to vary in space.
We focus on two main curiosities. The first deals with critical phenomena which place limitations on when a base state may be defined, and the second deals with the onset of critical domain lengths for Turing instabilities in the presence of heterogeneous production.

2 Methods

The classical spatially-homogeneous dimensionless reaction-diffusion system is

    ∂u/∂t = D∇²u + γF(u)   on Ω,     (1)
    ∇u · n = 0             on ∂Ω.    (2)

Here u is a vector containing the concentrations of the model species/chemicals, D is a diagonal matrix of diffusion constants (with D₁₁ = 1 providing a characteristic timescale for nondimensionalisation), and F is a nonlinear vector-valued function describing the possible sources and sinks of, and reactions between, the species. The domain Ω (which has an outward normal vector n) has been scaled through non-dimensionalisation so that the spatial scale of the system relative to that of diffusion is described by the magnitude of γ. A Turing analysis of this system begins by finding the uniform steady state solution u⋆ such that F(u⋆) = 0.
Indeed, this uniform state is a solution to the model because every derivative of u⋆ (a constant) is zero. Subsequently, a Turing pattern is formed when the solution u⋆, which is stable if D = 0, is unstable. The uniform solution u⋆ will be called the base state; in heterogeneous problems it loses its uniformity. This is the natural, diffusion-flattened, state of the system. We can extend the RD model to account for explicit spatial variation:

    ∂u/∂t = div(D(x)∇u) + γF(u, x)   on Ω,     (3)
    ∇u · n = 0                       on ∂Ω.    (4)

If we were to proceed as before, we could take u⋆(x) which satisfies F(u⋆(x), x) = 0 for all x ∈ Ω. However, the diffusion term div(D(x)∇u⋆(x)) is not zero in general, which would mean u⋆(x) is not a steady state solution of Equation (3). Thus, it does not make sense to analyse its stability.
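The classical Turing conditions just described (u⋆ stable when D = 0, destabilised by some spatial mode when diffusion is present) are straightforward to check numerically. The following is a minimal sketch for the Schnakenberg kinetics used later as a test problem; the parameter values a, b, the diffusion ratio d, and γ are illustrative choices of ours, not taken from this paper.

```python
import numpy as np

# Schnakenberg kinetics: f(u, v) = a - u + u^2 v, g(u, v) = b - u^2 v.
# Illustrative parameter choices (a, b, d, gamma are our own assumptions).
a, b, d, gamma = 0.1, 0.9, 10.0, 1.0

# Uniform steady state u* solving F(u*) = 0.
u_s, v_s = a + b, b / (a + b) ** 2

# Jacobian of the reaction kinetics at the steady state.
J = np.array([[-1 + 2 * u_s * v_s, u_s ** 2],
              [-2 * u_s * v_s, -u_s ** 2]])
D = np.diag([1.0, d])

def growth_rate(k):
    """Largest real part of the eigenvalues of A(k) = -k^2 D + gamma J."""
    A = -k ** 2 * D + gamma * J
    return np.linalg.eigvals(A).real.max()

ks = np.linspace(0.0, 2.0, 400)
rates = np.array([growth_rate(k) for k in ks])

print("stable without diffusion:", growth_rate(0.0) < 0)  # k = 0 <=> D absent
print("unstable for some k > 0 :", rates.max() > 0)       # Turing instability
```

The map k ↦ max_j Re λ_j(A(k)) traces the dispersion relation; a band of wavenumbers with positive growth rate is the signature of a diffusion-driven (Turing) instability.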
So, in order to extend the definition of a Turing instability, we need to find a different base state u⋆(x) which satisfies the steady state problem for Equations (3) and (4) but which should also not be called a Turing pattern. Whilst a 'pattern' is often defined as any stable stationary heterogeneous solution, we reserve the definition of pattern in this manuscript for any stationary heterogeneous state separate to the base state. As it stands, there is no conventional way of finding, or defining more generally, what this base state is. The only thing that can be said about the base state u⋆(x) is that it should be somehow sensibly analogous to the uniform base state described for the homogeneous system. We will narrow the scope of our efforts to the case where heterogeneity is in the reaction term only. Specifically, we look at systems with heterogeneous production rates of each species, as we believe this situation is ubiquitous in biological applications, where morphogen is differentially expressed in space but reactions between morphogens are autonomous, as one might expect.
Thus, the form of the RD equation that we will be analysing is as follows; it splits F into an autonomous, homogeneous component ˆF and a heterogeneous component G. How this partition should be done appropriately and uniquely is discussed here, outlining the approach that we have taken; we justify this approach in Section 2.1.

    ∂u/∂t = D∇²u + γ( ˆF(u) + G(u, x) )   on Ω,     (5)
    ∇u · n = 0                            on ∂Ω.    (6)

To analyse this system, we will find it useful to 'grow' the heterogeneous components by means of a parameter θ, defining the parameterised problem

    ∂u/∂t = D∇²u + γ( ˆF(u) + θG(u, x) )   on Ω,     (7)
    ∇u · n = 0                             on ∂Ω.    (8)

Importantly, the parameter θ in these models describes the amplitude of the heterogeneity in the system: when θ → 0 a classical homogeneous system is recovered, and when θ → 1 the full heterogeneous problem is recovered. Since θ may be thought of as the amplitude of the heterogeneity and easily absorbed into G, it is also possible to think of θ growing beyond 1, simply forming part of a growing G in Equations (5) and (6).
Whilst there is freedom in the choice of the partition of F in Equation (3) into G and ˆF in Equation (5), we find it appropriate to uniquely define G and ˆF for a given F in the following way:

    ˆF = (1/|Ω|) ∫_Ω F(u, x) dx,    (9)
    G = F − ˆF.                     (10)

This is a convenient choice when the reaction term can be decomposed into a spatially-independent coupling term and a spatially-dependent source term, resulting in F(u, x) = ˆF(u) + G(x), where the average value of G is 0. Furthermore, by using this decomposition for F, we ensure that for each θ the parameterised system (Equation (7)) adheres to the same decomposition rules, whilst at the same time capturing the autonomous reactions in F within ˆF; it is often these terms which are the characteristically important ingredients in the Turing behaviour of the system (noting that F → ˆF as θ → 0).

2.1 Base states

In this section, we attempt to redefine the base state of a heterogeneous reaction-diffusion system as a parameterised continuation of a nearby homogeneous system.
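For a concrete instance of the decomposition in Equations (9) and (10), consider Schnakenberg kinetics with a spatially varying production rate a(x). The sketch below (the grid, the profiles a(x) and b(x), and the parameter values are illustrative assumptions of ours) computes ˆF via the spatial average of F and checks that the resulting G has zero spatial average:

```python
import numpy as np

# 1D domain [0, 1]; Schnakenberg kinetics with heterogeneous production a(x).
x = np.linspace(0.0, 1.0, 201)
a = 0.1 * (1.0 + 0.5 * np.sin(2 * np.pi * x))  # heterogeneous production of u
b = 0.9 * np.ones_like(x)                      # homogeneous production of v

def spatial_average(f, x):
    """(1/|Omega|) * integral of f over Omega, composite trapezoid rule (Eq. (9))."""
    dx = np.diff(x)
    return np.sum(0.5 * (f[..., 1:] + f[..., :-1]) * dx, axis=-1) / (x[-1] - x[0])

a_hat, b_hat = spatial_average(a, x), spatial_average(b, x)

def F(u, v):
    """Full heterogeneous reaction term F(u, x) on the grid."""
    return np.array([a - u + u**2 * v, b - u**2 * v])

def F_hat(u, v):
    """Homogeneous part: production rates replaced by their spatial averages."""
    return np.array([a_hat - u + u**2 * v, b_hat - u**2 * v])

# Eq. (10): G = F - F_hat, which here reduces to the zero-mean source G(x).
G = np.array([a - a_hat, b - b_hat])

print("average production (a_hat, b_hat):", a_hat, b_hat)
print("G has zero spatial average:", np.allclose(spatial_average(G, x), 0.0))
```

Because ˆF absorbs the averaged production, the autonomous coupling terms that drive the Turing behaviour stay inside ˆF, exactly as the decomposition intends.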
A necessary condition on the base state of a reaction-diffusion system (Equations (7) and (8)) is that it must be a stationary solution, against which stability can later be checked. The base state of Equations (7) and (8) shall be labelled u⋆_θ(x) (and sometimes u⋆_θ(x; θ), to highlight the dependence on the parameter θ). We have that u⋆_θ(x) is a solution to

    D∇²u + γ( ˆF(u) + θG(u, x) ) = 0   on Ω,     (11)
    ∇u · n = 0                         on ∂Ω.    (12)

Since the base state should become the uniform steady state as θ → 0, we have that u⋆_0 ∈ R^Ns (where Ns is the number of species in the model) is constant in x and ˆF(u⋆_0) = 0. It makes sense to represent Equations (11) and (12) as the single equation

    Φ(u, x, x̄; θ) = ( D∇²u(x) + γ( ˆF(u(x)) + θG(u(x), x) ),  ∇u(x̄) · n )ᵀ = 0,    (13)

where x ∈ Ω and x̄ ∈ ∂Ω. In order to label a solution to Equation (13) as a base solution, we will further require that it varies continuously with respect to θ.
In this way, the base states of the system are tied, via continuation of the parameter θ, to the base state u⋆_0 (the uniform steady state) of an associated homogeneous system (as θ → 0). To ensure the existence of u⋆_θ for some θ ≠ 0, we can find some η > 0 and u⋆_θ : (−η, η) → C²(Ω, R^Ns) such that u⋆_θ uniquely solves Equation (13) and u⋆_0 solves ˆF(u⋆_0) = 0. The value η provides a region in which every −η ≤ θ ≤ η is guaranteed to have a base state solution. Outside of (−η, η), the amplitude of the heterogeneity may become so large that it is not possible to draw a continuation from u⋆_0. We define the Jacobian

    J̄_θ(u, x) = ∂Φ/∂u = ( J_θ(u, x), n · ∇ )ᵀ                        (14)
               = ( D∇² + γ( J_ˆF(u) + θ J_G(u, x) ),  n · ∇ )ᵀ.      (15)

Here J_ˆF(u) and J_G(u, x) are the Jacobians of ˆF and G respectively. For continuity and uniqueness of u⋆_θ in θ at θ = 0 by the Implicit Function Theorem (IFT) [3], we require J̄_0 to be invertible at u⋆_0, and therefore we require that J_ˆF(u⋆_0) is nonsingular.
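In practice, the branch u⋆_θ can be traced by natural-parameter continuation: discretise the stationary problem (Equations (11) and (12)), solve it by Newton's method at a small θ starting from the known uniform state, and step θ towards 1, warm-starting each solve from the previous base state. Below is a minimal 1D sketch with Schnakenberg kinetics and a zero-mean heterogeneity g(x); the kinetics, parameters, grid and heterogeneity profile are all illustrative assumptions of ours, not the paper's actual numerical setup.

```python
import numpy as np

# Natural-parameter continuation of the base state u*_theta in theta.
n, L = 101, 1.0
x = np.linspace(0.0, L, n)
h = x[1] - x[0]
a0, b0, d, gamma = 0.1, 0.9, 10.0, 1.0
g = 0.05 * np.sin(2 * np.pi * x)       # zero-mean heterogeneity in u-production

def laplacian(w):
    """Second difference with homogeneous Neumann (no-flux) boundaries."""
    lap = np.empty_like(w)
    lap[1:-1] = (w[2:] - 2 * w[1:-1] + w[:-2]) / h**2
    lap[0] = 2 * (w[1] - w[0]) / h**2  # ghost-point Neumann condition
    lap[-1] = 2 * (w[-2] - w[-1]) / h**2
    return lap

def residual(U, theta):
    """Discrete steady-state residual of Eqs. (11)-(12) for two species."""
    u, v = U[:n], U[n:]
    ru = laplacian(u) + gamma * (a0 + theta * g - u + u**2 * v)
    rv = d * laplacian(v) + gamma * (b0 - u**2 * v)
    return np.concatenate([ru, rv])

def newton(U, theta, tol=1e-10, maxit=50):
    for _ in range(maxit):
        r = residual(U, theta)
        if np.linalg.norm(r, np.inf) < tol:
            return U
        # Finite-difference Jacobian (dense is fine for a sketch this small).
        J = np.empty((2 * n, 2 * n))
        eps = 1e-7
        for j in range(2 * n):
            Up = U.copy(); Up[j] += eps
            J[:, j] = (residual(Up, theta) - r) / eps
        U = U - np.linalg.solve(J, r)
    raise RuntimeError("Newton failed to converge")

# theta = 0: the uniform steady state of the homogeneous system.
u_star = np.concatenate([np.full(n, a0 + b0), np.full(n, b0 / (a0 + b0)**2)])
for theta in np.linspace(0.0, 1.0, 11)[1:]:
    u_star = newton(u_star, theta)     # continuation step in theta

print("max deviation from the uniform state:",
      np.abs(u_star[:n] - (a0 + b0)).max())
```

A singular Jacobian encountered at some θ signals a fold in the solution branch, beyond which this naive continuation cannot proceed.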
Singularity in J̄_θ allows for the possibility that θ may become too large in magnitude for there to exist a defined base state u⋆_θ. It is unclear in general how large a heterogeneity (θ) can become before the base state either stops existing or is not unique, or indeed whether the base state is bound in this way at all. Defining the base state outside of some potential maximum range θ− < θ < θ+ is problematic and, in our framework, not (yet) possible. The values of θ− and θ+ coincide with folds in the solution to Equations (11) and (12), characterised by singularities in J̄_{θ−} and J̄_{θ+}.

Definition 1 (Spatially-dependent Turing base state). For each u₀ ∈ R^Ns such that ˆF(u₀) = 0, we define the associated spatially-dependent Turing base state (or just base state) for Equation (5) as follows.
If there exists u⋆_θ(x; θ) ∈ C¹(Ω × (0, 1], R^Ns) which is a steady state solution to Equation (7) for all θ ∈ (0, 1] and where u⋆_0(x; 0) = u₀, then u⋆_1(x) is a Turing base state of the spatially-dependent RD system (Equations (5) and (6)) associated with the uniform base state u⋆_0(x).

Defining the base state in this way is a natural extension of the classical homogeneous case, since the heterogeneous base state should not deviate too far from the uniform one when the amplitude of the heterogeneity in the system is small. In other words, if the heterogeneity in the system is small, we would expect the base state to be almost 'flat' from diffusion. As an important note, we have chosen to define ˆF and G using Equations (9) and (10); in doing so, we ensure that all autonomous terms in F (for example, reaction kinetics between species which drive Turing instabilities) are encapsulated in ˆF.
Clearly, it is possible to simply define G = F and ˆF = 0. With this choice, we immediately see that J_ˆF(u⋆_0) is singular and continuation to the heterogeneous base state is impossible. In the case where ˆF ≠ 0, we have

    J̄_0(u⋆_0, x) = ∂Φ/∂u |_{u=u⋆_0, θ=0} = ( D∇² + γJ_ˆF(u⋆_0),  n · ∇ )ᵀ.

We apply this to c_j ŵ_m, where c_j ∈ R^Ns is the j-th eigenvector of A_m = −D k_m² + γJ_ˆF(u⋆_0) and ŵ_m is the eigenfunction solving ∇²ŵ_m = −k_m² ŵ_m on Ω with ∇ŵ_m · n = 0 on ∂Ω. This gives us

    ( D∇²(c_j ŵ_m) + γJ_ˆF c_j ŵ_m,  n · ∇(c_j ŵ_m) )ᵀ = ( A_m c_j ŵ_m, 0 )ᵀ = ( λ_j(A_m), 0 )ᵀ c_j ŵ_m,

where λ_j(A_m) is the eigenvalue associated with the eigenvector c_j. This eigenvalue determines the stability of the eigenvector c_j ŵ_m. So if any eigenvector c_j has a corresponding λ_j(A_m) = 0, the operator J̄_0 will not be invertible and the conditions for the IFT would not be satisfied.
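This invertibility condition can be checked mode by mode. On a 1D interval of length L with no-flux boundaries, the Neumann eigenfunctions have wavenumbers k_m = mπ/L, so invertibility of J̄_0 amounts to A_m having no zero eigenvalue for any m. A minimal sketch with Schnakenberg kinetics (the parameter values and the 1D specialisation are illustrative assumptions of ours):

```python
import numpy as np

# Check the IFT condition at theta = 0: no eigenvalue lambda_j(A_m) of
# A_m = -D k_m^2 + gamma J_Fhat(u*_0) may vanish for any Neumann mode k_m.
a, b, d, gamma, L = 0.1, 0.9, 10.0, 1.0, 1.0
u_s, v_s = a + b, b / (a + b) ** 2
J = np.array([[-1 + 2 * u_s * v_s, u_s ** 2],
              [-2 * u_s * v_s, -u_s ** 2]])   # J_Fhat(u*_0)
D = np.diag([1.0, d])

min_abs_eig = np.inf
for m in range(50):                            # first 50 Neumann modes
    k = m * np.pi / L
    A_m = -k ** 2 * D + gamma * J
    min_abs_eig = min(min_abs_eig, np.abs(np.linalg.eigvals(A_m)).min())

print("min |lambda_j(A_m)| over modes:", min_abs_eig)
print("J0_bar invertible on these modes:", min_abs_eig > 1e-8)
```

If some λ_j(A_m) sits at zero, the continuation from the uniform state cannot start unless the solvability condition discussed next is satisfied.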
The continuation of base states from θ = 0 cannot proceed unless G(u⋆_0, x) is orthogonal to every eigenvector in the null space of the adjoint operator (∂Φ/∂u)∗. That is,

    ∫_Ω G(u⋆_0, x)ᵀ v dx = 0   for all v ∈ null( D∇² + γJ_ˆFᵀ ).

This is a result of Fredholm's alternative [2]. This solvability condition is not guaranteed, so for any chosen parameterisation there may still be cases where continuation is impossible about θ = 0. We have chosen to multiply the heterogeneity G by a parameter θ. Of course, this parameterisation of heterogeneity (θ = 1) from the associated homogeneous system (θ = 0) is not unique. In Equations (7) and (8) we increase the size of the heterogeneity linearly with the parameter θ.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content=' A more general parameterisation could be ∂u ∂t = D∇2u + γˆF(u) + γG(u, x;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content=' θ), (16) provided that ˆF(u) + G(u, x;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content=' 1) ≡ F(u, x), and G(u, x;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content=' 0) ≡ 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content=' The IFT only provides information about the existence and uniqueness of the base state solution branch locally.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content=' The existence and uniqueness of the base state solution at θ = 1 is unknown a priori.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content=' In particular, it is unknown whether changing the parameterisation of G will lead to a change in the base state or the existence of the base state.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content=' For this, a global homotopy result would be required.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content=' The analysis by Krause et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content=' gives general stability theory for a large perturbation in the limit as γ approaches ∞ [6].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content=' However, little attention is given on redefining the base state for the Turing instability.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content=' The analysis assumes that 7 a steady state solution to the full RD equation (Equations (3) and (4)) exists, and that this solution has certain properties.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content=' The first property is that the solution does not have spatial oscillations on the scale O(1/ϵ).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content=' This is an a posteriori assumption, since no method is provided for determining whether the base state u⋆(x) has O(1/ϵ) oscillations without first finding u⋆(x).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content=' Since the heterogeneous RD equation is nonlinear in general, finding such a solution is non-trivial.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content=' Finally, it is assumed that F satisfies the boundary conditions ∂u ∂x = 0 at x = 0, 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content=' 2.' 
2.2 Case studies

In our numerical investigation, we focus attention on two popular models: the Schnakenberg model and the Gierer-Meinhardt model. In their standard homogeneous forms, the Schnakenberg model is widely studied as a substrate-depletion Turing system, whilst the Gierer-Meinhardt model is a typical activator-inhibitor Turing system. In both of these cases we consider only one-dimensional domains Ω ∈ (0, 1) on which to solve the PDEs, and on the boundaries each of the species has no-flux conditions.

2.2.1 Schnakenberg model

The parameterised heterogeneous Schnakenberg model we will be using is as follows:

∂u/∂t = ∇²u + γ( −uv² + β(x) ), (17)
∂v/∂t = d∇²v + γ( uv² − v + η(x) ). (18)

Here d represents the relative diffusion of the activator v compared to that of the substrate u, whilst β and η are spatially dependent production rates. We will focus on a particular form of β and η in which we parameterise the scale of both the amplitude and frequency of the production heterogeneity:

β(x) = β₀ (1 + θ cos(nπx)), (19)
η(x) = 1 − β(x). (20)

In this way, at each position a combined dimensionless activator/substrate production of 1 is assumed. The parameter 0 ≤ β₀ ≤ 1 describes the average proportion of this production specific to the substrate, and the parameter 0 ≤ θ ≤ 1 describes the degree of redistribution of the relative production into n periods of peaks and troughs on the domain Ω.
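The heterogeneous production rates and reaction terms above can be written down directly; a minimal sketch (function names are our own, and the β₀, θ values are illustrative):

```python
import numpy as np

def beta(x, beta0=0.8, theta=0.5, n=1):
    """Substrate production rate beta(x) = beta0 (1 + theta cos(n pi x)), Eq. (19)."""
    return beta0 * (1.0 + theta * np.cos(n * np.pi * x))

def eta(x, **kw):
    """Activator production rate eta(x) = 1 - beta(x), Eq. (20)."""
    return 1.0 - beta(x, **kw)

def schnakenberg_kinetics(u, v, x, gamma=1.0, **kw):
    """Reaction terms of Eqs. (17)-(18), without the diffusion operators."""
    fu = gamma * (-u * v**2 + beta(x, **kw))
    fv = gamma * (u * v**2 - v + eta(x, **kw))
    return fu, fv

x = np.linspace(0.0, 1.0, 101)
# Combined dimensionless production is 1 at every point, for any theta and n.
print(np.allclose(beta(x) + eta(x), 1.0))  # -> True
```

The final check confirms the constraint built into Eqs. (19)-(20): the combined production β(x) + η(x) is identically 1, so θ only redistributes production between the two species.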
2.2.2 Gierer-Meinhardt model

The parameterised heterogeneous Gierer-Meinhardt model is given as follows:

∂u/∂t = ∇²u + γ( u²/v − bu + a(x) ), (21)
∂v/∂t = d∇²v + γ( u² − v ). (22)

This model is controlled by the heterogeneous production rate a(x) of the activator u. We will use a periodic heterogeneity of the form

a(x) = a₀ (1 + θ cos(nπx)),

where a₀ ∈ R is the average production rate.

2.3 Numerical methods

To generate numerical results we use the numerical continuation method presented by Uecker [15] to find solutions of Equations (11) and (12); starting at u⋆₀, we find base states for the heterogeneous problem. We begin with the statement that Φ(u, x, x̄; θ) = 0 (u must be a solution to Equations (11) and (12)). Differentiating with respect to θ,

0 = (∂Φ/∂u)(∂u/∂θ) + ∂Φ/∂θ.

So long as ∂Φ/∂u is nonsingular, ∂u/∂θ can be estimated.
As such, the base states (and other steady states of the reaction-diffusion system) can easily be found by starting at θ = 0 and incrementing θ using a forward Euler approach:

u_{θ+Δθ} = u_θ + (∂u_θ/∂θ) Δθ (23)
         = u_θ − (∂Φ_θ/∂u)⁻¹ (∂Φ_θ/∂θ) Δθ, (24)

where subscripts indicate the value of θ. The solution generated by Equation (24) is then corrected to reduce error. This is done by setting u_{θ+Δθ} as the initial seed of a Newton solver for the problem Φ = 0. We did not find it necessary to use more advanced techniques for increasing θ. It is possible to skip the approximate update in Equation (24) and simply use a nonlinear solver on Φ = 0 in the vicinity of u_θ. This is, however, not a good idea, since it significantly increases the computational time in the nonlinear solver and can sometimes even result in the nonlinear solver finding a different steady state solution (of which there may be many). In any case, we make use of the pde2path package, which implements this routine.
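The predictor-corrector loop of Eqs. (23)-(24) can be sketched on a toy finite-dimensional problem. Here Φ is an illustrative scalar stand-in for the discretised steady-state residual, not the pde2path implementation:

```python
# Toy analogue of the continuation in Eqs. (23)-(24): track the root of
# Phi(u, theta) = u^3 + u - theta from theta = 0 (root u = 0) to theta = 1.

def Phi(u, theta):
    return u**3 + u - theta

def dPhi_du(u, theta):
    return 3.0 * u**2 + 1.0

def dPhi_dtheta(u, theta):
    return -1.0

def continue_branch(u, theta0=0.0, theta1=1.0, dtheta=0.05, newton_steps=5):
    theta = theta0
    while theta < theta1 - 1e-12:
        # Euler predictor, Eq. (24): u <- u - (dPhi/du)^{-1} (dPhi/dtheta) dtheta
        u = u - dPhi_dtheta(u, theta) / dPhi_du(u, theta) * dtheta
        theta += dtheta
        # Newton corrector at the new theta, solving Phi(u, theta) = 0
        for _ in range(newton_steps):
            u = u - Phi(u, theta) / dPhi_du(u, theta)
    return u

u1 = continue_branch(0.0)
print(round(u1, 6))  # root of u^3 + u = 1, approximately 0.682328
```

As in the text, skipping the predictor step would still work here, but on a stiff PDE discretisation the Newton solver would then need many more iterations and could jump to a different steady state when several coexist.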
Finally, pde2path determines stability by looking at the sign of the largest real component of the eigenvalues of the LHS of the PDE.

In the next section we explore numerical results which give insight into the behaviour of Turing systems with heterogeneous production rates. We first look at the characteristic behaviour of base states (Section 3.1). Noting that base states often terminate at a sufficiently large value of θ with a fold bifurcation, it is clear that for some problems, if the heterogeneity is large enough, a base state is not defined under our definition. We therefore investigate more thoroughly what determines whether a base state exists, i.e. how large θ can be before a fold bifurcation is reached (Section 3.2). Lastly, we examine how heterogeneous production can affect the critical domain lengths required for Turing patterning (Section 3.3).

3 Numerical results and discussion

3.1 Continuation of steady states

The first numerical results illustrate the behaviour of the Schnakenberg Turing system described in Section 2.2.1 as the heterogeneous production term is increased in amplitude, by tracing the base state and patterned states through numerical continuation of the amplitude parameter θ. We will first look at some example cases to illustrate the types of branches that can be found. For all of the following results we use the parameters d = 1/40, β₀ = 0.8 and n = 1 unless otherwise stated. Later we will show results for the Gierer-Meinhardt model of Section 2.2.2, where we will use the default parameters d = 20, b = 1 and a₀ = 0.1 unless otherwise stated. When θ = 0, these parameters are known to give a Turing instability in the base state. The parameter γ, which encodes for the domain length amongst other things, will be varied between examples to show how the base state behaves as it varies. In order to visualise the steady state solution branches, we will plot the maximum value on the domain of only the variable u against the parameter θ. This metric has been chosen arbitrarily in order to distinguish between solutions.
It is important to remember when interpreting these bifurcation plots that the branches are only a projection of the infinite-dimensional function space onto a single scalar value for plotting purposes. Importantly, this means that when branches intersect at non-smooth intersections, it is not possible that this is a continuation. Instead, at the point of intersection each branch corresponds to completely unrelated functions (other than the fact that they share a common maximal value of u).

In many cases, we observe that the continuation in θ can generate base states indefinitely. We can also observe two main bifurcation events on the branch containing the base state. The first of these is a fold at which the base state and the stable patterned state merge. The second is an example of a fold terminating the base state, but where the Turing patterned state never bifurcates from the base state (they are, instead, perfectly disconnected).
By saying 'patterned state' we are implying that there is a branch corresponding to a non-homogeneous but also stable steady state (indicated in blue in each figure). Finally, we demonstrate some exotic behaviour of the steady states under some conditions.

Base state with no limitation

In the most simple case, starting with u⋆₀ and growing the heterogeneous term by increasing θ in Section 2.2.1, no folds were found in increasing θ from 0 to 1. It is important to note that this does not mean that the base states will extend for an arbitrarily large θ. For the Schnakenberg system in Section 2.2.1, we find that this often occurs for large γ, and in Fig. 1 we use the value γ = 900. This corresponds to a very large domain in relation to the expected wavelength of any Turing patterns. Our value of γ corresponds to a value of ϵ ≈ 1.1 × 10⁻³ in the paper by Krause et al. [6]. We find that in this case the base state exists by numerical continuation, and furthermore that it is approximately equal to the steady state obtained when diffusion is neglected as small. This is trivial, because it is clear from Equations (11) and (12) that, unless θ is large on the order of γ, for large γ we simply have to leading order that u⋆_θ solves F(u) + θG(u, x) = 0.

Base state fold connected to a patterned state

We observe different behaviour in the base state for non-large γ.
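For the Schnakenberg kinetics this leading-order problem can be solved pointwise in closed form: adding the reaction terms of Eqs. (17)-(18) and using β(x) + η(x) = 1 gives −v + 1 = 0, hence v⋆ = 1 and u⋆ = β(x). A numerical check of this observation (our own sketch; the θ value is illustrative):

```python
import numpy as np
from scipy.optimize import fsolve

beta0, theta, n = 0.8, 0.4, 1

def beta(x):
    # Heterogeneous substrate production, Eq. (19).
    return beta0 * (1.0 + theta * np.cos(n * np.pi * x))

def kinetics(w, x):
    # Schnakenberg reaction terms at a point x, diffusion neglected.
    u, v = w
    return [-u * v**2 + beta(x), u * v**2 - v + (1.0 - beta(x))]

# Adding the two equations gives -v + 1 = 0, so v = 1 and then u = beta(x):
# the leading-order base state simply tracks the heterogeneous production rate.
for x in [0.0, 0.25, 0.5, 1.0]:
    u, v = fsolve(kinetics, [1.0, 1.0], args=(x,))
    assert np.isclose(v, 1.0) and np.isclose(u, beta(x))
print("leading-order base state: u = beta(x), v = 1")
```

This makes concrete why, for large γ, the continued base state in Fig. 1 inherits the spatial profile of the production heterogeneity itself.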
If γ is small, but not so small that Turing patterns cannot be observed in the homogeneous Schnakenberg system (due to the domain size being less than the necessary critical domain length), then we observe a critical fold in the base state solution. In Fig. 2, we use the value γ = 1. When θ = 0, this corresponds to the case where there is just one unstable wavenumber, corresponding to a Turing pattern with just a half period on the full domain. In this case, the branch for a patterned state merges with the branch of the base state, undergoing a fold bifurcation as seen in Fig. 2. This means that the base state becomes closer and closer to a patterned state until both states are indistinguishable from each other at the fold bifurcation. For heterogeneities with an amplitude θ beyond this fold (shown with a green dot in Fig. 2), we are unable to objectively define a suitable base state, and therefore it becomes ambiguous as to whether or not a 'Turing' pattern is observed in the solution of the reaction-diffusion problem. Indeed, whilst a steady state solution to the reaction-diffusion equation is expected beyond the fold, we do not know where this solution is by numerical continuation from θ = 0 without significant work. That is, there are other missing branches here, and it remains unclear if any of these are reasonable candidates to be defined as a 'base state' at this stage; further work here is needed. In Fig. 2, one can see the stable patterned state but also an unstable patterned state. For θ = 0 there are at least two patterned states. These states appear in the bifurcation diagram as mirrored functions.
Interestingly, if the heterogeneity is inverted in sign (θ ∈ [−1, 0]), continuation shows a mirror image of the bifurcation diagram in Fig. 2.

Figure 1: Schnakenberg system bifurcation diagram (∥u∥∞ against θ) for growing heterogeneity θ ∈ [0, 1]. Parameters used are characteristic of large domains relative to the Turing pattern wavelength (γ = 900), with also β₀ = 0.8 and d = 1/40. When θ = 0, the system solves a classical Turing system where the base state is homogeneous and indicated with an ×. As the heterogeneity θ grows, so does the base state. A number of examples of the spatial distribution of u along the (red) unstable base state u⋆_θ are displayed. In this case, the base state is allowed to grow continuously without a fold. On the other hand, a (blue) stable Turing 'patterned' state branch is also shown with some displayed distributions of u. This is found by solving the full reaction-diffusion equation at θ = 0 and applying the numerical continuation.
Figure 2: Schnakenberg system bifurcation diagram (∥u∥∞ against θ) for growing heterogeneity θ ∈ [0, 1]. Parameters used are characteristic of small domains relative to the Turing pattern wavelength (γ = 1), with also β₀ = 0.8 and d = 1/40. When θ = 0, the system solves a classical Turing system where the base state is homogeneous and indicated with an ×. As the heterogeneity θ grows, so does the base state. A number of examples of the spatial distribution of u along the (red) unstable base state u⋆_θ are displayed. In this case, the base state merges with the stable patterned state at around θ = 0.09. The blue branches are stable patterned states, but only the solid branch can be obtained by continuing through the fold. The dot-dash branch can be found through continuation of a fold in the base state when decreasing θ from the θ = 0 base state.

Base state fold not connected to a patterned state

At intermediate values of γ, more curious behaviour is possible. This is in part because these values permit multi-wavelength heterogeneous steady states. In Fig. 3 we now display the bifurcation diagram for γ = 9 (analogous to a three-fold increase in domain length relative to the example in Fig. 2). The key observation in Fig. 3 is that whilst the base state branch also undergoes a fold bifurcation, the solution branch with which it merges is an unstable heterogeneous steady state (not a stable pattern). This illustrates that the base state branch can merge with another branch which is not a branch of patterned states. Considering Fig. 1, where the base state seemingly continues indefinitely without folds, it is possible that a fold is present in a similar way to how it appears in Fig. 3, but at sufficiently large values of θ. If this is the case, our observations might suggest that as γ gets very large, so too do the values of θ at which base state folding first occurs.

Exotic behaviour

While the previous examples show two branches originating at θ = 0 converging, this does not capture all possibilities. In a more bizarre scenario, we can consider the case where γ = 3.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content='61.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content=' As shown in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content=' 4, the system undergoes many folds before merging with another solution branch which contains θ = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content=' Furthermore, there are stable steady states which are only present for a discrete range of θ values.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content=' To demonstrate the behaviour and the way it closes itself, it was necessary to continue in both the positive and negative θ direction from u⋆ 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content=' 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content='2 Base state existence In order to have a discussion about Turing patterns, it is important for a base state to exist.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content=' It is therefore critical to explore what determines θ+, the maxi- mum size that θ can take before a critical point such as a fold is encountered.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content=' To accomplish this we performed parameter scans on both the Schnakenberg and Gierer-Meinhardt model from Sections 2.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content='1 and 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content=' Our immediate ob- servation from doing these scans is that fold bifurcations are very common.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content=' In particular, we observed more folds when the spatially-dependent source term G(u, x) varies explicitly in space with frequencies similar to that of unstable eigenvectors in the dispersion relation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content=' In Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content=' 5 we look at θ+ for the Schnakenberg model (a) and the Gierer- Meinhardt model (b).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content=' In Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content=' 5 (a) we plot θ+ as the scale parameter γ and the parameter β0 in the Schnakenberg model are varied, whilst in (b) we instead vary the parameter α0 in the Gierer-Meinhardt model.' 
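The θ-continuation behind these scans can be sketched numerically. The following is a minimal illustration, not the paper's implementation: it assumes classical Schnakenberg kinetics f = a − u + u²v, g = b − u²v with hypothetical stand-in parameters a = 0.2, b = 0.8 (the paper's β0 and its exact heterogeneous term G(u, x) are not reproduced here), takes the heterogeneity as an added source θ cos(nπx), discretises on [0, 1] with no-flux ends, and steps θ upward, re-solving with Newton's method until the solve first fails as a crude proxy for hitting a fold in the base-state branch.

```python
import numpy as np

def residual(U, theta, gamma=1.0, a=0.2, b=0.8, d=1/40, n=1):
    """Steady-state residual of the 1D system; a, b are hypothetical stand-ins."""
    N = U.size // 2
    u, v = U[:N], U[N:]
    x = np.linspace(0.0, 1.0, N)
    h = x[1] - x[0]

    def lap(w):                                  # 1D Laplacian, Neumann (no-flux) ends
        L = np.empty_like(w)
        L[1:-1] = (w[2:] - 2.0 * w[1:-1] + w[:-2]) / h**2
        L[0] = 2.0 * (w[1] - w[0]) / h**2
        L[-1] = 2.0 * (w[-2] - w[-1]) / h**2
        return L

    f = a + theta * np.cos(n * np.pi * x) - u + u**2 * v   # assumed heterogeneity
    g = b - u**2 * v
    return np.concatenate([d * lap(u) + gamma * f, lap(v) + gamma * g])

def newton(U, theta, tol=1e-10, maxit=30):
    """Newton's method with a finite-difference Jacobian (fine for a sketch)."""
    for _ in range(maxit):
        F = residual(U, theta)
        if np.linalg.norm(F) < tol:
            return U, True
        J = np.zeros((U.size, U.size))
        eps = 1e-7
        for i in range(U.size):
            Up = U.copy()
            Up[i] += eps
            J[:, i] = (residual(Up, theta) - F) / eps
        U = U - np.linalg.solve(J, F)
    return U, False

def theta_plus(dtheta=0.01, theta_max=0.3, N=41, a=0.2, b=0.8):
    """Step theta upward from the homogeneous state until Newton first fails,
    a crude proxy for the fold-detection used in the parameter scans."""
    U = np.concatenate([np.full(N, a + b), np.full(N, b / (a + b) ** 2)])
    theta = 0.0
    while theta < theta_max:
        U_try, ok = newton(U.copy(), theta + dtheta)
        if not ok:
            return theta                         # last theta that converged
        U, theta = U_try, theta + dtheta
    return theta
```

A production code would use a sparse analytic Jacobian and proper pseudo-arclength continuation with fold detection; `theta_plus` merely returns the last θ reached before the naive solve breaks down.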
Figure 3: Schnakenberg system bifurcation diagram (∥u∥∞ against θ) for growing heterogeneity θ ∈ [0, 1]. Parameters used are characteristic of intermediate domains relative to the Turing pattern wavelength (γ = 9), with β0 = 0.8 and d = 1/40. When θ = 0, the system solves a classical Turing system where the base state is homogeneous and indicated with an ×. As the heterogeneity θ grows, so does the base state. A number of examples of the spatial distribution of u along the (red) unstable base state u⋆θ are displayed. In this case, the base state merges with an unstable heterogeneous steady state at around θ = 0.12. The blue branch is a stable patterned state, but the dot-dash nature of this branch indicates that it is not obtained by continuation past a fold from the steady state; instead it is found by solving the reaction-diffusion equation with θ = 0 until steady state and using continuation from there.

Figure 4: Schnakenberg system bifurcation diagram (∥u∥∞ against θ) for growing heterogeneity θ ∈ [−1, 1]. Parameters used are characteristic of narrowly defined domains relative to the Turing pattern wavelength (γ = 3.61), with β0 = 0.8 and d = 1/40. When θ = 0, the system solves a classical Turing system where the base state is homogeneous and indicated with an ×. As the heterogeneity θ grows, so does the base state. A number of examples of the spatial distribution of u along the (red) unstable base state u⋆θ are displayed. Note that here the base state is only defined between approximately −0.05 and 0.05. By continuing through each fold, we end up back at u⋆0. Interestingly, this closed loop contains three different patterned branches (blue) but no patterned branch on approximately ±(0.03, 0.04). It is expected that the patterned state obtained by solving the reaction-diffusion equation in this regime is not connected here.

In both cases, we have plotted, in red, the curves along which Λm = maxj ℜ(λj(Am)) = 0 for m = 1, 2, 3 (curves left to right). We note that in our test problems we do not have strictly imaginary eigenvalues, so along these curves J̄0 is singular and we expect that θ+ is not finite. For each constant β0 (or α0) we see that Λm = 0 at most twice, because solving Λm = 0 requires solving a quadratic. Between the two values, we find that Λm > 0 and thus the mth mode of the homogeneous problem is unstable. On these curves, J̄0 is singular; as previously established, we expect that continuation is not possible there. In the region shown in white, we found no upper bound on θ+. This region also corresponds to the subset of the parameter space where the associated homogeneous system is devoid of Turing patterning.

Figure 5: Size of continuation before a fold, θ+, for (a) the Schnakenberg model and (b) the Gierer-Meinhardt model as γ is varied along with (a) β0 and (b) α0, respectively. The size of the continuation is presented in color on a log scale. All of these results are given for n = 1 in the heterogeneous term of the respective models. Red curves correspond to Λm = maxj ℜ(λj(Am)) = 0 for m = 1, 2, 3 (curves left to right on both subfigures), where λj(Am) are the eigenvalues defined in Section 2.1. A white background indicates that no fold was found for these parameter sets and θ was allowed to grow to 1.

Figure 6: Size of continuation before a fold, θ+, for (a) the Schnakenberg model and (b) the Gierer-Meinhardt model as γ and n are varied for each model. The size of the continuation is presented in color. Setting (a) β0 = 0.8 and (b) α0 = 0.1 in each model respectively, Λn = maxj ℜ(λj(An)) = 0, where λj(Am) are the eigenvalues defined in Section 2.1, has two solutions: the solution with the smallest γ is shown as the blue line and the other as the red line. A white background indicates that no fold was found for these parameter sets and θ was allowed to grow to 1. In (b) the green dashed line is an overlay of the red line with half the value of n for each γ; this curve surprisingly traces a pattern of small θ+. In (a) a red × indicates a continuation that runs into numerical difficulties.
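The mode stability indicator Λm = maxj ℜ(λj(Am)) that defines these red curves can be evaluated directly: Am is the kinetics Jacobian at the homogeneous steady state, shifted by the diffusive decay of the mth cosine mode. The sketch below is an illustration under assumed classical Schnakenberg kinetics f = a − u + u²v, g = b − u²v with hypothetical stand-in parameters a = 0.2, b = 0.8 (the paper's β0 is not reproduced) and diffusion matrix D = diag(d, 1) with d = 1/40.

```python
import numpy as np

def lambda_max(m, gamma, a=0.2, b=0.8, d=1/40):
    """Largest real part among eigenvalues of A_m = gamma*J - (m*pi)^2 * D,
    the linearisation of the homogeneous problem restricted to mode cos(m*pi*x).
    a, b are hypothetical stand-in parameters, not the paper's."""
    u, v = a + b, b / (a + b) ** 2              # homogeneous steady state (f = g = 0)
    J = np.array([[-1.0 + 2.0 * u * v, u ** 2],
                  [-2.0 * u * v,      -u ** 2]])  # kinetics Jacobian at (u, v)
    D = np.diag([d, 1.0])                       # slow activator, fast inhibitor
    A = gamma * J - (m * np.pi) ** 2 * D        # mode-m linear operator
    return np.linalg.eigvals(A).real.max()
```

With these stand-in values and γ = 1, only the m = 1 mode comes out unstable (Λ1 > 0 while Λ0, Λ2, Λ3 < 0), consistent with a single-wavelength instability on a small domain.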
The red curves furthest to the left correspond to m = 1 (the onset of Turing instability in the eigenfunction cos(πx) at θ = 0). Note that our growing heterogeneity is also of this form, cos(nπx) with n = 1 (see Equations (19) and (23)). We find that because of this, a fold is very quick to form in the numerical continuation near the red curve corresponding to m = 1, but not near the onset of instability for the higher modes. Small θ+ is shown by darker colors in the plot. To investigate whether small θ+ is associated with m = 1 specifically because n = 1, we varied n in the Schnakenberg model from 1 to 10. In Fig. 6, for each n, holding β0 = 0.8 (a) and α0 = 0.1 (b), we plot the size of the continuation θ+ as γ is increased. We indicate the minimum value of γ (blue line) and the maximum value of γ (red line) for which Λn = 0. That is, for n = 1 the blue and red curves correspond to the first and second intersections of β0 = 0.8 (a) and α0 = 0.1 (b) with the respective red curves in Fig. 5. We see that for each n, the size of θ+ is very small at both zeros of Λn. What is also surprising is that if n is larger than 1 and γ is smaller than that required to make the nth mode unstable in the homogeneous problem, the continuation did not fold. That is, we may have a Turing instability in the homogeneous problem because of an instability in the m = 1 mode, but if the heterogeneity has a higher spatial frequency, say n = 2, the base state may not encounter a fold readily.
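The blue and red lines of Fig. 6, the smallest and largest γ with Λn = 0, can be recovered numerically by bracketing the sign changes of Λn as a function of γ. A minimal sketch, again assuming classical Schnakenberg kinetics f = a − u + u²v, g = b − u²v with hypothetical stand-in parameters a = 0.2, b = 0.8 and D = diag(1/40, 1) (not the paper's exact setup):

```python
import numpy as np

def lambda_max(n, gamma, a=0.2, b=0.8, d=1/40):
    """Largest real part among eigenvalues of A_n = gamma*J - (n*pi)^2 * D.
    a, b are hypothetical stand-in parameters."""
    u, v = a + b, b / (a + b) ** 2              # homogeneous steady state
    J = np.array([[-1.0 + 2.0 * u * v, u ** 2],
                  [-2.0 * u * v,      -u ** 2]])
    A = gamma * J - (n * np.pi) ** 2 * np.diag([d, 1.0])
    return np.linalg.eigvals(A).real.max()

def bisect(f, lo, hi, tol=1e-10):
    """Plain bisection for a sign change of f on [lo, hi]."""
    flo = f(lo)
    assert flo * f(hi) < 0, "bracket must straddle a sign change"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if flo * f(mid) <= 0:
            hi = mid
        else:
            lo, flo = mid, f(mid)
    return 0.5 * (lo + hi)

# The two zeros of Lambda_1 in gamma: the homogeneous m = 1 mode is unstable
# only between them (brackets chosen by inspecting the sign of lambda_max).
gamma_lo = bisect(lambda g: lambda_max(1, g), 0.1, 1.0)
gamma_hi = bisect(lambda g: lambda_max(1, g), 1.0, 10.0)
```

Between `gamma_lo` and `gamma_hi` the n = 1 mode is unstable; outside, Λ1 < 0, mirroring the "at most two solutions" of the quadratic noted above.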
As the scale parameter γ is increased beyond the red line, we find what appears to be noise in θ+, but within this noise there appear to be patterns. Looking specifically at the Gierer-Meinhardt model in Fig. 6 (b), we see small θ+ near the maximum value of γ for which Λ2n = 0. We have indicated that this is the case by tracing the green dashed line over the expanse of small θ+. This effect can also be seen in Fig. 5 (a) for n = 1 by looking at the left branch of the m = 2 red curve, where a noticeable dark shade appears. As γ increases, the magnitude to which θ can be continued before reaching a fold tends to increase, before eventually no fold is reached at all. However, numerical instabilities are prevalent in this region, as shown specifically by the red × in Fig. 6 (a), so the accuracy of these results remains questionable. We shall look specifically at the continuation described by this red × in the next section. The numerical results seem to become more accurate as the spatial grid becomes finer and the maximum step size in θ becomes smaller, but due to the computational cost of producing parameter scan results, the accuracy of the results here is limited.

Numerical Issues

The inconsistent numerical issue that occurs occasionally in the parameter sweeping experiments of the previous section is investigated here. In particular, we investigate the red × continuation in Fig. 6 (a). In this continuation
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content='2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content='4 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content='6 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content='8 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content='0 θ 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content='8 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content='0 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content='2 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content='4 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content='6 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content='8 ∥u1∥∞ a) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content='08 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content='10 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content='12 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content='14 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content='16 θ 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content='86 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content='88 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content='90 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content='92 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content='94 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content='96 ∥u1∥∞ b) Small step Fold point Other branch Long Step Solution branches Figure 7: Plot of branches for the numerically inconsistent case highlighted in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content=' 6 (a) with varying maximum step size.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content=' In purple, the base state branch and continuation through the fold point (green dot) with very small step sizes is shown.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content=' In yellow, a different branch is shown and the × symbols show the updates in the continuation algorithm if the step size is too coarse.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content=' Plot (a) shows the full bifurcation diagram whilst plot (b) displays a zoomed version of the region enclosed in the red box to show detail near the fold point.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content=' a maximum step size of 10−1 was used.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content=' This is a relatively large step size, but since the pde2path package adaptively adjusts the step size as needed, it can usually make out the finer details without much increase in computational cost.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content=' However, in this case, the larger step size causes the solution to jump from one branch to another.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content=' This can be seen in bifurcation Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content=' 7, where for a small step size, a fold is encountered early in the continuation, but for a large step size, the continuation jumps to a different branch.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content=' Clearly the results in this region are unreliable.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content=' It is not clear how small the step size must be made in order to avoid this occurring.' 
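The branch-jumping behaviour can be illustrated with a toy natural-parameter continuation. The sketch below is our own illustration, not the pde2path algorithm: it continues solutions of the scalar fold problem u³ − u + θ = 0, reusing the previous solution as the predictor for the next θ, and idealises the corrector as "pick the real root nearest the predictor". With small steps the continuation tracks the upper branch up to its fold near θ ≈ 0.385; a single long step lands the predictor in the basin of a different branch, the same kind of silent jump described above.

```python
import numpy as np

def corrector(theta, u_pred):
    """Solve u^3 - u + theta = 0, returning the real root closest to the
    predictor u_pred (a stand-in for a Newton corrector, which converges
    to whichever solution's basin the predictor lies in)."""
    roots = np.roots([1.0, 0.0, -1.0, theta])
    real = roots[np.abs(roots.imag) < 1e-9].real
    return real[np.argmin(np.abs(real - u_pred))]

def continue_branch(thetas, u0):
    """Natural-parameter continuation: reuse the last solution as the
    predictor for the next value of theta."""
    u, path = u0, []
    for theta in thetas:
        u = corrector(theta, u)
        path.append(u)
    return path

# The branch starting at u = 1 for theta = 0 folds at
# theta = 2 / (3 * sqrt(3)) ~ 0.385.
fine = continue_branch(np.linspace(0.0, 0.35, 50), u0=1.0)  # small steps
coarse = continue_branch([0.0, 0.6], u0=1.0)                # one long step

print(fine[-1] > 0)    # stayed on the upper (positive) branch
print(coarse[-1] > 0)  # stepped over the fold onto a different branch
```

Past the fold no nearby solution exists on the original branch, so the coarse run ends on the only remaining real root, far from the branch it started on, just as the coarse pde2path run jumps to the yellow branch in Fig. 7.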
It does raise an interesting question, though. In this example, it is fairly clear that the (yellow) branch that the coarse numerical algorithm found does not technically satisfy the numerical continuation criteria for a base state. That being said, looking at the distributions on either side of the singularity, it is possible that the yellow branch should perhaps be considered a base state. It remains unclear whether such a suitable branch can be found for other cases. However, this case hints at the possibility that there may be a better definition for a base state than the one presented in this manuscript (one which can potentially always describe a unique state for all problems).

3.3 Critical domain length

The extension of the Turing instability to spatially-dependent RD systems allows us to distinguish between patterned states and base states. Previously these solution states were often indistinguishable.
This meant that analysing certain phenomena, such as the critical domain length, was very challenging or impossible. Now that the Turing instability has a spatially-dependent analogue, we can study such phenomena. As a proof of concept, we will study how the critical domain length changes as the size of the heterogeneity in a spatially-dependent RD system increases. The critical domain length has important physical implications, especially in developmental scenarios. In a scenario where the domain is slowly growing, Turing patterns will arise only if the size of the domain is above the critical domain length. Therefore, assessing the impact of a spatially-dependent term on the critical domain length could have key implications for these developmental scenarios. We will investigate the change in the critical domain length with respect to the size of the heterogeneity for two different reaction terms.
The critical domain length is encoded in a critical γ value, which we will call γc. Denote by γc,0 ∈ R+ the critical γ value for the classical RD system, and by γc,θ ∈ R+ the critical γ value for the heterogeneous RD system with parameter θ. Further, define Lc,0 := √γc,0 and Lc,θ := √γc,θ as the respective critical domain lengths. Here we are accepting Lc = √γc to be a non-dimensional equivalent of the critical domain length. The value γc,θ is defined as the largest γ such that the base state of Equations (7) and (8) is stable for all γ < γc,θ, but exhibits Turing instabilities for some γ > γc,θ. It is infeasible to check all γ values less than some candidate value for γc,θ. Instead, we can rely on the fact that when γ = γc,0, Λm = 0, which can be calculated exactly for both the Schnakenberg and Gierer-Meinhardt models.
Instead of parameterising the base state branch with the size of the heterogeneity θ only, we will also parameterise with respect to γ. In doing so, we are assuming that a path-independence result holds. That is, the base state solution for some γ0 > 0 can be found by first finding the base state solution for another γ1 > 0, and then continuing from that base state solution with respect to γ to find the solution at γ0. Initially we will use γ = γc,0 to perform the continuation, as this is known exactly and we will assume that it is close to γc,θ. After finding a base state solution with the initial γ value, we perform numerical continuation with respect to γ, increasing or decreasing γ until finding γc,θ for a given θ. We reach the critical value γc,θ when the base state (with respect to γ but constant θ) undergoes a change of stability. If the base state found for γ = γc,0 is stable, then we will increase γ in the second-stage continuation.
Likewise, we will decrease γ if the base state is unstable. Determining whether a steady state solution is stable can be done using inbuilt methods in pde2path [15]. We are relying on using γ = γc,0 as an initial condition for the continuation. However, based on recent analysis of heterogeneous RD systems, there are points where the system with θ = 0 is outside of the Turing region, yet we still expect to see Turing instabilities for a sufficiently large γ [6]. If the homogeneous system defined by θ = 0 is outside of the Turing region, it is unclear what the initial γ value should be. A further investigation into a method for finding the critical domain length in this case should be considered. Fig. 8 shows the critical domain length Lc for the Schnakenberg system for a range of θ and β0 values.
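The two-stage search for γc,θ described above (march γ until the base state's stability flips, then refine) can be sketched on a toy problem. In the sketch below the Jacobian J and diffusion matrix D are made-up illustrative values, not the Schnakenberg or Gierer-Meinhardt systems of this paper, and stability is judged from a homogeneous dispersion relation rather than a pde2path eigenvalue computation; it only illustrates the bracketing-plus-bisection structure of the search.

```python
import numpy as np

# Toy linearisation: stability of the base state is judged from the
# dispersion relation for modes k = m*pi, via eigenvalues of
# gamma*J - k^2*D.  J and D are illustrative values only.
J = np.array([[3.0, -4.0], [4.0, -4.0]])  # reaction Jacobian at the base state
D = np.diag([1.0, 10.0])                  # diffusion coefficients

def is_stable(gamma, n_modes=10):
    """Stable iff every spatial mode decays (all eigenvalues in the left half-plane)."""
    for m in range(n_modes + 1):
        k2 = (m * np.pi) ** 2
        if np.linalg.eigvals(gamma * J - k2 * D).real.max() > 0:
            return False
    return True

def find_critical_gamma(is_stable, gamma0, factor=1.5, tol=1e-8):
    """Two-stage search: march gamma geometrically from gamma0 until the
    base state changes stability, then bisect the resulting bracket."""
    if is_stable(gamma0):
        lo, hi = gamma0, gamma0 * factor
        while is_stable(hi):          # increase gamma while still stable
            lo, hi = hi, hi * factor
    else:
        lo, hi = gamma0 / factor, gamma0
        while not is_stable(lo):      # decrease gamma while still unstable
            lo, hi = lo / factor, lo
    while hi - lo > tol:              # bisect down to the stability change
        mid = 0.5 * (lo + hi)
        if is_stable(mid):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

gamma_c = find_critical_gamma(is_stable, 1.0)
L_c = np.sqrt(gamma_c)  # non-dimensional critical domain length
print(round(gamma_c, 3), round(L_c, 3))  # gamma_c ~ 4.052, L_c ~ 2.013
```

For this toy system the m = 1 mode destabilises first, at γc = π²(26 − √516)/8 ≈ 4.052, which the search recovers; the critical domain length then follows as Lc = √γc.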
Figure 8: Critical domain lengths Lc,θ of the Schnakenberg system described in Section 2.2.1. The critical domain length is plotted for a range of heterogeneity sizes θ as a function of the parameter β0.

Figure 9: Critical domain lengths Lc,θ of the Gierer-Meinhardt system described in Section 2.2.2. The critical domain length is plotted for a range of heterogeneity sizes θ as a function of the parameter a0.

Figure 10: Production rates for the first chemical, u, and the second chemical, v, for the Schnakenberg model of Section 2.2.1. Plots (a) and (b) describe the model with β0 = 0.8 and β0 = 0.9 respectively. Each plot also shows the regions where the system is locally within the classical Turing pattern-generating parameter space. These plots are made for θ = 1/3, meaning that we found a critical domain length for the system shown in (b), but not in (a). In (a), the regions that are driving the Turing instability in the whole domain are further apart, and it is possible that these are effectively decoupled. In this case, we would expect to find a critical domain length, but a significantly larger one (where Turing patterns can be associated with the sub-domains which locally drive Turing patterns).

The length Lc appears to be decreasing with respect to β0 and increasing with respect to θ. On the other hand, Fig. 9 shows that the critical domain lengths for the Gierer-Meinhardt system appear to have the reverse dependence on the parameter a0. For a given production rate, if the system with θ = 0 is within the Turing region, then we expect to have a critical domain length for every other θ value.
This is because the cosine heterogeneity will cause at least one interval of the domain to be within the Turing region locally. Thus, for sufficiently large γ, we expect to see Turing patterns [6]. However, our method for finding the critical domain length fails in many of these cases. Most notably, the critical domain length could not be found for any β0 value when θ = 1/2, as seen in Fig. 8. This is potentially because there is a decoupling effect between two intervals which are locally within the Turing region. Fig. 10 shows the regions where the systems with θ = 1/3 and β0 = 0.8, 0.9 are locally within the Turing region.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content=' As seen in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content=' 8, a critical domain length could be found for β0 = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content='9, but not for β0 = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content='8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content=' Although the Turing regions are larger in the case where β0 = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content='8, the region between the two Turing regions is also larger.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content=' This gap between the Turing regions could have a decoupling effect where, if the two regions are close 23 enough together, they can act as one region for the purposes of forming a Turing instability.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content=' That is, there is enough bleed through from one region to the other to support a Turing pattern, despite having a region where no Turing pattern can be supported in between.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/n9E_T4oBgHgl3EQf8Byb/content/2301.08373v1.pdf'} +page_content=' So in this case there would be a critical θ value after which γ must be significantly larger before observing Turing instabilities which are local to the respective Turing regions.' 
4 Conclusions

Despite being widely applicable to various problems in science, Turing instabilities in spatially-dependent reaction-diffusion systems have received very little attention in the literature. One of the roadblocks to understanding the behaviour of these systems is the lack of a definition for Turing instabilities when the problem depends on the spatial coordinate. The classical definition relies on the existence of a uniform steady state solution; however, no such steady state exists for spatially-dependent problems in general. In reformulating the definition, the problem arises of distinguishing between patterned states and the base state. The base state in the classical case is the uniform steady state. Since the steady state solutions of most spatially-dependent reaction-diffusion systems are non-uniform, it is unclear which states should be labelled as ‘patterned’, and which as a ‘base state’.
In order to link the spatially-dependent case with the classical case, we utilise tools from continuation to gradually increase the size of the heterogeneity. That is, the spatially-dependent term (or heterogeneity) is parameterised such that it vanishes initially and grows to full amplitude as the introduced parameter increases. Once at full amplitude, the base-case solution to the reaction-diffusion equation is the solution found through continuation with a full-amplitude heterogeneity. This grounds the spatially-dependent base case in the classical base case, and allows us to distinguish between patterned and non-patterned states. Defining the base-case solution through continuation also provides a practical method for finding the base solution using numerical continuation. While we have extended the definition of the Turing base state, this does not directly extend the definition of the Turing instability.
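The continuation strategy described above can be sketched numerically. The following is a minimal illustration, not the authors' implementation: it performs natural-parameter continuation in the heterogeneity amplitude eps for a discretised 1-D Schnakenberg system, warm-starting each steady-state solve from the previous solution. All parameter values (a0, b0, gamma, d, theta, domain length) and the cosine form of the heterogeneity are hypothetical choices for the sketch.

```python
import numpy as np
from scipy.optimize import fsolve

# Natural-parameter continuation in the heterogeneity amplitude eps:
# start from the homogeneous steady state at eps = 0 and track the
# branch of steady states as eps is stepped towards full amplitude
# (eps = 1), warm-starting each solve from the previous solution.
# All parameter values below are illustrative, not those of the paper.

N, L = 64, 10.0                      # grid points, domain length
gamma, d = 1.0, 40.0                 # kinetic scaling, diffusion ratio
a0, b0, theta = 0.1, 0.9, 1.0 / 3.0  # production rates, heterogeneity strength
x = np.linspace(0.0, L, N)
h = x[1] - x[0]

def laplacian(w):
    """Second difference with zero-flux (Neumann) boundaries."""
    lw = np.empty_like(w)
    lw[1:-1] = (w[2:] - 2.0 * w[1:-1] + w[:-2]) / h**2
    lw[0] = 2.0 * (w[1] - w[0]) / h**2
    lw[-1] = 2.0 * (w[-2] - w[-1]) / h**2
    return lw

def residual(z, eps):
    """Steady-state residual of a 1-D Schnakenberg system with a
    cosine heterogeneity of amplitude eps in the production rate."""
    u, v = z[:N], z[N:]
    a = a0 * (1.0 + eps * theta * np.cos(2.0 * np.pi * x / L))
    fu = laplacian(u) + gamma * (a - u + u**2 * v)
    fv = d * laplacian(v) + gamma * (b0 - u**2 * v)
    return np.concatenate([fu, fv])

# Homogeneous steady state of the Schnakenberg kinetics (eps = 0).
u_star, v_star = a0 + b0, b0 / (a0 + b0) ** 2
z = np.concatenate([np.full(N, u_star), np.full(N, v_star)])

eps_reached = 0.0
for eps in np.linspace(0.0, 1.0, 21):
    z_new, _, ok, _ = fsolve(residual, z, args=(eps,), full_output=True)
    if ok != 1:                      # solver lost the branch: possible fold
        break
    z, eps_reached = z_new, eps
print(f"base-state branch continued to eps = {eps_reached:.2f}")
```

How far `eps_reached` gets before the solve fails is exactly the quantity measured in the parameter scans: the largest heterogeneity for which the base state still exists.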
Traditionally, a Turing instability requires the base state to be stable to spatially constant perturbations, and unstable overall. The stability-to-constant-perturbations condition is not relevant for a spatially-dependent base state. As such, the extension of the first Turing condition is not trivial even after defining the base state. We therefore discussed a few possibilities for how this condition could be extended, and the benefits of each. Much more research can be done to analyse the properties of each of these definitions. After defining the base state for heterogeneous Turing systems, it remains to determine whether such base states exist. We provided a variety of case studies showing that the existence of heterogeneous base states is not guaranteed. Further, we could not determine, a priori, whether base states exist for a finite-size heterogeneity.
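For orientation, the classical conditions being generalised can be stated as follows: linearising $\mathbf{u}_t = D\nabla^2\mathbf{u} + \mathbf{f}(\mathbf{u})$ about the uniform steady state $\mathbf{u}^*$ with kinetic Jacobian $J = \mathbf{f}'(\mathbf{u}^*)$, a Turing instability requires

```latex
% Classical Turing conditions about the uniform steady state u^*:
% (i)  stability to spatially constant perturbations,
% (ii) instability to some spatially varying mode.
\begin{align*}
  \text{(i)}  &\quad \operatorname{Re}\lambda < 0
                \text{ for every eigenvalue } \lambda \text{ of } J, \\
  \text{(ii)} &\quad \operatorname{Re}\lambda(k^2) > 0
                \text{ for some } k^2 > 0,
\end{align*}
```

where $\lambda(k^2)$ ranges over the eigenvalues of $J - k^2 D$. It is condition (i), posed relative to a uniform base state, whose extension to the spatially-dependent setting is non-trivial.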
To investigate this further, two parameter scans were performed. The first varied the average production rate of the first chemical and the length of the domain. The second varied the form of the heterogeneity and the length of the domain. Both parameter scans were run with both the Schnakenberg and the Gierer-Meinhardt reactions. For each set of parameters chosen, we measured how far the branch of solutions could be continued before reaching a fold bifurcation. This measures how large the heterogeneity can be before the Turing base state ceases to exist. The results of the parameter scans reveal strong correlations with existing, fundamental theory from the dispersion relation. Further research into a clear link between these theories is needed.
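The dispersion relation referred to above can be evaluated directly in the homogeneous setting. As a sketch (with illustrative parameter values, not those used in the scans), the growth rate of a Fourier mode with wavenumber $k$ is the largest real part of the eigenvalues of $J - k^2 D$:

```python
import numpy as np

# Dispersion relation for the homogeneous Schnakenberg kinetics:
# the growth rate of a mode with wavenumber k is the largest real
# part of the eigenvalues of J - k^2 D, where J is the kinetic
# Jacobian at the uniform steady state. Values are illustrative only.

a, b, gamma, d = 0.2, 1.3, 1.0, 50.0
u, v = a + b, b / (a + b) ** 2            # uniform steady state
J = gamma * np.array([[-1.0 + 2.0 * u * v, u**2],
                      [-2.0 * u * v,      -(u**2)]])
D = np.diag([1.0, d])

def growth_rate(k2):
    return np.linalg.eigvals(J - k2 * D).real.max()

k2 = np.linspace(0.0, 2.0, 400)
sigma = np.array([growth_rate(s) for s in k2])
print(f"stable to constant perturbations: {growth_rate(0.0) < 0.0}")
print(f"max growth rate {sigma.max():.3f} at k^2 = {k2[sigma.argmax()]:.3f}")
```

A Turing instability corresponds to a negative growth rate at $k = 0$ together with a positive growth rate over some band of $k^2 > 0$; the band of unstable wavenumbers is what the scan results are compared against.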
For small domain lengths, it becomes even more difficult to distinguish between patterned and non-patterned states. This is because the wavelengths of some patterns are often similar to the length scale of the heterogeneity. The new definition allows this distinction to be made, so systems with a small domain length can be analysed. This new distinction allowed us to analyse how the critical domain length changes for heterogeneous RD systems. For both the Schnakenberg system and a Gierer-Meinhardt system, we numerically determined the critical domain length for a range of heterogeneity sizes and a range of average production rates. This serves as a proof of concept of how the new definition could be applied to a new problem.
In some cases, however, the method we used to find the critical domain length failed. It is possible that there are discontinuities in the critical domain length caused by a decoupling in the domain. The method should be further developed to account for this, in an attempt to resolve these issues.

References

[1] J. F. G. Auchmuty and G. Nicolis, Bifurcation analysis of nonlinear reaction-diffusion equations—I. Evolution equations and the steady state solutions, Bulletin of Mathematical Biology, 37 (1975), pp. 323–365, https://doi.org/10.1007/bf02459519.
[2] H. Brezis, Functional Analysis, Sobolev Spaces and Partial Differential Equations, Springer New York, 2011, https://doi.org/10.1007/978-0-387-70914-7.
[3] S. N. Chow and J. K. Hale, Methods of Bifurcation Theory, Grundlehren der mathematischen Wissenschaften, Springer, New York, NY, Nov. 2011.
[4] R. A. Van Gorder, Pattern formation from spatially heterogeneous reaction–diffusion systems, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 379 (2021), https://doi.org/10.1098/rsta.2021.0001.
[5] M. Kozák, E. A. Gaffney, and V. Klika, Pattern formation in reaction-diffusion systems with piecewise kinetic modulation: An example study of heterogeneous kinetics, Phys. Rev. E, 100 (2019), p. 042220, https://doi.org/10.1103/PhysRevE.100.042220.
[6] A. L. Krause, V. Klika, T. E. Woolley, and E. A. Gaffney, From one pattern into another: analysis of Turing patterns in heterogeneous domains via WKBJ, Journal of The Royal Society Interface, 17 (2020), p. 20190621, https://doi.org/10.1098/rsif.2019.0621.
[7] B. A. Lawson and M. B. Flegg, A mathematical model for the induction of the mammalian ureteric bud, Journal of Theoretical Biology, 394 (2016), pp. 43–56, https://doi.org/10.1016/j.jtbi.2015.12.025.
[8] V. Méndez, S. Fedotov, and W. Horsthemke, Reaction-Transport Systems: Mesoscopic Foundations, Fronts, and Spatial Instabilities, 2010.
[9] K. Page, P. K. Maini, and N. A. Monk, Pattern formation in spatially heterogeneous Turing reaction–diffusion models, Physica D: Nonlinear Phenomena, 181 (2003), pp. 80–101, https://doi.org/10.1016/S0167-2789(03)00068-X.
[10] S. T. A. Pickett and M. L. Cadenasso, Landscape ecology: Spatial heterogeneity in ecological systems, Science, 269 (1995), pp. 331–334, https://doi.org/10.1126/science.269.5222.331.
[11] R. Sheth, L. Marcon, M. F. Bastida, M. Junco, L. Quintana, R. Dahn, M. Kmita, J. Sharpe, and M. A. Ros, Hox genes regulate digit patterning by controlling the wavelength of a Turing-type mechanism, Science, 338 (2012), pp. 1476–1480, https://doi.org/10.1126/science.1226804.
[12] G.-Q. Sun, M. Jusup, Z. Jin, Y. Wang, and Z. Wang, Pattern transitions in spatial epidemics: Mechanisms and emergent properties, Physics of Life Reviews, 19 (2016), pp. 43–73, https://doi.org/10.1016/j.plrev.2016.08.002.
[13] U. Timm and A. Okubo, Diffusion-driven instability in a predator-prey system with time-varying diffusivities, Journal of Mathematical Biology, 30 (1992), pp. 307–320, https://doi.org/10.1007/bf00176153.
[14] A. M. Turing, The chemical basis of morphogenesis, Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 237 (1952), pp. 37–72, https://doi.org/10.1098/rstb.1952.0012.
diff --git a/nNFIT4oBgHgl3EQfuCsf/content/tmp_files/2301.11341v1.pdf.txt b/nNFIT4oBgHgl3EQfuCsf/content/tmp_files/2301.11341v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..e874a44bb4d5aa4b924dc44be1c99f8f79a32719
--- /dev/null
+++ b/nNFIT4oBgHgl3EQfuCsf/content/tmp_files/2301.11341v1.pdf.txt
@@ -0,0 +1,1062 @@

Entanglement Purification of Hypergraph States

Lina Vandré and Otfried Gühne
Naturwissenschaftlich-Technische Fakultät, Universität Siegen, Walter-Flex-Straße 3, 57068 Siegen, Germany
(Dated: January 30, 2023)

Entanglement purification describes a primitive in quantum information processing, where several copies of noisy quantum states are distilled into few copies of nearly pure states of high quality via
local operations and classical communication. Especially in the multiparticle case, the task of entanglement purification is complicated, as many inequivalent forms of pure-state entanglement exist and purification protocols need to be tailored to different target states. In this paper we present optimized protocols for the purification of hypergraph states, which form a family of multi-qubit states that are relevant from several perspectives. We start by reformulating an existing purification protocol in a graphical language. This allows for systematic optimization, and we present improvements in three directions. First, one can optimize the sequences of the protocol with respect to the ordering of the parties. Second, one can use adaptive schemes, where the measurement results obtained within the protocol are used to modify the protocol. Finally, one can improve the protocol with respect to efficiency, requiring fewer copies of noisy states to reach a certain target state.

I. INTRODUCTION

For many tasks in quantum information processing one needs high-fidelity entangled states, but in practice most states are noisy. Purification protocols address this problem and provide a method to transform a certain number of copies of a noisy state into a single copy with high fidelity. The first protocols to purify Bell states were introduced by Bennett et al. and Deutsch et al. [1-3]. The concept was then further developed for different entangled states, especially in the multiparticle setting. This includes protocols for the purification of different kinds of states, such as graph states [4, 5] or W states [6]; see also [7] for an overview.

When analysing multiparticle entanglement, the exponentially increasing dimension of the Hilbert space renders the discussion of arbitrary states difficult. It is therefore a natural strategy to consider specific families of states which enable a simple description.
Graph states [8] and hypergraph states [9-11] form such families of multi-qubit quantum states, as they can be described by a graphical formalism. Besides this, they have found applications in various contexts, ranging from quantum error correction [12, 13] and measurement-based quantum computation [14, 15] to Bell nonlocality [16-18] and state verification and self-testing [19, 20]. Note that hypergraph states are a special case of the so-called locally maximally entangleable (LME) states [9].

Figure 1. Examples of graphs and hypergraphs. Panel (a) shows a fully connected graph, which corresponds to the three-qubit GHZ state. In the hypergraph state formalism one often draws edges as circles (right) instead of lines as in the graph state formalism (left). The hypergraph state corresponding to the hypergraph in panel (b) is locally unitarily equivalent to the state |H⟩ = (|000⟩ + |001⟩ + |010⟩ + |111⟩)/2.

Concerning entanglement purification, the only known purification protocol which is valid for hypergraph states is the one formulated for LME states by Carle, Kraus, Dür, and de Vicente (CKDdV) [21]. In this paper we first ask how this protocol can be translated to the hypergraph formalism. Based on this, we can then systematically develop improvements of the protocol.

Our paper is organized as follows. In Section II we introduce our notation and review hypergraph states. We also recall how operations like cnot and Pauli operators act graphically. In Section III we reformulate the CKDdV purification protocol in a graphical manner, providing a different language to understand it. Based on this, we propose systematic extensions in Section IV, which naturally arise from the graphical formalism. We first propose two approaches to make the protocol applicable to noisy states where the original CKDdV protocol fails.
Later we propose a method that requires fewer copies of noisy states to reach a certain target state. In Section V we extend the protocol to more qubits. We summarize and conclude in Section VI.

II. HYPERGRAPH STATES

In this section we present a short introduction to the class of hypergraph states and to the description of transformations between them. Readers familiar with the topic may directly skip to the next section.

A. Definition of Hypergraph States

A hypergraph H = (V, E) consists of a set V of vertices and a set E of hyperedges connecting them. In contrast to an ordinary graph, the edges of a hypergraph may connect more than two vertices; examples of hypergraphs are given in Figure 1.

Hypergraph states are multi-qubit quantum states, where the vertices and hyperedges of the hypergraph H = (V, E) represent qubits and entangling gates, respectively. The state |H⟩ corresponding to a hypergraph H = (V, E) is defined as

|H⟩ = ∏_{e∈E} C_e |+⟩^{⊗|V|} ≡ U_ph |+⟩^{⊗|V|} ,  (1)

where C_e is a generalized controlled-Z gate, acting on the qubits in the edge e as C_e = 1_e − 2 |11...1⟩⟨11...1|_e. If an edge contains only a single vertex, |e| = 1, then C_e reduces to the Pauli-Z operator, and for two-vertex edges C_e is just the standard two-qubit controlled phase gate. A detailed discussion of hypergraph state properties can be found in Refs. [22, 23].

Similarly as for graph states, there is an alternative definition using so-called stabilizing operators. First, one can define for each vertex i a stabilizing operator

S_i = U_ph X_i U_ph† ,  (2)

where X_i denotes the first Pauli matrix acting on the i-th qubit and U_ph denotes the collection of phase gates as in Eq. (1). Note that here only the gates with i ∈ e matter. The stabilizing operators are non-local hermitian observables with eigenvalues ±1; they commute and generate an abelian group, the so-called stabilizer.
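Eq. (1) translates directly into a dense-vector construction: starting from |+⟩^{⊗n}, each gate C_e flips the sign of every computational basis amplitude whose qubits in e are all 1. The following minimal numpy sketch (our own illustration, with qubits indexed from 0 rather than from 1 as in the figures, and a function name of our choosing) builds |H⟩ this way:

```python
import numpy as np

def hypergraph_state(n, edges):
    """|H> = prod_{e in E} C_e |+>^n; C_e flips the sign of every
    computational basis amplitude with all qubits of e equal to 1."""
    psi = np.full(2 ** n, 2 ** (-n / 2))          # |+>^{(x) n}
    for e in edges:
        mask = sum(1 << (n - 1 - q) for q in e)   # bit mask of the edge
        for idx in range(2 ** n):
            if idx & mask == mask:                # all qubits in e are |1>
                psi[idx] *= -1.0
    return psi

# Three-qubit state with the single hyperedge {1,2,3} (here (0,1,2)):
psi = hypergraph_state(3, [(0, 1, 2)])
```

All amplitudes are 1/√8 except the one of |111⟩, which carries a minus sign, as expected from the generalized CZ gate.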
Then, a hypergraph state may be defined as a common eigenvector of all stabilizing operators S_i. Here, one has to fix the eigenvalues of the S_i. Often, the state defined in Equation (1) is called |H_{00...0}⟩, as it is a common eigenstate of the S_i with eigenvalue +1. By applying Pauli-Z gates to the state, one obtains states orthogonal to |H_{00...0}⟩, where some of the eigenvalues are flipped to −1. By applying all possible combinations of Z gates, one obtains a basis {|H_k⟩ = Z_k |H_0⟩}, where k is a binary multi-index and Z_k = ∏_{v∈V} Z_v^{k_v}. In this notation, it holds that S_i |H_k⟩ = (−1)^{k_i} |H_k⟩; hence, |H_k⟩ is an eigenstate of S_i with eigenvalue (−1)^{k_i}. It is convenient to write arbitrary states in the hypergraph basis:

ρ = Σ_{k,k′} c_{k,k′} |H_k⟩⟨H_{k′}| .  (3)

Later we will purify states of this form to the state |H_0⟩.

Figure 2. Example of a cnot_{1,4} gate (with control qubit 1 and target qubit 4) performed on a hypergraph state. Left: hypergraph with vertex set V = {1, ..., 6} and edge set E = {{1}, {1,2,3}, {3}, {4}, {4,5,6}}. Right: hypergraph after applying cnot_{1,4}. A new edge {1,5,6} has appeared, while the edge {1} has vanished. The effect of applying the cnot_{1,4} gate is to introduce or delete edges from the set E_4 = {{1}, {1,5,6}}. The underlying rule is the following [24]: one takes the so-called adjacency A(4) of the target qubit t = 4, where one first considers all edges that contain t and then removes t from them. Here, we have A(4) = {{}, {5,6}}. Then, E_4 contains all edges which are unions of edges from A(4) with the edge {1} of the control qubit c = 1.

B. Operations on Hypergraph States

Many operations on hypergraph states can be represented in a graphical manner. In the following we explain the effect of applying the Pauli gates X and Z and of measuring in the corresponding bases σ_x and σ_z, discuss how to represent the cnot gate graphically [24], and introduce the reduction operator P_{v1,v2}, which we will need later. Note that in the following we use X and Z to denote the corresponding unitary transformations and σ_x and σ_z to denote the measurements. We only discuss transformations that are needed in the current paper; an overview of other transformations can be found in Ref. [23].

We have already mentioned the action of the unitary transformation Z_v on some qubit v. It adds the edge e = {v} to the set of edges E if it was not contained before, and removes it otherwise. For example, applying Z_2 and Z_3 to the left hypergraph state in Figure 2 would add a circle at vertex 2 and remove the one at vertex 3.

The unitary transformation X_v on a vertex v of a hypergraph state |H⟩ corresponding to the hypergraph H = (V, E) is given by

X_v |H⟩ = ∏_{e∈E} C_e ∏_{e′∈A(v)} C_{e′} |+⟩^{⊗|V|} ,  (4)

where A(v) is the adjacency of vertex v. This is a set of edges defined as

A(v) = {e \ {v} | e ∈ E with v ∈ e} .  (5)

In words, to build the adjacency A(v) one first takes the set of edges that contain v and then removes v from them. Examples of local X transformations are given in Figure 3.

Figure 3. Application of X operators on qubits 3 and 2. We first apply X_3 to the left graph. The adjacency of qubit 3 is given by A(3) = {{1, 2}}. This new edge is shown by the blue dashed line in the middle graph. We then apply X_2 to the middle graph. The adjacency of qubit 2 is given by A(2) = {{1}, {1, 3}}. These new edges are shown by the dotted purple lines in the right graph.

Let us now discuss the graphical description of some local measurements on hypergraph states. In order to
derive the post-measurement state after measuring vertex v, we can expand the state |H⟩ at this vertex as

|H⟩ = (1/√2) ( |0⟩_v |H_0⟩ ± |1⟩_v |H_1⟩ ) ,  (6)

where |H_0⟩ and |H_1⟩ are new hypergraph states with vertex sets V_0 = V_1 = V \ {v} and edge sets E_0 = {e ∈ E | v ∉ e} and E_1 = E_0 ∪ A(v) [23]. After measuring σ_z, we therefore obtain either the state |H_0⟩ or |H_1⟩. Measuring σ_x leads to a superposition of these two states, and the post-measurement state is then often not a hypergraph state anymore. In our case, we only measure σ_x on qubits which are separated from the other parts of the system, that is, where |H_0⟩ = |H_1⟩.

Applying a cnot_{ct} gate to a hypergraph state H, where c is the control and t the target, introduces or deletes hyperedges of the set E_t = {e_t ∪ {c} | e_t ∈ A(t)}. The new edge set after applying cnot_{ct} is given by

E′ = E △ E_t ,  (7)

where A △ B = (A ∪ B) \ (A ∩ B) is the symmetric difference of two sets. Since C_e^2 = 1, double edges cancel out. Therefore, the operation cnot_{ct} deletes edges which are in both E and E_t and introduces edges which are only in E_t. For example, in the left part of Figure 2, the adjacency of vertex 4 is given by A(4) = {{}, {5, 6}} and therefore E_4 = {{1}, {1, 5, 6}}.

Finally, another operator which will be important later is the reduction operator P_{v1,v2}, which maps two qubits to a single qubit. In the computational basis, the reduction operator is written as

P_{v1,v2} = |0⟩⟨00| + |1⟩⟨11| .  (8)

It merges the two vertices v_1, v_2 into one, which we call v_2. This action changes edges which contain v_1 into edges which contain v_2 and cancels pairs of edges e ≠ e′ with (e \ {v_1}) = (e′ \ {v_2}). The new edge set is therefore

E′ = {e ∈ E | v_1 ∉ e} △ {f ∪ {v_2} | f ∈ A(v_1)} .

An example is shown in Figure 4.

III.
THE CKDDV PURIFICATION PROTOCOL

In this section we discuss the only known purification protocol which works for hypergraph states [21]; we will refer to it as the CKDdV protocol. Originally, it was formulated for the more general LME states. We first reformulate the purification protocol in a graphical manner, which makes it intuitively understandable. Based on this reformulation, we can then propose improvements.

Figure 4. Application of the reduction projectors P_{3,6} and P_{2,5}. Each projector merges two vertices and their corresponding edges into one. In the first step, we merge vertices 3 and 6. In the second step we merge vertices 2 and 5. This results in two copies of the same edge, the green dashed edge {1, 5, 6} and the edge which was initially {1, 2, 3}, and such double edges cancel out.

In the simplest case, the aim is to purify a three-qubit state ρ to a pure hypergraph state, chosen to be the state |H_0⟩ = C_{123} |+⟩^{⊗3}. The state is distributed between three parties, Alice, Bob, and Charlie. In the following, we explicitly describe the sub-protocol which reduces noise on Alice's qubit; there are equivalent sub-protocols for Bob's and Charlie's qubits. The protocol is performed on two copies of a state ρ. Alice holds qubit a_1 of the first state and qubit a_2 of the second state, and equivalently for Bob and Charlie.

The key idea of the protocol is to induce a transformation on the basis elements of the form

|H_{i,j,k}⟩ |H_{i′,j′,k′}⟩ → δ_{i,i′} |H_{i, j+j′, k+k′}⟩ ,  (9)

where δ_{i,i′} denotes the Kronecker delta. This means that the sub-protocol compares the indices i, i′ on Alice's qubits, and the state is discarded when i ≠ i′. This map drives a general state as in Eq. (3) closer to the desired hypergraph state. In detail, the sub-protocol which implements this transition is given by:

Protocol 1 (CKDdV protocol).
(0) Alice, Bob, and Charlie share two copies of a state.
(i) Alice applies a local cnot_{a1,a2} gate to her qubits.
(ii) Bob and Charlie apply local reduction operators P_{v1,v2} to their qubits.
(iii) Alice measures qubit a_1 in the σ_x basis. She keeps the state if the outcome is "+1" and discards it otherwise.

Figure 5 shows how the basis elements |H_{000}⟩ |H_{i00}⟩ transform.

Figure 5. The CKDdV protocol, as described in Protocol 1. In the figure, the transformation of the two basis elements |H_{000}⟩ |H_{100}⟩ is shown. In step (i), Alice performs a local cnot_{1,4} gate. Then, Bob and Charlie apply the local reduction operators P_{2,5} and P_{3,6}, respectively. Double edges cancel out, so that the green dashed line and the former edge {1, 2, 3} vanish. In step (iii), Alice measures qubit 1 in the σ_x basis. If there is a single-qubit edge on vertex 1, such as the orange one in this figure, her measurement outcome will be "−1" and the state gets discarded. If one ignores all orange single-qubit edges in the figure, this corresponds to the transformation of the basis elements |H_{000}⟩ |H_{000}⟩. In this case, Alice's measurement outcome will be "+1" and the remaining state |H_{000}⟩ is kept.

In order to purify the full state, one needs to choose a sequence in which these sub-protocols are applied to the different parties. In Ref. [21], the sequence ABC-CAB-BCA was favoured, as it seems to perform better than just repeating the sequence ABC. The reason is that Charlie's qubit becomes more noisy due to the back action from the sub-protocols purifying Alice's and Bob's qubits.

IV. IMPROVING THE PROTOCOL PERFORMANCE

In order to purify towards one state of a certain fidelity, one needs a number of input states which depends exponentially on the number of iterations, as in each run of the protocol a certain fraction of states is discarded. Therefore it is of high interest to apply the sub-protocols in a sequence which works as efficiently as possible. As already pointed out by Carle et al. [21], which sequence is the most advantageous depends on the input state, and it is not trivial to see which sequence is optimal. Carle et al. decided to use the sequence S = ABC-CAB-BCA in all their applications, since it performs well in many cases. In the following we ask whether this sequence really is the best and how we can potentially find better sequences.

One should also notice that in step (ii) of the protocol a large fraction of states is discarded. The operator P_{v1,v2} corresponds to a positive map which maps two qubits that are in the same state to one qubit; both qubits are discarded if they are in different states. This can be seen as one outcome of a measurement. So, in the second part of this section we ask whether one can reduce the number of discarded states.

A. Improved and Adaptive Sequences

Consider a noisy three-qubit state ρ(p), where p is a noise parameter for some noise model, which should be purified to the pure hypergraph state |H_{000}⟩⟨H_{000}|. Clearly, for a fixed sequence S there is a maximal amount of noise up to which the state can still be purified, and there is a regime where one cannot purify it any more.

Interestingly, for some parameter regimes where the state cannot be purified, the purification protocol does not converge towards a state with random noise, but towards a specific state which is a mixture of two basis states: either (1/2)(|H_{000}⟩⟨H_{000}| + |H_{001}⟩⟨H_{001}|), (1/2)(|H_{000}⟩⟨H_{000}| + |H_{010}⟩⟨H_{010}|), or (1/2)(|H_{000}⟩⟨H_{000}| + |H_{100}⟩⟨H_{100}|). This observation gives insight into how well the purification works on the different parties: the protocol eliminates noise on two parties but fails on the third.
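This behaviour can be reproduced classically for states that are diagonal in the hypergraph basis, since the map (9) then acts directly on the basis probabilities. The sketch below is our own illustration (we assume the symmetric form of the map for Bob and Charlie, with the binary indices added mod 2); it applies one ABC round and renormalizes over the kept states:

```python
import itertools
import numpy as np

def purify_party(p, party):
    """One CKDdV sub-protocol (Eq. (9)) on a state diagonal in the
    hypergraph basis: p[i, j, k] is the probability of |H_ijk>. The chosen
    party's index must agree on both copies; the other indices are XORed."""
    q = np.zeros_like(p)
    idx = list(itertools.product(range(2), repeat=3))
    for a, b in itertools.product(idx, repeat=2):
        if a[party] != b[party]:              # indices differ -> pair discarded
            continue
        out = tuple(a[m] if m == party else a[m] ^ b[m] for m in range(3))
        q[out] += p[a] * p[b]
    return q / q.sum()                        # renormalize over kept states

p = np.full((2, 2, 2), 0.02)                  # diagonal noise ...
p[0, 0, 0] = 0.86                             # ... around the target |H_000>
for party in (0, 1, 2):                       # one ABC round
    p = purify_party(p, party)
print(p[0, 0, 0])                             # fidelity has increased
```

One ABC round already pushes the weight of |H_{000}⟩ visibly above its initial value, while the XOR structure of the map makes the "copying" of noise onto the not-yet-purified party explicit.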
For example, if we apply the sequence S = ABC, in the cases we tested there is a regime where the state does not get purified but converges to (1/2)(|H_{000}⟩⟨H_{000}| + |H_{001}⟩⟨H_{001}|). This is consistent with the explanation given in Ref. [21] that the purification has a disadvantage on Charlie's side. This may be explained as follows: by performing the protocol at one party, one aims to reduce noise on this party. As an unwanted side effect, one increases noise on the other parties. This happens because if there is noise on the first input state, the local reduction operator will "copy" it to the second state (see Equation (9)). So, when choosing the sequence S = ABC, one increases the noise on Charlie's qubit two times before purifying it for the first time.

How well the protocol performs on each party can be analysed using the measurement statistics obtained in step (iii) of the protocol. The probability of measuring outcome "+1" in step (iii) on a qubit belonging to a certain party indicates how much noise the state carries on this party. On the perfect target state, one does not detect any noise and therefore measures outcome "+1" with probability equal to one. If one applies the protocol to the state (1/2)(|H_{000}⟩⟨H_{000}| + |H_{001}⟩⟨H_{001}|), however, one obtains outcome "+1" with probability equal to one or 0.5, depending on which sub-protocol was applied: if it was the sub-protocol where Alice's or Bob's qubit is measured in step (iii), the probability is equal to one; if Charlie's qubit was measured, the probability is 0.5. So, by evaluating the probabilities of measuring outcome "+1" in step (iii) of the protocol, one can adapt the protocol to the given state.

All in all, we use two approaches to find better sequences.

             | E_wn(ρ, p)         | E_deph(ρ, p)       | E_depo(ρ, p)
S1           | ABC-CBA-ABC        | ABC-CBA-CBA        | ABC-CAB-BCA
S2           | BAB-CAB-ABA        | CCC-ACB-CBC        | BBB-BCB-BBB-BAB
⃗a            | (0.33, 0.35, 0.32) | (0.35, 0.43, 0.21) | (0.35, 0.34, 0.31)
b            | 0.35               | 0.39               | 0.44

Table I. Sequences S1 and S2, approximate weight vectors ⃗a, and bounds b for states with three kinds of noise; see the text for details.

The first approach is to find an optimal sequence which allows a high noise tolerance and is then applied without further observation of the statistics. The second approach uses two sequences, where we switch from one to the other depending on the measurement outcomes during the process. The first approach helps to find sequences which are more efficient also for the purification of states with a low noise level. The second approach gives a method to purify states which would not be purifiable otherwise.

To find an advantageous sequence in the first approach, we consider input states which are slightly too noisy to be purified with the standard sequence from Ref. [21]. We need sufficiently many states, so that we can estimate the probability of measuring "±1" in step (iii) of the protocol. If the purification works, the probability of measuring "−1" tends to zero; otherwise it tends to 0.5. Knowing this probability at each step of the protocol, and therefore on which party the purification fails, we can update our sequence such that the new sequence gives an advantage to the party which failed before. This process can be repeated until we do not find a better sequence of a certain length; we restricted ourselves to sequences of length nine. The best sequence we find in this way we call S1.

With the second approach, we give a way to purify states which cannot be purified by sequence S1 because their initial fidelity is slightly beyond the threshold. We start using sequence S1 and switch to sequence S2 depending on the measurement outcomes of step (iii).
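The switching rule of the second approach, as detailed next in the text (switch once a weighted combination of the three most recent "−1" probabilities for one party exceeds a bound), reduces to a few lines. In this sketch of ours, the weights ⃗a and bound b are the approximate white-noise values from Table I, and the function name is our own:

```python
import numpy as np

def should_switch(fail_probs, a, b):
    """Switch from sequence S1 to S2 once the weighted recent probabilities
    of measuring "-1" for one party satisfy a . x > b (cf. Table I)."""
    x = np.asarray(fail_probs, dtype=float)  # x[2] is the newest estimate
    return float(a @ x) > b

a_wn, b_wn = np.array([0.33, 0.35, 0.32]), 0.35   # white-noise row of Table I

print(should_switch([0.40, 0.45, 0.50], a_wn, b_wn))  # persistent failure
print(should_switch([0.05, 0.04, 0.03], a_wn, b_wn))  # purification working
```

With persistently high failure probabilities the weighted sum exceeds the bound and the protocol switches; near-zero failure probabilities keep it on S1.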
Our switching condition is the following: after each measurement of step (iii), we evaluate the probability of measuring "−1" for the given party. Based on the last three probabilities associated with the same party, we decide whether to switch or not. For ⃗x being the vector of these three probabilities, where x_3 is the newest probability, we switch if the scalar product ⃗a · ⃗x exceeds a bound b, where ⃗a is a weight vector.

             | p_min from [21] | p_min from S1 | p_min from adaptive protocol
E_wn(ρ, p)   | 0.6007          | 0.5878        | 0.5876
E_deph(ρ, p) | 0.8013          | 0.7803        | 0.7747
E_depo(ρ, p) | 0.8136          | 0.8136        | 0.8132

Table II. Noise thresholds p_min reproduced from Ref. [21], obtained from our sequences S1 (see Table I), and for the adaptive approach. In the case of E_depo(ρ, p) we found that the sequence from Ref. [21] was already the best sequence of length nine; therefore there is no improvement of p_min in this case.

To see the efficiency of our methods, we consider different noise models. We analyze the influence of global white noise described by the channel

E_wn(ρ, p) = p ρ + ((1 − p)/2^n) 1 ,  (10)

where n is the number of qubits; in this section, the number of qubits is n = 3. We further analyse local noise channels given by E(ρ, p) = ∏_{i=1}^{n} E_i(ρ, p), where E_i is
+Recycling of Discarded States +If one wishes to purify a state using the CKDdV pro- +tocol one needs a high number of input states in order to +obtain one state of a certain fidelity. Let us count how +many states we need to have one state after applying the +protocol once. In step (0) of the protocol, one takes two +input states. One does not loose states by applying cnot +in step (i). By applying the reduction operator Pv1,v2, ap- +proximately 1 +2 of the pairs are lost. Since this operator +is applied on two parties in step (ii), one needs approx- +imately four pairs. In step (iii), one measures outcome +“+1” with a probability ⩽ 1. This probability depends on +the fidelity of the states and increases with increasing fi- +delity. So, in total, approximately 8 = 23 input states are +required to obtain one output state. To prepare a state +for which we need to apply the protocol m times, we +need more than 8m input stats. To purify, for example, a +state of initial fidelity 0.93 to a state of fidelity of 0.994, +we need three steps. The required number of input states +to obtain one output state is roughly 8.73 ≈ 660. If we +want to purify the same state to a fidelity of 0.999, which +we reach after six steps, we need about 8.386 ≈ 346 000 +input states to get one new state. + +6 +It is natural to try to use the available quantum states +more efficiently. In step (ii) of the CKDdV protocol, one +performs a projective measurement and considers only +one outcome, namely Pv1,v2, which we get with probabil- +ity approximately 1 +2. We suggest to use the states which +were discarded because we measured something differ- +ent than Pv1,v2. The second reduction operator P ⊥ +v1,v2 is +perpendicular to Pv1,v2 and defined as +P ⊥ +v1,v2 = |0⟩⟨10| + |1⟩⟨01| = Pv1,v2(Xv1 ⊗ 1v2). +(13) +As Pv1,v2, the operator P ⊥ +v1,v2 is a positive map. It maps +two qubits, which are in different states, to one qubit. 
+This can be seen as a different measurement outcome +than Pv1,v2, or one may interpret the set {Pv1,v2, P ⊥ +v1,v2} +as a quantum instrument. +In the original CKDdV protocol one keeps the state +only after measuring Pb1,b2Pc1,c2. +There are three +more +possible +measurement +outcomes: +Pb1,b2P ⊥ +c1,c2, +P ⊥ +b1,b2Pc1,c2, and P ⊥ +b1,b2P ⊥ +c1,c2. In the cases of measuring +P ⊥ +v1,v2 on at least one party, one obtains a post measure- +ment state on which one can apply some corrections to +get a state, which is similar to the input state. One can +collect these states and further purify them. +So, one can write down a modified protocol of the CK- +DdV protocol. Here, we give the sub-protocol which re- +duces noise on Alice’s qubits. The sub-protocols for Bob +and Charlie work equivalently. +Protocol 2 (Improved CKDdV protocol). +(0) Alice, Bob, and Charlie share two copies of a state. +(i) Alice applies a local cnota1,a2 gate on her qubits. +(ii) Bob and Charlie perform a measurement on their +qubits and measure the local reduction operators Pv1,v2 +and P ⊥ +v1,v2. +If the measurement outcome for Bob and +Charlie was Pv1,v2, continue with step (iiia). Else, con- +tinue with (iiib) +(iiia) After Bob and Charlie both measured Pv1,v2, Alice +measures qubits a1 in the σx basis. She keeps the state, +if the outcome is “+1”, and discards it otherwise. +(iiib) After measuring P ⊥ +v1,v2 on at least one pair of Bob +and Charlie’s qubits, Alice measures her qubit a1 in the +σz basis. If she measure “+1”, she keeps the state as it is. +Otherwise, Bob and Charlie apply some local unitaries, +which depend on the combinations of measurement out- +comes in step (ii) and are given in Table III. +The key idea is that output states from step (iiib) can +be collected and further purified. In case of measuring +P ⊥ +v1,v2 on at least one party, the protocol gives us a tran- +sition +|Hi,j,k⟩ |Hi′,j′,k′⟩ → |Hi′,j+j′,k+k′⟩ . 
+(14) +The resulting state has in general a lower fidelity than +the input state. This is caused by the same reason of +“copying” noise, as discussed before. Since in the consid- +ered case the protocol does not reduce noise, the fidelity +drops. +Measurement local correction local correction +outcomes +Bob +Charlie +Pb1,b2P ⊥ +c1,c2 +Z +1 +P ⊥ +b1,b2Pc1,c2 +1 +Z +P ⊥ +b1,b2P ⊥ +c1,c2 +Z +Z +Table III. In Protocol 2 step (iiib), Alice measures her qubit +a1 in the Z basis. If her outcome is “−1”, Bob and Charlie +have to apply local corrections to their qubits. +The local +corrections depend on their measurement outcomes from step +(ii) and are given in this table. +The first case is shown in +Figure 6. +1 +2 +3 +4 +5 +6 +1 +2 +3 +4 +5 +6 +1 +4 +5 +6 +1 +4 +5 +6 +4 +5 +6 +4 +5 +6 +4 +5 +6 +(i) cnot1,4 +(ii) P2,5, P ⊥ +3,6 += +(iiib) σ(1) +z += +1 +σ(1) +z += −1 +(iiib) Z5 +Figure 6. Modified Protocol 2 for the same initial states as +shown in Figure 5 for the case to measure Pb1,b2P ⊥ +c1,c2 in step +(ii). +Alice performs a σ(1) +z -measurement on her qubit 1 of +the state in the second raw. If she gets outcome “+1” in step +(iiib), the resulting state is the same as the initial state (qubits +4, 5 and 6). If she gets outcome “−1”, Bob’s qubit 5 has a +decoration, which he needs to correct. After Bob applied a +local Z5 unitary on qubit 5, again the resulting state is the +same as the initial state (qubits 4, 5 and 6). Note that this +is only the case, if there is no noise on qubit 2 and 3, as +shown in this figure. In general one obtains the state given in +Equation (14). +An example for Protocol 2 is shown in Figure 6, where +we assume the case that Bob measures P2,5 and Char- +lie measures P ⊥ +3,6. In this case, the local correction af- +ter measuring outcome “−1” is applying a unitary Z5 at +qubit 5. 
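The classical bookkeeping of step (iiib) is simple enough to spell out in code. The sketch below encodes Table III as a lookup and the label transition of Eq. (14); the names `CORRECTIONS` and `recycled_label` are ours, and the mod-2 addition of the indices is our reading of the binary multi-index labels (since $Z^2 = \mathbb{1}$).

```python
# Table III: which local corrections Bob and Charlie apply after Alice
# measures "-1" in step (iiib), keyed by their step-(ii) outcomes.
CORRECTIONS = {
    ("P", "P_perp"):      ("Z", "1"),
    ("P_perp", "P"):      ("1", "Z"),
    ("P_perp", "P_perp"): ("Z", "Z"),
}

def recycled_label(label1, label2):
    """Label transition of Eq. (14):
    |H_{i,j,k}> |H_{i',j',k'}>  ->  |H_{i', j+j', k+k'}>  (indices mod 2)."""
    (i, j, k), (ip, jp, kp) = label1, label2
    return (ip, (j + jp) % 2, (k + kp) % 2)

# Two noiseless copies |H_000> stay noiseless after recycling:
assert recycled_label((0, 0, 0), (0, 0, 0)) == (0, 0, 0)
```

This mirrors why recycled states are "similar to the input state": the first index is inherited from the second copy, while the remaining noise indices accumulate.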
Given a certain number of input states which we want to purify to a target fidelity, we obtain more output states of the desired fidelity if we follow Protocol 2 instead of the original CKDdV protocol. The effect in the cases we tested turned out, however, to be small. As input states, we chose the state $|H_{000}\rangle\langle H_{000}|$ mixed with white noise. We first applied Protocol 1 three times, that is, once on each party, and computed the fidelity $F_3$ of the output states. Then, we applied Protocol 2 on the same input states and compared how many more output states of fidelity $\geqslant F_3$ we get. In Figure 7 we show how much the number of output states increases by using Protocol 2, depending on the fidelity $F_0$ of the input states. In the chosen cases, we get approximately 4% more output states from using Protocol 2 instead of the CKDdV protocol.

Figure 7. Effect of using Protocol 2 instead of the original CKDdV protocol. The input states are given by $\mathcal{E}_{wn}(|H_0\rangle\langle H_0|, p)$. We first apply Protocol 1 three times and compute the fidelity $F_3$ of the output states. Then, we apply Protocol 2 on the same input states and compare how many more output states of fidelity $\geqslant F_3$ we get. The figure displays the increase of the number of output states (in %) by using Protocol 2, depending on the fidelity $F_0$ of the input states.

V. GENERALISATION TO MORE QUBITS

The methods described here can also be applied to states with more qubits and different arrangements of edges. We restrict our attention to hypergraphs which are k-regular and k-colorable. A hypergraph is k-regular if all edges $e \in E$ have order $k$, and it is k-colorable if it is possible to color the vertices of the hypergraph using $k$ colors such that no two vertices of the same color share a common edge.
For example, the hypergraph states shown in Figures 2 and 8 are 3-colorable and 3-regular. In this section we discuss purification protocols for hypergraph states of more than 3 qubits which are 3-colorable and 3-regular. In the following, we will denote the colors by A, B, and C.

The protocols can be generalised by letting all parties holding qubits of color A do what was described for Alice before. In the same way, parties holding a qubit of color B or C do what was described for Bob or Charlie, respectively. For an explicit formulation of the generalized protocol, see Ref. [21].

We analysed linear three-colorable states with up to six qubits under the influence of global white noise, dephasing and depolarisation. That is, the states to which we want to purify are $U_{123}U_{234}\,|+\rangle^{\otimes 4}$, $U_{123}U_{234}U_{345}\,|+\rangle^{\otimes 5}$, and $U_{123}U_{234}U_{345}U_{456}\,|+\rangle^{\otimes 6}$, as shown in Figure 8. We compare the noise threshold $p_{min}$ for the sequence proposed in Ref. [21] with new sequences $S_1$, found using the methods described in Section IV A.

State                      | $p_{min}$ from $S_{CKDdV}$ | $p_{min}$ from $S_1$ | sequence $S_1$
$\mathcal{E}_{wn}(\rho_3, p)$    | 0.6007 | 0.5878 | ABC-CBA-ABC
$\mathcal{E}_{wn}(\rho_4, p)$    | 0.4633 | 0.4396 | ABC-ACB-BCA
$\mathcal{E}_{wn}(\rho_5, p)$    | 0.3901 | 0.3486 | ABC-ABC-CBA
$\mathcal{E}_{wn}(\rho_6, p)$    | 0.3341 | 0.3017 | ABC-ACB-BAC*
$\mathcal{E}_{deph}(\rho_3, p)$  | 0.8013 | 0.7803 | ABC-CBA-CBA
$\mathcal{E}_{deph}(\rho_4, p)$  | 0.8014 | 0.7803 | ABC-CBA-CBA*
$\mathcal{E}_{deph}(\rho_5, p)$  | 0.8014 | 0.7803 | ABC-CBA-CBA*
$\mathcal{E}_{deph}(\rho_6, p)$  | 0.8014 | 0.7803 | ABC-CBA-CBA*
$\mathcal{E}_{depo}(\rho_3, p)$  | 0.8137 | 0.8136 | ABC-CAB-BCA
$\mathcal{E}_{depo}(\rho_4, p)$  | 0.8306 | 0.8122 | BAC-CBA-CAB
$\mathcal{E}_{depo}(\rho_5, p)$  | 0.8358 | 0.8128 | ACB-BCA-CBA
$\mathcal{E}_{depo}(\rho_6, p)$  | 0.8144 | 0.8121 | ABC-CBA-CAB

Table IV. Noise thresholds $p_{min}$ for the sequence $S_{CKDdV}$ proposed in Ref. [21] and for the new sequences $S_1$. The index of the state gives the number of qubits. In the case of $\mathcal{E}_{depo}(\rho_3, p)$ we found that the sequence from Ref. [21] was already the best sequence of length 9. Therefore there is no improvement of $p_{min}$. When we found (non-trivially) different sequences of the same length, we marked them with a star (*).
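For the small hypergraphs considered here, both combinatorial conditions can be checked by brute force. The sketch below is ours (the function names are not from the paper); it verifies that the linear 4-qubit hypergraph of Figure 8, with edges {1,2,3} and {2,3,4}, is 3-regular and 3-colorable.

```python
from itertools import product

def is_k_regular(edges, k):
    """Every edge has order k."""
    return all(len(e) == k for e in edges)

def is_k_colorable(vertices, edges, k):
    """Brute force: some assignment of k colors gives every edge
    pairwise distinct colors on its vertices."""
    for coloring in product(range(k), repeat=len(vertices)):
        color = dict(zip(vertices, coloring))
        if all(len({color[v] for v in e}) == len(e) for e in edges):
            return True
    return False

V = [1, 2, 3, 4]            # linear 4-qubit hypergraph of Figure 8
E = [{1, 2, 3}, {2, 3, 4}]
assert is_k_regular(E, 3) and is_k_colorable(V, E, 3)
```

The exhaustive search over $k^{|V|}$ colorings is only feasible for the few-qubit states studied in this section, which is all that is needed here.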
Our results are shown in Table IV. One sees that in the case of white noise for more qubits, the differences in the noise threshold $p_{min}$ become more significant. Therefore, especially in these cases it is more relevant to find good sequences. For the tested states with dephasing and depolarisation noise, the noise threshold is constant or varies slightly, respectively.

VI. CONCLUSION AND OUTLOOK

In this paper we discussed protocols for entanglement purification of hypergraph states. First, we reformulated the CKDdV protocol in a graphical language. This offers a new way to understand the protocol; furthermore, it allows to search for systematic extensions. Consequently, we introduced several improvements of the original protocol. These improvements are based on different sequences, adaptive schemes, as well as methods to recycle some of the unused states. While these modifications are conceptually interesting and can indeed improve the performance in various examples, the amount of the improvement in realistic examples seems rather modest.

Figure 8. Linear 3-colorable and 3-regular hypergraph states with 4, 5, and 6 qubits. The colors are denoted by A, B, and C. Note that two qubits which have the same color, for example qubits 1 and 4, still belong to different parties. Since we are restricted to local operations, we can only perform operations on qubits of the same party, that is, in general not on qubits of the same color.

The problem of finding efficient sequences is also relevant for purification protocols for other states and was raised for example in Ref. [4] in the context of two-colorable graph states. The methods developed here can be applied to this case, but also to all purification protocols which follow the concept introduced by Bennett et al. [1].
A further open question is how the effects of our methods scale with the number of qubits. Another open question is whether Protocol 2 can be further improved so that the effect gets more significant.

VII. ACKNOWLEDGMENTS

We thank Mariami Gachechiladze, Kiara Hansenne, Jan L. Bönsel, and Fabian Zickgraf for discussions. This work was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation, project numbers 447948357 and 440958198), the Sino-German Center for Research Promotion (Project M-0294), the ERC (Consolidator Grant 683107/TempoQ), the German Ministry of Education and Research (Project QuKuK, BMBF Grant No. 16KIS1618K) and the Stiftung der Deutschen Wirtschaft.

[1] C. H. Bennett, H. J. Bernstein, S. Popescu, and B. Schumacher, Phys. Rev. A 53, 2046 (1996).
[2] C. H. Bennett, D. P. DiVincenzo, J. A. Smolin, and W. K. Wootters, Phys. Rev. A 54, 3824 (1996).
[3] D. Deutsch, A. Ekert, R. Jozsa, C. Macchiavello, S. Popescu, and A. Sanpera, Phys. Rev. Lett. 77, 2818 (1996).
[4] H. Aschauer, W. Dür, and H.-J. Briegel, Phys. Rev. A 71, 012319 (2005).
[5] C. Kruszynska, A. Miyake, H. J. Briegel, and W. Dür, Phys. Rev. A 74, 052316 (2006).
[6] A. Miyake and H. J. Briegel, Phys. Rev. Lett. 95, 220501 (2005).
[7] W. Dür and H. J. Briegel, Reports on Progress in Physics 70, 1381 (2007).
[8] M. Hein, J. Eisert, and H. J. Briegel, Phys. Rev. A 69, 062311 (2004).
[9] C. Kruszynska and B. Kraus, Phys. Rev. A 79, 052304 (2009).
[10] R. Qu, J. Wang, Z.-s. Li, and Y.-r. Bao, Phys. Rev. A 87, 022311 (2013).
[11] M. Rossi, M. Huber, D. Bruß, and C. Macchiavello, New J. Phys. 15, 113022 (2013).
[12] P. W. Shor, Phys. Rev. A 52, R2493 (1995).
[13] T. Wagner, H. Kampermann, and D. Bruß, J. Phys. A Math. Theor. 51, 125302 (2018).
[14] R. Raussendorf and H. J. Briegel, Phys. Rev. Lett. 86, 5188 (2001).
[15] M. Gachechiladze, O. Gühne, and A. Miyake, Phys. Rev. A 99, 052304 (2019).
[16] V. Scarani, A.
Acín, E. Schenck, and M. Aspelmeyer, Phys. Rev. A 71, 042325 (2005).
[17] O. Gühne, G. Tóth, P. Hyllus, and H. J. Briegel, Phys. Rev. Lett. 95, 120405 (2005).
[18] M. Gachechiladze, C. Budroni, and O. Gühne, Phys. Rev. Lett. 116, 062321 (2016).
[19] T. Morimae, Y. Takeuchi, and M. Hayashi, Phys. Rev. A 96, 062321 (2017).
[20] F. Baccari, R. Augusiak, I. Šupić, J. Tura, and A. Acín, Phys. Rev. Lett. 124, 020402 (2020).
[21] T. Carle, B. Kraus, W. Dür, and J. I. de Vicente, Phys. Rev. A 87, 012328 (2013).
[22] O. Gühne, M. Cuquet, F. E. S. Steinhoff, T. Moroder, M. Rossi, D. Bruß, B. Kraus, and C. Macchiavello, J. Phys. A Math. Theor. 47, 335303 (2014).
[23] M. Gachechiladze, Quantum Hypergraph States and the Theory of Multiparticle Entanglement, Dissertation, University of Siegen (2019).
[24] M. Gachechiladze, N. Tsimakuridze, and O. Gühne, J. Phys. A Math. Theor. 50, 19LT01 (2017).

diff --git a/nNFIT4oBgHgl3EQfuCsf/content/tmp_files/load_file.txt b/nNFIT4oBgHgl3EQfuCsf/content/tmp_files/load_file.txt
new file mode 100644
index 0000000000000000000000000000000000000000..55b4d684e2d15f989d5ef9dab8ab068e4cf8ff1a
--- /dev/null
+++ b/nNFIT4oBgHgl3EQfuCsf/content/tmp_files/load_file.txt
@@ -0,0 +1,610 @@

Entanglement Purification of Hypergraph States
Lina Vandré and Otfried Gühne
Naturwissenschaftlich-Technische Fakultät, Universität Siegen, Walter-Flex-Straße 3, 57068 Siegen, Germany
(Dated: January 30, 2023)

Entanglement purification describes a primitive in quantum information processing, where several copies of noisy quantum states are distilled into few copies of nearly-pure states of high quality via local operations and classical communication.
Especially in the multiparticle case, the task of entanglement purification is complicated, as many inequivalent forms of pure state entanglement exist and purification protocols need to be tailored for different target states. In this paper we present optimized protocols for the purification of hypergraph states, which form a family of multi-qubit states that are relevant from several perspectives. We start by reformulating an existing purification protocol in a graphical language. This allows for systematic optimization and we present improvements in three directions. First, one can optimize the sequences of the protocol with respect to the ordering of the parties. Second, one can use adaptive schemes, where the measurement results obtained within the protocol are used to modify the protocols. Finally, one can improve the protocol with respect to efficiency, requiring fewer copies of noisy states to reach a certain target state.
I. INTRODUCTION

For many tasks in quantum information processing one needs high-fidelity entangled states, but in practice most states are noisy. Purification protocols address this problem and provide a method to transform a certain number of copies of a noisy state into a single copy with high fidelity. The first protocols to purify Bell states were introduced by Bennett et al. and Deutsch et al. [1–3]. The concept was then further developed for different entangled states, especially in the multiparticle setting. This includes protocols for the purification of different kinds of states, such as graph states [4, 5] or W states [6]; see also [7] for an overview.
When analysing multiparticle entanglement, the exponentially increasing dimension of the Hilbert space renders the discussion of arbitrary states difficult. It is therefore a natural strategy to consider specific families of states which enable a simple description. Graph states [8] and hypergraph states [9–11] form such families of multi-qubit quantum states, as they can be described by a graphical formalism. Besides this, they have found applications in various contexts, ranging from quantum error correction [12, 13] and measurement-based quantum computation [14, 15] to Bell nonlocality [16–18] and state verification and self-testing [19, 20]. Note that hypergraph states are a special case of the so-called locally maximally entangleable (LME) states [9]. Concerning entanglement purification, the only known purification protocol which is valid for hypergraph states is the one formulated for LME states by Carle, Kraus, Dür, and de Vicente (CKDdV) [21]. In this paper we first ask how this protocol can be translated to the hypergraph formalism.
Based on this, we can then systematically develop improvements of the protocol.

Our paper is organized as follows. In Section II we introduce our notation and review hypergraph states. We also recall how operations like cnot and Pauli operators act graphically. In Section III we reformulate the CKDdV purification protocol in a graphical manner, providing a different language to understand it. Based on this, we propose systematic extensions in Section IV, which naturally arise from the graphical formalism. We first propose two approaches to make the protocol applicable to noisy states where the original CKDdV protocol fails. Later we propose a method requiring fewer copies of noisy states to reach a certain target state. In Section V we extend the protocol to more qubits. We summarize and conclude in Section VI.

Figure 1. Examples of graphs and hypergraphs. Figure (a) shows a fully connected graph, which corresponds to the three-qubit GHZ state. In the hypergraph state formalism one often draws edges by circles (right) instead of lines as in the graph state formalism (left). The hypergraph state corresponding to the hypergraph in the lower part (b) of the figure is local unitary equivalent to the state $|H\rangle = (|000\rangle + |001\rangle + |010\rangle + |111\rangle)/2$.

arXiv:2301.11341v1 [quant-ph] 26 Jan 2023

II. HYPERGRAPH STATES

In this section we present a short introduction to the class of hypergraph states and the description of transformations between them. Readers familiar with the topic may directly skip to the next section.

A. Definition of Hypergraph States

A hypergraph $H = (V, E)$ is a set $V$ of vertices and hyperedges $e \in E$ connecting them. Contrary to a normal graph, the edges in a hypergraph may connect more than two vertices; examples of hypergraphs are given in Figure 1.

Hypergraph states are multi-qubit quantum states, where the vertices and hyperedges of the hypergraph $H = (V, E)$ represent qubits and entangling gates, respectively. The state $|H\rangle$ corresponding to a hypergraph $H = (V, E)$ is defined as

$$|H\rangle = \prod_{e \in E} C_e \, |+\rangle^{\otimes |V|} \equiv U_{ph} \, |+\rangle^{\otimes |V|}, \quad (1)$$

where $C_e$ is a generalized CZ gate, acting on the qubits in the edge $e$ as $C_e = \mathbb{1}_e - 2\,|11\ldots 1\rangle\langle 11\ldots 1|_e$. If an edge contains only a single vertex, $|e| = 1$, then $C_e$ reduces to the Pauli-Z operator, and for two-vertex edges $C_e$ is just the standard two-qubit controlled phase gate. A detailed discussion of hypergraph state properties can be found in Refs. [22, 23].

Similarly as for graph states, there is an alternative definition using so-called stabilizing operators. First, one can define for each vertex $i$ a stabilizer operator

$$S_i = U_{ph} X_i U_{ph}^\dagger, \quad (2)$$

where $X_i$ denotes the first Pauli matrix acting on the $i$-th qubit and $U_{ph}$ denotes the collection of phase gates as in Eq. (1).
Note that here only the gates with $i \in e$ matter. The stabilizing operators are non-local hermitian observables with eigenvalues $\pm 1$; they commute and generate an abelian group, the so-called stabilizer.

Then, a hypergraph state may be defined as a common eigenvector of all stabilizing operators $S_i$. Here, one has to fix the eigenvalues of the $S_i$. Often, the state defined in Equation (1) is called $|H_{00\ldots 0}\rangle$, as it is a common eigenstate of the $S_i$ with eigenvalue $+1$. By applying Pauli-Z gates on the state, one obtains states orthogonal to $|H_{00\ldots 0}\rangle$, where some of the eigenvalues are flipped to $-1$.
By applying all possible combinations of Z gates, one obtains a basis $\{|H_k\rangle = Z_k |H_0\rangle\}$, where $k$ is a binary multi-index and $Z_k = \prod_{v \in V} Z_v^{k_v}$. In this notation, it holds that $S_i |H_k\rangle = (-1)^{k_i} |H_k\rangle$. Hence, $|H_k\rangle$ is an eigenstate of $S_i$ with eigenvalue $(-1)^{k_i}$. It is convenient to write arbitrary states in the hypergraph basis:

$$\rho = \sum_{k,k'} c_{k,k'} \, |H_k\rangle\langle H_{k'}|. \quad (3)$$

Later we will purify states in this form to the state $|H_0\rangle$.

Figure 2. Example of a $\text{cnot}_{1,4}$ gate (with control qubit 1 and target qubit 4) performed on a hypergraph state. Left: Hypergraph with vertex set $V = \{1, \ldots, 6\}$ and edge set $E = \{\{1\}, \{1, 2, 3\}, \{3\}, \{4\}, \{4, 5, 6\}\}$. Right: Hypergraph after applying $\text{cnot}_{1,4}$. A new edge $\{1, 5, 6\}$ appeared while the edge $\{1\}$ vanished. The effect of applying the $\text{cnot}_{1,4}$ gate is to introduce or delete edges from the set $E_4 = \{\{1\}, \{1, 5, 6\}\}$. The underlying rule is the following [24]: One takes the so-called adjacency $A(4)$ of the target qubit $t = 4$, where one first considers all edges that contain $t$, but then removes $t$ from them. Here, we have $A(4) = \{\{\}, \{5, 6\}\}$. Then, $E_4$ contains all edges which are unions of edges from $A(4)$ with the edge $\{1\}$ of the control qubit $c = 1$.

B. Operations on Hypergraph States

Many operations on hypergraph states can be represented in a graphical manner.
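The edge-toggling rule described in the caption of Figure 2 can be implemented in a few lines. A minimal sketch (function names are ours), reproducing the example with $E_4 = \{\{1\}, \{1, 5, 6\}\}$:

```python
def adjacency(E, v):
    """A(v): edges containing v, with v removed."""
    return [set(e) - {v} for e in E if v in e]

def cnot(E, c, t):
    """Graphical cnot rule [24]: toggle (introduce or delete) every union of
    an edge in A(t) with the control edge {c}."""
    toggles = {frozenset(a | {c}) for a in adjacency(E, t)}
    return {frozenset(e) for e in E} ^ toggles  # symmetric difference

E = [{1}, {1, 2, 3}, {3}, {4}, {4, 5, 6}]               # left side of Figure 2
after = cnot(E, c=1, t=4)
expected = [{1, 2, 3}, {3}, {4}, {4, 5, 6}, {1, 5, 6}]  # right side of Figure 2
assert after == {frozenset(e) for e in expected}
```

The symmetric difference captures "introduce or delete": the edge $\{1\}$ vanishes because it was already present, while $\{1, 5, 6\}$ is new.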
In the following we explain the effect of applying the Pauli gates X and Z and of measuring in the corresponding bases $\sigma_x$ and $\sigma_z$, discuss how to represent the cnot gate graphically [24], and introduce the reduction operator $P_{v_1,v_2}$, which we will need later. Note that in the following, for Pauli matrices we use $X$ and $Z$ to denote the corresponding unitary transformations and $\sigma_x$ and $\sigma_z$ to denote the measurements. We only discuss transformations that are needed in the current paper; an overview of other transformations can be found in Ref. [23].

We have already mentioned the action of the unitary transformation $Z_v$ on some qubit $v$. It adds the edge $e = \{v\}$ to the set of edges $E$ if it was not contained before, or removes it otherwise. For example, applying $Z_2$ and $Z_3$ to the left hypergraph state in Figure 2 would add a circle at vertex 2 and remove the one at vertex 3.
The unitary transformation Xv on a vertex v of a hypergraph state |H⟩ corresponding to the hypergraph H = (V, E) is given by

Xv |H⟩ = ∏_{e∈E} Ce ∏_{e′∈A(v)} Ce′ |+⟩^⊗|V|,   (4)

where A(v) is the adjacency of vertex v. This is a set of edges defined as

A(v) = {e − {v} | e ∈ E with v ∈ e}.   (5)

In words, to build the adjacency A(v) one first takes the set of edges that contain v and then removes v from them. Examples of local transformations X are given in Figure 3.

Figure 3. Application of X operators on qubits 3 and 2. We first apply X3 on the left graph. The adjacency of qubit 3 is given by A(3) = {{1, 2}}. This new edge is shown by the blue dashed line in the middle graph. We then apply X2 to the middle graph. The adjacency of qubit 2 is given by A(2) = {{1}, {1, 3}}. These new edges are shown by the dotted purple lines in the right graph.

Let us now discuss the graphical description of some local measurements on hypergraph states. In order to derive the post-measurement state after measuring vertex v, we can expand the state |H⟩ at this vertex as

|H⟩ = (1/√2) (|0⟩v |H0⟩ ± |1⟩v |H1⟩),   (6)

where |H0⟩ and |H1⟩ are new hypergraph states with vertex set V0 = V1 = V \ v and edge sets E0 = {e ∈ E | v ∉ e} and E1 = E0 ∪ A(v) [23]. After measuring σz, we therefore either get the state |H0⟩ or |H1⟩. Measuring σx leads to a superposition of these two states, and often the post-measurement state is then not a hypergraph state anymore. In our case, we only measure σx on qubits which are separated from other parts of the system,
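In the same frozenset representation, Eqs. (4) and (5) say that Xv toggles exactly the edges in the adjacency A(v) (since Ce² = 1). A minimal sketch, with function names of our own choosing:

```python
def adjacency(edges, v):
    """A(v), Eq. (5): take every edge containing v, then remove v from it."""
    return {e - {v} for e in edges if v in e}

def apply_X(edges, v):
    """Graphical rule behind Eq. (4): X_v toggles every edge in A(v)."""
    return edges ^ adjacency(edges, v)

# Figure 2 (left) example, where the text states A(4) = {{}, {5, 6}}:
E = {frozenset({1}), frozenset({1, 2, 3}), frozenset({3}),
     frozenset({4}), frozenset({4, 5, 6})}
assert adjacency(E, 4) == {frozenset(), frozenset({5, 6})}
```

The Figure 3 example works the same way: for the single edge {1, 2, 3}, applying X3 adds the edge A(3) = {1, 2}.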
that is, where |H0⟩ = |H1⟩.

Applying a cnotct gate on a hypergraph state H, where c is the control and t the target, introduces or deletes hyperedges of the set Et = {et ∪ {c} | et ∈ A(t)}. The new edge set after applying cnotct is given by

E′ = E △ Et,   (7)

where A △ B = (A ∪ B) \ (A ∩ B) is the symmetric difference of two sets. Since Ce² = 1, double edges cancel out. Therefore, the operation cnotct deletes edges which are in both E and Et and introduces edges which are only in Et. For example, in the left part of Figure 2, the neighbourhood of vertex 4 is given by N(4) = {{}, {5, 6}} and therefore E4 = {{1}, {1, 5, 6}}.

Finally, another operator which will be important later is the reduction operator Pv1,v2, which maps two qubits to a single qubit. In the computational basis, the reduction operator is written as

Pv1,v2 = |0⟩⟨00| + |1⟩⟨11|.   (8)
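Equation (7) is again a one-line set operation in the frozenset sketch. The following (our own helper, not the paper's code) reproduces the Figure 2 example, where cnot1,4 removes {1} and creates {1, 5, 6}:

```python
def apply_cnot(edges, c, t):
    """cnot_{c,t} rule, Eq. (7): E' = E △ E_t with E_t = {e_t ∪ {c} | e_t ∈ A(t)}."""
    A_t = {e - {t} for e in edges if t in e}      # adjacency of the target qubit
    E_t = {frozenset(e | {c}) for e in A_t}       # union each element with the control
    return edges ^ E_t                            # symmetric difference

# Figure 2 example: A(4) = {{}, {5, 6}}, hence E_4 = {{1}, {1, 5, 6}}.
E = {frozenset({1}), frozenset({1, 2, 3}), frozenset({3}),
     frozenset({4}), frozenset({4, 5, 6})}
E_new = apply_cnot(E, c=1, t=4)
assert frozenset({1}) not in E_new        # the edge {1} vanished
assert frozenset({1, 5, 6}) in E_new      # the new edge {1, 5, 6} appeared
```

Note that the empty set in A(4) contributes the edge {1} itself to E4, which is why the pre-existing edge {1} cancels.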
It merges two vertices v1, v2 into one, which we call v2. This action changes edges which contain v1 into edges which contain v2 and deletes edges e, e′ with e ≠ e′ but (e \ {v1}) = (e′ \ {v2}). The new edge set will therefore be E′ = {e ∈ E | v1 ∉ e} △ {f ∪ {v2} | f ∈ A(v1)}. An example is shown in Figure 4.

Figure 4. Application of the reduction projectors P3,6 and P2,5. The projector merges two vertices and their corresponding edges into one. In the first step, we merge vertices 3 and 6. In the second step, we merge vertices 2 and 5. This results in two copies of the same edge (the green dashed edge {1, 5, 6} and the edge which was initially {1, 2, 3}), and such double edges cancel out.

III. THE CKDDV PURIFICATION PROTOCOL

In this section we discuss the only known purification protocol which works for hypergraph states [21]; we will refer to it as the CKDdV protocol. Originally, it was formulated for more general LME states. We first reformulate the purification protocol in a graphical manner, which makes it intuitively understandable. Based on this reformulation, we can then propose improvements.

In the simplest case, the aim is to purify a three-qubit state ρ to a pure hypergraph state, chosen to be the state |H0⟩ = C{123} |+⟩⊗3. The state is distributed between three parties, Alice, Bob, and Charlie. In the following, we explicitly describe the sub-protocol which reduces noise on Alice's qubit.
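The edge-set rule for the reduction operator can be sketched the same way (again, our own illustration). The cancellation of double edges described in the Figure 4 caption falls out of the symmetric difference:

```python
def reduce_pair(edges, v1, v2):
    """Reduction P_{v1,v2}: merge v1 into v2.
    E' = {e ∈ E | v1 ∉ e} △ {f ∪ {v2} | f ∈ A(v1)}; double edges cancel."""
    keep = {e for e in edges if v1 not in e}
    moved = {frozenset((e - {v1}) | {v2}) for e in edges if v1 in e}
    return keep ^ moved

# Mirror of the Figure 4 cancellation (only the two relevant edges shown):
E = {frozenset({1, 2, 3}), frozenset({1, 5, 6})}
E = reduce_pair(E, 3, 6)   # {1,2,3} -> {1,2,6}
E = reduce_pair(E, 2, 5)   # {1,2,6} -> {1,5,6}, colliding with the existing {1,5,6}
assert E == set()          # the double edge cancels out
```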
There are equivalent sub-protocols on Bob's and Charlie's qubits. The protocol is performed on two copies of a state ρ. Alice holds qubit a1 of the first state and qubit a2 of the second state, and equivalently for Bob and Charlie. The key idea of the protocol is to induce a transformation on the basis elements of the form

|Hi,j,k⟩ |Hi′,j′,k′⟩ → δi,i′ |Hi,j+j′,k+k′⟩,   (9)

where δi,i′ denotes the Kronecker delta. This means that the sub-protocol compares the indices i, i′ on Alice's qubits, and the state is discarded when i ≠ i′. This map drives a general state as in Eq. (3) closer to the desired hypergraph state. In detail, the sub-protocol which implements this transition is given by:

Protocol 1 (CKDdV protocol).
(0) Alice, Bob, and Charlie share two copies of a state.
(i) Alice applies a local cnota1,a2 gate on her qubits.
(ii) Bob and Charlie apply local reduction operators Pv1,v2 on their qubits.
(iii) Alice measures qubit a1 in the σx basis. She keeps the state if the outcome is “+1”, and discards it otherwise.

In Figure 5 it is shown how the basis elements |H000⟩ |Hi00⟩ transform.

Figure 5. The CKDdV protocol, as described in Protocol 1. In the figure, the transformation of the two basis elements |H000⟩ |H100⟩ is shown. In step (i), Alice performs a local cnot1,4 gate. Then, Bob and Charlie apply local reduction operators P2,5 and P3,6, respectively. Double edges cancel out, so that the green dashed line and the former edge {1, 2, 3} vanish. In step (iii), Alice measures qubit 1 in the σx basis. If there is a single-qubit edge on vertex 1, as the orange one in this figure, her measurement outcome will be “−1” and therefore the state gets discarded. If one ignores all orange single-qubit edges in the figure, this corresponds to the transformation of the basis elements |H000⟩ |H000⟩. In this case, Alice's measurement outcome will be “+1” and the remaining state |H000⟩ is kept.

In order to purify the full state, one needs to choose a sequence of sub-protocols in which these sub-protocols are applied on different parties. In Ref.
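On the level of basis labels, Eq. (9) is a simple post-selection rule. A hedged sketch (our own, assuming the indices i, j, k are bits that add modulo 2, as the states |Hijk⟩ suggest):

```python
def ckddv_subprotocol_A(label1, label2):
    """Action of the sub-protocol on basis labels, modelling Eq. (9).
    Returns (i, j+j', k+k') mod 2 if i == i', or None if the copy is discarded."""
    (i, j, k), (i2, j2, k2) = label1, label2
    if i != i2:
        return None                      # Alice measures "-1": state discarded
    return (i, (j + j2) % 2, (k + k2) % 2)

# The two cases illustrated in Figure 5:
assert ckddv_subprotocol_A((0, 0, 0), (1, 0, 0)) is None        # |H000>|H100>: discarded
assert ckddv_subprotocol_A((0, 0, 0), (0, 0, 0)) == (0, 0, 0)   # |H000>|H000>: kept
```

Cycling which index is compared (i for Alice, j for Bob, k for Charlie) gives the equivalent sub-protocols on the other parties.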
[21], the sequence ABC-CAB-BCA was favoured, as it seems to perform better than just repeating the sequence ABC. The reason is that the qubit of Charlie becomes more noisy due to the back action from the sub-protocols purifying Alice's and Bob's qubits.

IV. IMPROVING THE PROTOCOL PERFORMANCE

In order to purify towards one state of a certain fidelity, one needs a number of input states which depends exponentially on the number of iterations, as in each run of the protocol a certain fraction of states is discarded. Therefore it is of high interest to apply the sub-protocols in a sequence which works as efficiently as possible. As already pointed out by Carle et al. [21], which sequence is the most advantageous depends on the input state, and it is not trivial to see which sequence is optimal. Carle et al.
decided to use the sequence S = ABC-CAB-BCA in all their applications, since it performs well in many cases. In the following, we will ask whether the proposed sequence really is the best and how we can potentially find better sequences.

One should also notice that in step (ii) of the protocol a large fraction of states is discarded. The operator Pv1,v2 corresponds to a positive map which maps two qubits that are in the same state to one qubit; both qubits are discarded if they are in different states. This can be seen as one outcome of a measurement. So, in the second part of this section, we will ask whether one can reduce the number of discarded states.

A. Improved and Adaptive Sequences

Consider a noisy three-qubit state ρ(p), where p is a noise parameter for some noise model, which should be purified to the pure hypergraph state |H000⟩⟨H000|.
Clearly, for a fixed sequence S there is a maximal amount of noise up to which the state can still be purified, and there is a regime where one cannot purify it any more. Interestingly, for some parameter regimes where the state cannot be purified, the purification protocol does not converge towards a state with random noise, but towards a specific state which is a mixture of two states: either (1/2)(|H000⟩⟨H000| + |H001⟩⟨H001|), (1/2)(|H000⟩⟨H000| + |H010⟩⟨H010|), or (1/2)(|H000⟩⟨H000| + |H100⟩⟨H100|). This observation gives insight into how well the purification works on the different parties. The protocol eliminates noise on two parties but fails on the third party. For example, if we apply the sequence S = ABC, in the cases we tested, there is a regime where the state does not get purified but converges to (1/2)(|H000⟩⟨H000| + |H001⟩⟨H001|). This is consistent with the explanation given in Ref. [21] that the purification has a disadvantage on Charlie's site.
This may be explained as follows: by performing the protocol at one party, one aims to reduce noise on this party. As an unwanted side effect, one increases the noise on the other parties. This happens because, if there is noise on the first input state, the local reduction operator will “copy” it to the second state (see Equation (9)). So, when choosing the sequence S = ABC, one increases the noise on Charlie's qubit two times before purifying it the first time.

How well the protocol performs on each party can be analysed using the measurement statistics obtained in step (iii) of the protocol. The probability to measure outcome “+1” in step (iii) on a qubit belonging to a certain party gives insight into how much noise the state on this party has. On the perfect target state, one does not detect any noise and therefore measures outcome “+1” with probability equal to one.
If one applies the protocol to the state (1/2)(|H000⟩⟨H000| + |H001⟩⟨H001|), however, one obtains outcome “+1” with a probability equal to one or 0.5, depending on which sub-protocol was applied. If it was the sub-protocol where Alice's or Bob's qubits are measured in step (iii), the probability is equal to one. If it was the sub-protocol where Charlie's qubit is measured, the probability is 0.5. So, by evaluating the probabilities to measure outcome “+1” in step (iii) of the protocol, one can adapt the protocol to the given state.

All in all, we use two approaches to find better sequences. The first approach is to find an optimal sequence which allows a high noise tolerance and will be applied later without further observation of the statistics. The second approach uses two sequences, where we switch from one to the other depending on the measurement outcomes during the process. The first approach helps to find sequences which are more efficient also for the purification of states with a low noise level. The second approach gives a method to purify states which would not be purifiable otherwise.

Table I. Sequences S1, S2, approximate weight vectors ⃗a, and bounds b for states with three kinds of noise. Explanation see text.

       | Ewn(ρ, p)          | Edeph(ρ, p)        | Edepo(ρ, p)
  S1   | ABC-CBA-ABC        | ABC-CBA-CBA        | ABC-CAB-BCA
  S2   | BAB-CAB-ABA        | CCC-ACB-CBC        | BBB-BCB-BBB-BAB
  ⃗a    | (0.33, 0.35, 0.32) | (0.35, 0.43, 0.21) | (0.35, 0.34, 0.31)
  b    | 0.35               | 0.39               | 0.44

To find an advantageous sequence in the first approach, we consider input states which are slightly too noisy to be purified with the standard sequence from [21]. We need sufficiently many states, so that we can estimate the probability to measure “±1” in step (iii) of the protocol. If the purification works, the probability to measure “−1” tends to zero. Otherwise it tends to 0.5. Knowing the probability at each step of the protocol, and therefore on which party the purification fails, we can update our sequence such that the new sequence gives an advantage to the party which failed before. This process can be repeated until we do not find a better sequence of a certain length. We restricted ourselves to sequences of length nine. The best sequence we find in this way we call S1.

With the second approach, we give a way to purify states which cannot be purified by the sequence S1 because their initial fidelity is slightly beyond the threshold. We start using sequence S1 and switch to sequence S2 depending on the measurement outcomes of step (iii). Our switching condition is the following: after each measurement of step (iii), we evaluate the probability to measure “−1” for the given party.
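The switching criterion described above and in the next paragraph (⃗a · ⃗x exceeding a bound b) is a one-line test. A hedged sketch, using the white-noise weights and bound from Table I:

```python
def should_switch(x, a, b):
    """Switch from S1 to S2 once the weighted sum a.x exceeds the bound b.
    x: last three estimated probabilities of outcome "-1" (x[2] newest);
    a: weight vector; b: bound (values from Table I)."""
    return sum(ai * xi for ai, xi in zip(a, x)) > b

# White-noise column of Table I: a = (0.33, 0.35, 0.32), b = 0.35.
a, b = (0.33, 0.35, 0.32), 0.35
assert not should_switch((0.0, 0.0, 0.0), a, b)   # "-1" probability tending to 0: keep S1
assert should_switch((0.5, 0.5, 0.5), a, b)       # tending to 0.5: purification fails, switch
```

How the three probabilities are estimated from repeated runs, and how ties between parties are handled, is not specified here and left out of the sketch.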
Based on the last three probabilities associated to the same party, we take a decision to switch or not. For ⃗x the vector of these three probabilities, where x3 is the newest probability, we switch if the product ⃗a·⃗x exceeds a bound b, where ⃗a is a weight vector.

To see the efficiency of our methods, we consider different noise models. We analyze the influence of global white noise described by the channel

Ewn(ρ, p) = p ρ + (1 − p)/2^n · 1,   (10)

where n is the number of qubits. In this section, the number of qubits is n = 3. We further analyse local noise channels given by E(ρ, p) = ∏_{i=1}^n Ei(ρ, p), where Ei is either the dephasing channel

Ei^deph(ρ, p) = p ρ + (1 − p)/2 · (ρ + Zi ρ Zi)   (11)

or the depolarizing channel

Ei^depo(ρ, p) = p ρ + (1 − p)/4 · (ρ + Xi ρ Xi + Yi ρ Yi + Zi ρ Zi).   (12)

The sequences, weight vectors and bounds we found to be optimal are given in Table I. To compare the approaches, we give in Table II the noise thresholds found in Ref. [21], obtained by our sequence S1, and obtained by the adaptive approach.

              pmin from [21]   pmin from S1   pmin from adaptive protocol
Ewn(ρ, p)     0.6007           0.5878         0.5876
Edeph(ρ, p)   0.8013           0.7803         0.7747
Edepo(ρ, p)   0.8136           0.8136         0.8132

Table II. Noise thresholds pmin reproduced from Ref. [21], gained from our sequence S1 (see Table I), and for the adaptive approach. In the case of Edepo(ρ, p) we found that the sequence from Ref. [21] was already the best sequence of length 9. Therefore there is no improvement of pmin in this case.

The sequences we found are also better in other respects. If we apply the new sequence S1 nine rounds on given input states, we see that the output states have a higher fidelity than after purifying the same state nine rounds using the sequence given in Ref. [21].
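For concreteness, the three noise channels of Eqs. (10)–(12) can be sketched in a few lines of NumPy. This is a minimal illustration, not code from the paper; the function names (`op_on`, `white_noise`, `dephase`, `depolarize`) are our own.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def op_on(i, op, n):
    """Embed the single-qubit operator `op` on qubit i of an n-qubit system."""
    return reduce(np.kron, [op if k == i else I2 for k in range(n)])

def white_noise(rho, p):
    """Global white noise, Eq. (10): p*rho + (1 - p)/2^n * identity."""
    d = rho.shape[0]
    return p * rho + (1 - p) / d * np.eye(d, dtype=complex)

def dephase(rho, p, i, n):
    """Local dephasing channel on qubit i, Eq. (11)."""
    Zi = op_on(i, Z, n)
    return p * rho + (1 - p) / 2 * (rho + Zi @ rho @ Zi)

def depolarize(rho, p, i, n):
    """Local depolarizing channel on qubit i, Eq. (12)."""
    Xi, Yi, Zi = (op_on(i, O, n) for O in (X, Y, Z))
    return p * rho + (1 - p) / 4 * (rho + Xi @ rho @ Xi + Yi @ rho @ Yi + Zi @ rho @ Zi)
```

For a pure state ρ = |ψ⟩⟨ψ|, the white-noise channel gives the fidelity ⟨ψ|Ewn(ρ, p)|ψ⟩ = p + (1 − p)/2^n.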
B. Recycling of Discarded States

If one wishes to purify a state using the CKDdV protocol, one needs a high number of input states in order to obtain one state of a certain fidelity. Let us count how many states we need to have one state after applying the protocol once. In step (0) of the protocol, one takes two input states. One does not lose states by applying cnot in step (i). By applying the reduction operator Pv1,v2, approximately 1/2 of the pairs are lost. Since this operator is applied on two parties in step (ii), one needs approximately four pairs. In step (iii), one measures outcome "+1" with a probability ⩽ 1. This probability depends on the fidelity of the states and increases with increasing fidelity.
So, in total, approximately 8 = 2^3 input states are required to obtain one output state. To prepare a state for which we need to apply the protocol m times, we need more than 8^m input states. To purify, for example, a state of initial fidelity 0.93 to a state of fidelity 0.994, we need three steps. The required number of input states to obtain one output state is roughly 8.7^3 ≈ 660. If we want to purify the same state to a fidelity of 0.999, which we reach after six steps, we need about 8.38^6 ≈ 346 000 input states to get one new state.
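The resource estimates above reduce to simple arithmetic; the sketch below only rechecks the quoted totals (the effective per-round overheads 8.7 and 8.38 are taken from the text, not derived here).

```python
# Resource counting for the CKDdV protocol: roughly 8 = 2^3 input states
# per round, hence more than 8^m inputs for m rounds.  With the effective
# per-round overheads quoted in the text, the stated totals follow.
per_round_3 = 8.7    # effective overhead per round for the three-step example
per_round_6 = 8.38   # effective overhead per round for the six-step example

inputs_3_steps = per_round_3 ** 3   # ~660 inputs for one output state
inputs_6_steps = per_round_6 ** 6   # ~346 000 inputs for one output state
print(round(inputs_3_steps))
print(round(inputs_6_steps))
```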
It is natural to try to use the available quantum states more efficiently. In step (ii) of the CKDdV protocol, one performs a projective measurement and considers only one outcome, namely Pv1,v2, which we get with probability approximately 1/2. We suggest using the states which were discarded because we measured something different from Pv1,v2. The second reduction operator P⊥v1,v2 is perpendicular to Pv1,v2 and defined as

P⊥v1,v2 = |0⟩⟨10| + |1⟩⟨01| = Pv1,v2 (Xv1 ⊗ 1v2).   (13)

Like Pv1,v2, the operator P⊥v1,v2 is a positive map. It maps two qubits, which are in different states, to one qubit. This can be seen as a different measurement outcome than Pv1,v2, or one may interpret the set {Pv1,v2, P⊥v1,v2} as a quantum instrument. In the original CKDdV protocol one keeps the state only after measuring Pb1,b2 Pc1,c2.
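Equation (13) can be checked directly with small matrices. We take Pv1,v2 = |0⟩⟨00| + |1⟩⟨11| for the reduction operator (the form introduced earlier in the paper; it also follows from Eq. (13) itself, because (X ⊗ 1) squares to the identity).

```python
import numpy as np

# One-qubit computational basis vectors |0>, |1>.
ket0, ket1 = np.eye(2)

def bra2(s):
    """Row vector <s1 s2| in the two-qubit basis |00>, |01>, |10>, |11>."""
    v = {"0": ket0, "1": ket1}
    return np.kron(v[s[0]], v[s[1]])

# Reduction operator P = |0><00| + |1><11|.
P = np.outer(ket0, bra2("00")) + np.outer(ket1, bra2("11"))
# Perpendicular reduction operator of Eq. (13).
P_perp = np.outer(ket0, bra2("10")) + np.outer(ket1, bra2("01"))

X = np.array([[0, 1], [1, 0]])
# Identity of Eq. (13): P_perp = P (X (x) 1).
assert np.allclose(P_perp, P @ np.kron(X, np.eye(2)))
# The "quantum instrument" remark made concrete: the two outcomes
# together resolve the two-qubit identity.
assert np.allclose(P.T @ P + P_perp.T @ P_perp, np.eye(4))
```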
There are three more possible measurement outcomes: Pb1,b2 P⊥c1,c2, P⊥b1,b2 Pc1,c2, and P⊥b1,b2 P⊥c1,c2. In the cases of measuring P⊥v1,v2 on at least one party, one obtains a post-measurement state on which one can apply some corrections to get a state which is similar to the input state. One can collect these states and further purify them. So, one can write down a modified version of the CKDdV protocol. Here, we give the sub-protocol which reduces noise on Alice's qubits. The sub-protocols for Bob and Charlie work equivalently.

Protocol 2 (Improved CKDdV protocol).
(0) Alice, Bob, and Charlie share two copies of a state.
(i) Alice applies a local cnot_a1,a2 gate on her qubits.
(ii) Bob and Charlie perform a measurement on their qubits and measure the local reduction operators Pv1,v2 and P⊥v1,v2. If the measurement outcome for Bob and Charlie was Pv1,v2, continue with step (iiia). Else, continue with (iiib).
(iiia) After Bob and Charlie both measured Pv1,v2, Alice measures qubit a1 in the σx basis. She keeps the state if the outcome is "+1", and discards it otherwise.
(iiib) After measuring P⊥v1,v2 on at least one pair of Bob and Charlie's qubits, Alice measures her qubit a1 in the σz basis. If she measures "+1", she keeps the state as it is. Otherwise, Bob and Charlie apply some local unitaries, which depend on the combinations of measurement outcomes in step (ii) and are given in Table III.

The key idea is that output states from step (iiib) can be collected and further purified.
In case of measuring P⊥v1,v2 on at least one party, the protocol gives us a transition

|Hi,j,k⟩ |Hi′,j′,k′⟩ → |Hi′,j+j′,k+k′⟩.   (14)

The resulting state has in general a lower fidelity than the input state. This is caused by the same "copying" of noise discussed before. Since in the considered case the protocol does not reduce noise, the fidelity drops.

Measurement          local correction   local correction
outcomes             Bob                Charlie
Pb1,b2 P⊥c1,c2       Z                  1
P⊥b1,b2 Pc1,c2       1                  Z
P⊥b1,b2 P⊥c1,c2      Z                  Z

Table III. In Protocol 2 step (iiib), Alice measures her qubit a1 in the Z basis. If her outcome is "−1", Bob and Charlie have to apply local corrections to their qubits. The local corrections depend on their measurement outcomes from step (ii) and are given in this table.
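The branching of step (iiib) together with Table III amounts to a small lookup. The encoding below (one boolean per party, `True` meaning the perpendicular outcome P⊥ was measured) and the helper name are our own sketch.

```python
# Local corrections from Table III, applied when Alice's sigma_z outcome
# in step (iiib) is -1.  Keys: (Bob measured P_perp, Charlie measured P_perp).
CORRECTIONS = {
    (False, True):  ("Z", "1"),   # P_b  P_perp_c : Bob applies Z
    (True,  False): ("1", "Z"),   # P_perp_b  P_c : Charlie applies Z
    (True,  True):  ("Z", "Z"),   # P_perp on both: both apply Z
}

def step_iiib_corrections(alice_outcome, bob_perp, charlie_perp):
    """Return the (Bob, Charlie) local corrections for Protocol 2, step (iiib).

    The case (False, False), both parties measuring P, belongs to step
    (iiia) and is therefore not handled here.
    """
    if alice_outcome == +1:
        return ("1", "1")          # Alice measured +1: keep the state as it is
    return CORRECTIONS[(bob_perp, charlie_perp)]
```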
The first case is shown in Figure 6.

[Figure 6 omitted: diagram of the modified protocol, showing step (i) cnot1,4, step (ii) with outcomes P2,5 and P⊥3,6, and step (iiib) with outcomes σz(1) = +1 and σz(1) = −1 followed by the correction Z5.]

Figure 6. Modified Protocol 2 for the same initial states as shown in Figure 5, for the case of measuring Pb1,b2 P⊥c1,c2 in step (ii). Alice performs a σz(1)-measurement on her qubit 1 of the state in the second row. If she gets outcome "+1" in step (iiib), the resulting state is the same as the initial state (qubits 4, 5 and 6). If she gets outcome "−1", Bob's qubit 5 has a decoration, which he needs to correct. After Bob applied a local Z5 unitary on qubit 5, again the resulting state is the same as the initial state (qubits 4, 5 and 6). Note that this is only the case if there is no noise on qubits 2 and 3, as shown in this figure.
In general one obtains the state given in Equation (14). An example for Protocol 2 is shown in Figure 6, where we assume the case that Bob measures P2,5 and Charlie measures P⊥3,6. In this case, the local correction after measuring outcome "−1" is applying a unitary Z5 at qubit 5.

Given a certain number of input states which we want to purify to a target fidelity, we obtain more output states of the desired fidelity if we follow Protocol 2 instead of the original CKDdV protocol. The effect in the cases we tested turned out, however, to be small. As input states, we chose the state |H000⟩⟨H000| mixed with white noise. We first applied Protocol 1 three times, that is, once on each party, and computed the fidelity F3 of the output states. Then, we applied Protocol 2 on the same input states and compared how many more output states of fidelity ⩾ F3 we get.

[Figure 7 omitted: plot of the increase in the number of output states (ranging from about 3.00 to 5.00) against the initial fidelity F0, for F0 between 0.9800 and 1.0000.]

Figure 7. Effect of using Protocol 2 instead of the original CKDdV protocol. The input states are given by Ewn(|H0⟩⟨H0|, p). We first apply Protocol 1 three times and compute the fidelity F3 of the output states. Then, we apply Protocol 2 on the same input states and compare how many more output states of fidelity ⩾ F3 we get. The figure displays the increase of output states by using Protocol 2, depending on the fidelity F0 of the input states.
In Figure 7 we show how much the number of output states increases by using Protocol 2, depending on the fidelity F0 of the input states. In the chosen cases, we get approximately 4% more output states from using Protocol 2 instead of the CKDdV protocol.

V. GENERALISATION TO MORE QUBITS

The methods described here can also be applied to states with more qubits and different arrangements of edges. We restrict our attention to hypergraphs which are k-regular and k-colorable. A hypergraph is k-regular if all edges e ∈ E have order k, and it is k-colorable if it is possible to color the vertices of the hypergraph using k colors such that no two vertices of the same color share a common edge. For example, the hypergraph states shown in Figures 2 and 8 are 3-colorable and 3-regular. In this section we discuss purification protocols for hypergraph states of more than 3 qubits which are 3-colorable and 3-regular.
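Both defining properties are easy to check programmatically. The sketch below verifies them for the linear hypergraph on six qubits from Figure 8, using the coloring A, B, C, A, B, C; the helper names are ours.

```python
def is_k_regular(edges, k):
    """A hypergraph is k-regular if every edge has order (size) k."""
    return all(len(e) == k for e in edges)

def is_valid_coloring(edges, coloring):
    """No two vertices of the same color may share a common edge,
    i.e. within every edge all colors are pairwise distinct."""
    return all(len({coloring[v] for v in e}) == len(e) for e in edges)

# Linear 3-regular hypergraph on six qubits: edges {1,2,3}, ..., {4,5,6},
# with the 3-coloring A, B, C, A, B, C used in the text.
edges = [{1, 2, 3}, {2, 3, 4}, {3, 4, 5}, {4, 5, 6}]
coloring = {1: "A", 2: "B", 3: "C", 4: "A", 5: "B", 6: "C"}
assert is_k_regular(edges, 3)
assert is_valid_coloring(edges, coloring)
```

Note that verifying a given coloring is cheap; finding a k-coloring in general is a hard combinatorial problem, which is why we only check the coloring stated in the text.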
In the following, we will denote the colors by A, B, and C.

State           pmin from SCKDdV   pmin from S1   sequence S1
Ewn(ρ3, p)      0.6007             0.5878         ABC-CBA-ABC
Ewn(ρ4, p)      0.4633             0.4396         ABC-ACB-BCA
Ewn(ρ5, p)      0.3901             0.3486         ABC-ABC-CBA
Ewn(ρ6, p)      0.3341             0.3017         ABC-ACB-BAC*
Edeph(ρ3, p)    0.8013             0.7803         ABC-CBA-CBA
Edeph(ρ4, p)    0.8014             0.7803         ABC-CBA-CBA*
Edeph(ρ5, p)    0.8014             0.7803         ABC-CBA-CBA*
Edeph(ρ6, p)    0.8014             0.7803         ABC-CBA-CBA*
Edepo(ρ3, p)    0.8137             0.8136         ABC-CAB-BCA
Edepo(ρ4, p)    0.8306             0.8122         BAC-CBA-CAB
Edepo(ρ5, p)    0.8358             0.8128         ACB-BCA-CBA
Edepo(ρ6, p)    0.8144             0.8121         ABC-CBA-CAB

Table IV. Noise thresholds pmin for the sequence SCKDdV proposed in Ref. [21] and the new sequences S1. The index of the state gives the number of qubits. In the case of Edepo(ρ3, p) we found that the sequence from Ref. [21] was already the best sequence of length 9. Therefore there is no improvement of pmin. When we found (non-trivially) different sequences of the same length, we marked them with a star (*).

The protocols can be generalised by letting all parties holding qubits of color A do what was described for Alice before.
In the same way, parties holding a qubit of color B or C do what was described for Bob or Charlie, respectively. For an explicit formulation of the generalized protocol, see Ref. [21]. We analysed linear three-colorable states with up to six qubits under the influence of global white noise, dephasing and depolarisation. That is, the states to which we want to purify are U123 U234 |+⟩⊗4, U123 U234 U345 |+⟩⊗5, and U123 U234 U345 U456 |+⟩⊗6, as shown in Figure 8. We compare the noise threshold pmin for the sequence proposed in Ref. [21] with the new sequences S1, found using the methods described in Section IV A. Our results are shown in Table IV. One sees that in the case of white noise for more qubits, the differences in the noise threshold pmin become more significant.
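These target states can be built numerically, assuming, as is standard for hypergraph states, that each hyperedge gate Uijk is the three-qubit CCZ gate: diagonal, with a −1 on the basis states where all three qubits of the edge are 1. The helpers below are our own sketch.

```python
import numpy as np

def ccz_diag(n, edge):
    """Diagonal of the CCZ gate on the (1-indexed) qubits in `edge`,
    acting on n qubits; qubit 1 is the most significant bit."""
    d = np.ones(2 ** n)
    for b in range(2 ** n):
        if all((b >> (n - q)) & 1 for q in edge):
            d[b] = -1.0
    return d

def hypergraph_state(n, edges):
    """|H> = prod_e CCZ_e |+>^(x n): a real vector with entries +-2^(-n/2)."""
    psi = np.full(2 ** n, 2 ** (-n / 2))
    for e in edges:
        psi *= ccz_diag(n, e)
    return psi

# The four-qubit target state U123 U234 |+>^(x 4) from the text.
psi4 = hypergraph_state(4, [(1, 2, 3), (2, 3, 4)])
assert abs(np.linalg.norm(psi4) - 1) < 1e-12
```

Combined with the white-noise channel of Eq. (10), the fidelity of the noisy state with respect to |H⟩ is p + (1 − p)/2^n, which is the quantity bounded by the thresholds in Table IV.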
Therefore, especially in these cases it is more relevant to find good sequences. For the tested states with dephasing and depolarisation noise, the noise threshold is constant or varies only slightly, respectively.

VI. CONCLUSION AND OUTLOOK

In this paper we discussed protocols for entanglement purification of hypergraph states. First, we reformulated the CKDdV protocol in a graphical language. This offers a new way to understand the protocol; furthermore, it allows one to search for systematic extensions. Consequently, we introduced several improvements of the original protocol. These improvements are based on different sequences, adaptive schemes, as well as methods to recycle some of the unused states.

Figure 8. Linear 3-colorable and 3-regular hypergraph states with 4, 5, and 6 qubits. The colors are denoted by A, B, and C. Note that two qubits which have the same color, for example qubits 1 and 4, still belong to different parties. Since we are restricted to local operations, we can only perform operations on qubits of the same party, that is, in general not on qubits of the same color.

While these modifications are conceptually interesting and can indeed improve the performance in various examples, the amount of the improvement in realistic examples seems rather modest. The problem of finding efficient sequences is also relevant for purification protocols for other states and was raised for example in Ref. [4] in the context of two-colorable graph states. The methods developed here can be applied to this case, but also to all purification protocols which follow the concept introduced by Bennett et al. [1]. A further open question is how the effects of our methods scale with the number of qubits. Another open question is whether Protocol 2 can be further improved so that the effect gets more significant.

VII. ACKNOWLEDGMENTS

We thank Mariami Gachechiladze, Kiara Hansenne, Jan L. Bönsel, and Fabian Zickgraf for discussions. This work was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation, project numbers 447948357 and 440958198), the Sino-German Center for Research Promotion (Project M-0294), the ERC (Consolidator Grant 683107/TempoQ), the German Ministry of Education and Research (Project QuKuK, BMBF Grant No.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' 16KIS1618K) and the Stiftung der Deutschen Wirtschaft.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' [1] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Bennett, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Bernstein, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Popescu, and B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Schu- macher, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' A 53, 2046 (1996).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' [2] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Bennett, D.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' DiVincenzo, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Smolin, and W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Wootters, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' A 54, 3824 (1996).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' [3] D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Deutsch, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Ekert, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Jozsa, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Macchiavello, S.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Popescu, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Sanpera, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' 77, 2818 (1996).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' [4] H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Aschauer, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Dür, and H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content='-J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Briegel, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' A 71, 012319 (2005).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' [5] C.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Kruszynska, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Miyake, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Briegel, and W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Dür, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' A 74, 052316 (2006).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' [6] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Miyake and H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Briegel, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Lett.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' 95, 220501 (2005).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' [7] W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Dür and H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Briegel, Reports on Progress in Physics 70, 1381 (2007).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' [8] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Hein, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Eisert, and H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Briegel, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' A 69, 062311 (2004).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' [9] C.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Kruszynska and B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Kraus, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' A 79, 052304 (2009).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' [10] R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Qu, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Wang, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content='-s.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Li, and Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content='-r.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Bao, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' A 87, 022311 (2013).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' [11] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Rossi, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Huber, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Bruß, and C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Macchiavello, New J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' 15, 113022 (2013).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' [12] P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Shor, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' A 52, R2493 (1995).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' [13] T.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Wagner, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Kampermann, and D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Bruß, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' A Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Theor.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' 51, 125302 (2018).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' [14] R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Raussendorf and H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Briegel, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Lett.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' 86, 5188 (2001).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' [15] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Gachechiladze, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Gühne, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Miyake, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' A 99, 052304 (2019).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' [16] V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Scarani, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Ací n, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Schenck, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Aspelmeyer, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Rev.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' A 71, 042325 (2005).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' [17] O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Gühne, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Tóth, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Hyllus, and H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Briegel, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' 95, 120405 (2005).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' [18] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Gachechiladze, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Budroni, and O.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Gühne, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' 116, 062321 (2016).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' [19] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Morimae, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Takeuchi, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Hayashi, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' A 96, 062321 (2017).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' [20] F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Baccari, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Augusiak, I.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Š upić, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Tura, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Acín, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' 124, 020402 (2020).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' [21] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Carle, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Kraus, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Dür, and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' de Vicente, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Rev.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' A 87, 012328 (2013).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' [22] O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Gühne, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Cuquet, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Steinhoff, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Moroder, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Rossi, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Bruß, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Kraus, and C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Macchiavello, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Phys.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' A Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Theor.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' 47, 335303 (2014).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' [23] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Gachechiladze, Quantum Hypergraph States and the Theory of Multiparticle Entanglement, Dissertation, Uni- versity of Siegen (2019).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' [24] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Gachechiladze, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Tsimakuridze, and O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Gühne, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' A Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/nNFIT4oBgHgl3EQfuCsf/content/2301.11341v1.pdf'} +page_content=' Theor.' 
diff --git a/ndAzT4oBgHgl3EQfqf0O/content/tmp_files/2301.01628v1.pdf.txt b/ndAzT4oBgHgl3EQfqf0O/content/tmp_files/2301.01628v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..673dad0fcc470e53704e12dee93d67eb88ecc78d
--- /dev/null
+++ b/ndAzT4oBgHgl3EQfqf0O/content/tmp_files/2301.01628v1.pdf.txt
@@ -0,0 +1,1717 @@
Task-Effective Compression of Observations for the Centralized Control of a Multi-agent System Over Bit-Budgeted Channels
Arsham Mostaani, Student Member, IEEE, Thang X. Vu, Senior Member, IEEE, Symeon Chatzinotas, Fellow, IEEE, and Björn Ottersten, Fellow, IEEE
Abstract—We consider a task-effective quantization problem that arises when multiple agents are controlled via a centralized controller (CC). While the agents have to communicate their observations to the CC for decision-making, the bit-budgeted communications of the agent-CC links may limit the task-effectiveness of the system, which is measured by the system's average sum of stage costs/rewards. As a result, each agent should compress/quantize its observation such that the average sum of stage costs/rewards of the control task is minimally impacted. We address the problem of maximizing the average sum of stage rewards by proposing two different Action-Based State Aggregation (ABSA) algorithms that carry out the indirect and joint design of control and communication policies in the multi-agent system. While the applicability of ABSA-1 is limited to single-agent systems, it provides an analytical framework that acts as a stepping stone to the design of ABSA-2. ABSA-2 carries out the joint design of control and communication for a multi-agent system.
We evaluate the algorithms - with the average return as the performance metric - using numerical experiments performed to solve a multi-agent geometric consensus problem. The numerical results are concluded by introducing a new metric that measures the effectiveness of communications in a multi-agent system.
Index Terms—Semantic communications, task-effective data compression, goal-oriented communications, communications for machine learning, multi-agent systems, reinforcement learning.
I. INTRODUCTION
As 5G is rolling out, a wave of new applications such as the Internet of Things (IoT), the Industrial Internet of Things (IIoT) and autonomous vehicles is emerging. It is projected that by 2030, approximately 30 billion IoT devices will be connected [1]. With the proliferation of non-human types of connected devices, the focus of communications design is shifting from traditional performance metrics, e.g., the bit error rate and latency of communications, to semantic and task-oriented performance metrics such as the meaning/semantic error rate [2], [3] and the timeliness of information [4]. To evaluate how efficiently network resources are being utilized, one could traditionally measure the sum rate of a network, whereas in the era of cyber-physical systems, given the resource constraints of the network, we want to understand how effectively one can conduct a (number of) task(s) in the desired way [5], [6]. We are witnessing a paradigm shift in communication systems in which the targeted performance metrics of the traditional systems are no longer valid. This imposes new grand challenges in designing communications towards eventual task-effectiveness [6].
The authors are with the Centre for Security Reliability and Trust, University of Luxembourg, Luxembourg.
Emails: {arsham.mostaani, thang.vu, symeon.chatzinotas, bjorn.ottersten}@uni.lu
This work is supported by the European Research Council (ERC) via the project AGNOSTIC (Grant agreement ID: 742648).
Figure 1. Task-effective communications for a) an estimation vs. b) a control task - the orange dashed box is detailed in Fig. 2 and Fig. 3.
This line of research is also driven partly by the success of new machine learning technologies/algorithms under the title of "emergent communications" in multi-agent systems [7]. The transfer of these new technologies/ideas to communication engineering is anticipated to have a disruptive effect on multiple domains of communication system design.
According to Shannon and Weaver, communication problems can be divided into three levels [8]: (i) the technical problem: given channel and network constraints, how accurately can the communication symbols/bits be transmitted? (ii) the semantic problem: given channel and network constraints, how accurately can the communication symbols deliver the desired meaning? (iii) the effectiveness problem: given channel and network constraints, how accurately can the communication symbols help to fulfil the desired task? While traditional communication design addresses the technical problem, recently the semantic problem [2], [3], [5], [9], [10] as well as the effectiveness problem [6], [11]–[18] have attracted extensive research interest.
In contrast to Shannon's technical-level communication framework, semantic communication can enhance performance by exploiting prior knowledge shared between the source and the destination [4], [19]. Semantic-based designs, however, are not necessarily task-effective [20].
One can design transmitters which compress the data with the least possible compromise on the semantic meaning being transmitted [2], [3], while the transmission can remain task-unaware [21]. In contrast to semantic-level and technical-level communication design, the performance of a task-effective communication system is ultimately measured in terms of the average return/cost linked to the task [11]. In the (task-)effectiveness problem, we are concerned not only with the communication of meaning but also with how the message exchange helps the receiving end to improve its performance in the expected cost/reward of an estimation task [4], [13], [14], [16], [22] or a control task [11], [12], [14], [17], [18], [23], [24].
arXiv:2301.01628v1 [cs.IT] 4 Jan 2023
There are fundamental differences between the design of task-effective communications for an estimation vs. a control task - Fig. 1. (i) In the latter, each agent can produce a control signal that directly affects the next observations of the agent. Thus, in control tasks the source of information - the local observations of the agent - is often a stochastic process with memory, e.g., a linear or Markov decision process [11], [17], [18]. In estimation tasks, however, the source of information is often assumed to be an i.i.d. stochastic process [13], [16], [22]. (ii) In control tasks, a control signal often has a long-lasting effect on the state of the system, beyond a single stage/time step; e.g., a control action can result in lower expected rewards in the short run but higher expected rewards in the long run. This makes control tasks intrinsically sensitive to the time horizon for which the control policies are designed.
Estimation tasks, specifically when the observation process is i.i.d., can be solved in a single stage/time step, since there is no influence from the solution of one stage/time step on another, i.e., each time step can be solved separately [22], [25]. (iii) The cost function for estimation tasks is often in the form of a difference/distortion function, while in control tasks it can take many other forms.
In this paper, we focus on the effectiveness problem for control tasks. In particular, we investigate the distributed communication design of a multi-agent system (MAS) with the ultimate goal of maximizing the expected summation of per-stage rewards, also known as the expected return. Multiple agents select control actions and communicate in the MAS to accomplish a collaborative task with the help of a central controller (CC), i.e., the communication network topology of the MAS is a star topology with the hub node being the central controller and the peripheral nodes being the agents - Fig. 2. The considered system architecture can find applications in several domains such as the Internet of Things, emerging cyber-physical systems, real-time interactive systems, vehicle-to-infrastructure communication [26] and collaborative perception [27].
A. Related works: Task-effective communications for control tasks
Authors in [11], [12], [14], [17], [18], [23], [24] consider task-effective communication design under different settings. While [12] utilizes task-effective communication design for the specific problem of designing application-tailored protocols over perfect communication channels, the communication channel is considered to be imperfect in [11], [14], [17], [18], [23], [24]. Authors in [14] provide algorithmic contributions to the design of task-effective joint source-channel coding for single-agent systems.
Task-effective joint source and channel coding for MASs is targeted by [11], [14], [17], whereas [18], [23] are focused on task-effective data compression and quantization.
Figure 2. Communication topology and its applicable scenarios: a) centralized control of an MAS with collocated actuators and sensors; b) distributed sensing with a single controller collocated with a single actuator. The orange dashed box details the same box in Fig. 1 and Fig. 3.
Similar to the current paper, a star topology for the inter-agent communication is considered in [11], [12], whereas [12] assumes perfect communications between the hub node and the peripherals and [11] assumes imperfect communication channels on the downlink of the peripheral nodes. In contrast to all the above-mentioned work, this paper is - to the best of our knowledge - the first to study the star topology with an imperfect (bit-budgeted) uplink (agent-to-hub) channel - Fig. 2. Accordingly, each agent observes the environment and communicates an abstract version of its local observation to the CC via imperfect (bit-budgeted) communication channels - the red links in Fig. 2. Subsequently, the CC produces control actions that are communicated to the agents via perfect communication channels - the black links in Fig. 2. The control actions are selected by the CC such that they maximize the average return of the collaborative task, where the return is a performance metric linked to the accomplishment of the task.
B. Contributions
In our earlier work [18], we developed a generic framework to solve task-oriented communication problems for a multi-agent system (MAS) with full mesh connectivity.
The current work can be considered an adaptation of that framework to a new problem setting for the design of task-effective communications, in which the agents follow a star network topology for their connectivity. In this direction, the current work extends the applicability of the proposed framework beyond the specific problem that was solved in [18] and provides further insights into how the framework can be used in wider terms and under a wider range of settings. In particular, the contributions of this work are listed below.
• Firstly, we consider a novel problem setting in which an MAS is controlled via a central controller that has access to the agents' local observations only through bit-budgeted distributed communications. This problem setting can be used in collaborative perception systems as well as vehicle-to-infrastructure communications, which cannot be addressed by the problem settings investigated in the prior art.
• Secondly, our analytical studies establish the relationship between the considered joint communication and control design problem and conventional data quantization problems. In particular, Lemma 1 shows how the problem approached in this paper is a generalized version of conventional data quantization. This formulation is useful as it helps to find an exact solution to the problem under stronger conditions via ABSA-1 and under milder conditions via ABSA-2.
• Moreover, our analytical studies help us to craft an indirect1 task-effective data quantization algorithm - ABSA-2. Designing a task-effective data quantization for ABSA-2 can equivalently be translated as an indirect approach to feature selection for an arbitrary deep Q-network. Relying on the analysis carried out for ABSA-1, ABSA-2 designs distributed and bit-budgeted communications between the agents and the CC. ABSA-2 is seen to approach optimal performance by increasing the memory of the CC.
In fact, increasing the memory of the CC leads to higher computational complexity. Therefore, ABSA-2 is said to strike a trade-off between computational complexity and task efficiency.
• Numerical experiments are carried out on a geometric consensus task to evaluate the performance of the proposed schemes in terms of the optimality of the MAS's expected return in the task. ABSA-1 and ABSA-2 are compared with several other benchmark schemes introduced by [18], in a multi-agent2 scenario with local observability and bit-budgeted communications.
• Finally, we introduce a new metric, called task-relevant information, for the measurement of effectiveness in task-oriented communication policies which - in comparison with existing metrics such as positive listening and positive signalling - better explains the behaviour of a variety of task-effective communication schemes. The proposed metric is capable of measuring the effectiveness of a task-oriented communication/compression policy without the need to jointly design a control policy and test the jointly designed policies on the desired task.
C. Technical approach
Our goal is to perform an efficient representation of the agents' local observations to ensure meeting the bit-budget of the communication links while minimizing the effect of
1By an indirect algorithm here we mean an approach that is not dependent on our knowledge of a particular task. Indirect approaches are applicable to any (or a wide range of) tasks. In contrast to indirect schemes, we have direct schemes that are specifically designed for a niche application [16]. As defined by [6]: "the direct schemes aim at guaranteeing or improving the performance of the cyber-physical system at a particular task by designing a task-tailored communication strategy".
2Due to the complexity-related issues explained in Section IV, the numerical results are limited to two-agent and three-agent scenarios.
quantization on the average return of the task. To achieve this, we first need to design task-effective data quantization policies for all agents. In task-effective data quantization, one needs to take into account the properties of the average return function and the optimal control policies associated with the task [15]. In addition to the design of the quantization policies for all agents, we also need the control policy of the CC to be capable of carrying out near-optimal decision-making despite its mere access to the quantized messages - resulting in a joint control and data compression problem. We formulate the joint control and data compression problem as a generalized form of data compression: task-oriented data compression (TODC). Following this novel problem formulation, we propose two indirect action-based state aggregation (ABSA) algorithms: (i) ABSA-1 provides an analytical proof for a task-effective quantization, i.e., one with optimal performance in terms of the expected return. In this direction, ABSA-1 relaxes the assumption of the lumpability of the underlying MDP [18, Condition 6], based on which the performance guarantees of the method proposed in [18] were established. Since ABSA-1 is only applicable when the system is composed of one agent and the CC, we also propose ABSA-2. Following the analytical results of ABSA-1, using MAP estimation to relax the aforementioned limitation of ABSA-1, and benefiting from a DQN controller at the CC, ABSA-2 is introduced as a more general approach. (ii) ABSA-2 solves an approximated version of the TODC problem and carries out the quantization for any number of agents communicating with the CC. Thanks to a deep Q-network controller utilized at the CC, ABSA-2 can solve more complex problems where the controller benefits from a larger memory. Thus, ABSA-2 allows trading complexity for communication efficiency and vice versa.
Finally, we evaluate the performance of the proposed schemes on a specific task: a geometric consensus problem under finite observability [28].
D. Organization
The rest of this paper is organized as follows. Section II describes the MAS and states the joint control and communication problem. Section III proposes two action-based state aggregation algorithms. Section IV shows the performance of the proposed algorithms on a geometric consensus problem. Finally, Section V concludes the paper. For the reader's convenience, a summary of the notation that we follow in this paper is given in Table I. Bold font is used for matrices or scalars which are random; their realizations follow simple font.
II. SYSTEM MODEL AND PROBLEM STATEMENT
The problem setting we introduce here can be used to analyse both scenarios illustrated in Fig. 2. Nevertheless, to use our language consistently, we focus on scenario (a) of that figure throughout the manuscript. In particular, when we use the term "agent" we refer to an object which has all of the following hardware capabilities: sensing, actuation, communication and data processing. A MAS, however, may not be comprised of mere agents, but of a combination
The considered setting is similar to +conventional centralized control of MASs [18], [30], except +for the fact that the communications from the agents to +the CC are transmitted over a bit-budgeted communication +channel. The agent-hub communications are considered to be +instantaneous and synchronous [18]. This is in contrast with +the delayed [17], [31] and sequential/iterative communication +models [32]–[34]. We note that there is no direct inter-agent +communication in the considered system - communications +occur only between agents and the central controller. The +system runs on discrete time steps t. The observation of each +agent i at time step t is shown by oi(t) ∈ Ω and the state +s(t) ∈ S of the system is defined by the joint observations +s(t) ≜ ⟨o1(t), . . . , oN(t)⟩4 . The control action of each agent +i at time t is shown by mi(t) ∈ M, and the action vector +m(t) ∈ MN of the system is defined by the joint actions +m(t) ≜ ⟨m1(t), ..., mN(t)⟩. The observation space Ω, state- +space S, and action space M are all discrete sets. The environ- +ment is governed by an underlying5 Markov Decision Process +3In this work we follow a common assumption used in the networked +control literature [29] according to which the bit-budget only limits the uplink +communications of the agents and not their downlink. Accordingly, the agents +select their control actions as is dictated to them by the central controller. +4According to this definition, at any given time t the observations of any +two agent i, j ∈ N are linearly independent in the Euclidean space. The same +conditions are true for the control actions of arbitrary agents. +5As defined in the literature [10], the underlying MDP’ is the horizon-T ′ +MDP defined by a hypothetical single agent that takes joint actions m(t) ∈ +MN and observes the nominal state s(t) ≜ ⟨o1(t), . . . , oN(t)⟩ that has +the same transition model T(·) and reward model r(·) as the environment +experienced by our MAS. 
Table I. Table of notations.
x(t) - a generic random variable generated at time t
x(t) - realization of x(t)
X - alphabet of x(t)
|X| - cardinality of X
px(x(t)) - shorthand for Pr(x(t) = x(t))
H(x(t)) - information entropy of x(t) (bits)
X−x - X − {x}
Ep(x){x} - expectation of the random variable x over the probability distribution p(x)
tr(t) - realization of the system's trajectory at time t
Figure 3. Illustration of the interactions of the CC and agents for the control of the environment. The red links show the communication channels that are bit-budgeted - implying the local (and not global) observability of the CC. The orange dashed box details the same box in Fig. 1 and Fig. 2.
that is described by the tuple M = ⟨S, M^N, r(·), γ, T(·)⟩, where r(·) : S × M^N → R is the per-stage reward function and the scalar 0 ≤ γ ≤ 1 is the discount factor. The function T(·) : S × M^N × S → [0, 1] is a conditional probability mass function (pmf) which represents the state transitions such that T(s(t+1), s(t), m(t)) = Pr(s(t+1) | s(t), m(t)). According to the per-stage reward signals, the system's return within the time horizon T′ is denoted by
g(t′) = Σ_{t=t′}^{T′} γ^{t−1} r(o1(t), ..., oN(t), m1(t), ..., mN(t)).   (1)
While the system state is jointly observable by the agents [35], each agent i's observation oi(t) is local6. Once per time step, agent i ∈ N is allowed to transmit its local observations through a communication message ci(t) to the CC. The communications between the agents and the central controller are done in a synchronous (not sequential) and simultaneous (not delayed) fashion [17].
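As a concrete illustration of the return in (1), the short Python sketch below computes the discounted sum of per-stage rewards along a logged trajectory. The reward sequence and discount factor are illustrative values, not results from the paper.

```python
# Sketch: the return g(t') of Eq. (1), i.e. the gamma-discounted sum of
# per-stage rewards collected between stages t' and T'.
# The reward log and gamma below are illustrative, not from the paper.

def discounted_return(rewards, gamma, t_start=1):
    """Sum gamma**(t-1) * r(t) for t = t_start, ..., T',
    where `rewards` holds one per-stage reward per time step."""
    return sum(gamma ** (t - 1) * r
               for t, r in enumerate(rewards, start=t_start))

# Three stages with gamma = 0.9: 1.0 + 0.9 * 0.0 + 0.9**2 * 2.0
g = discounted_return([1.0, 0.0, 2.0], gamma=0.9)
```

Starting the exponent at t − 1 mirrors the γ^{t−1} weighting used in (1).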
Each agent i generates its communication message ci(t) by following its communication policy πc_i(·) : Ω → C. In parallel with all other agents, agent i follows the communication policy πc_i(·) to map its current observation oi(t) to the communication message ci(t), which will be received by the central controller in the same time step t. The codebook C is a set composed of a finite number of communication codewords c, c′, c′′, ..., c^(|C|−1) - we use the same notation to refer to the different members of the action, observation and state spaces too. The agents' communication messages are sent over an error-free finite-rate bit pipe, with its rate constrained to R ∈ R (bits per channel use) or, equivalently, (bits per time step). As a result, the size of the quantization codebook should satisfy the inequality |C| ≤ 2^R. The CC exploits the communication messages c(t) ≜ ⟨c1(t), ..., cN(t)⟩ received within the last d time steps to generate the action signal m(t) following the control policy πm(·) : C^{Nd} → M^N. Based on the above description, the environment is not necessarily an MDP from the point of view of the CC6 or from the agents' point of view, as neither is capable of viewing the nominal state of the environment.
6In our problem setting, each agent does not see the environment as an MDP due to its local observability. We only assume the presence of an underlying MDP for the environment, which is widely adopted in the reinforcement learning literature, e.g., [36], [37]. We make this assumption as our performance guarantees rely on the optimality of the solution provided for the control task, which is also assumed in [7], [10]. Let us recall that throughout all of our numerical studies, even the CC, given the joint observations of all agents, cannot observe the true/nominal state of the environment.
B. Problem statement: Joint Control and Communication Design (JCCD) problem
Now we define the JCCD problem. Let M be the MDP governing the environment and the scalar R ∈ R be the bit-budget of the uplink of all agents. At any time step t′, we aim at selecting the tuple π = ⟨πm(·), πc⟩ with πc ≜ ⟨πc_1(·), ..., πc_N(·)⟩ to solve the following variational dynamic programming problem
argmax_π E_π{g(t′)},  s.t. |C| ≤ 2^R,   (2)
where the expectation is taken over the joint pmf of the system's trajectory {tr}_{t′}^{T′} = o1(t′), ..., oN(t′), m(t′), ..., o1(T′), ..., oN(T′), m(T′), when the agents follow the policy tuple π. In the next section, similar to [18], we disentangle the design of the action and communication policies via action-based quantization of observations. In contrast to [18], here the communication network of the MAS is assumed to follow a star topology. The idea behind this disentanglement is to extract the features of the control design problem that can affect the communication design and to take them into account while designing the communications. Thus our communication design will be aware of the key features of the control task. We extract the key features of the control task using analytical techniques as well as reinforcement learning [17], [18]. In fact, the new communication problem, called TODC, will no longer be similar to conventional communication problems, as it is inspired by the JCCD problem.
In [18], [23], the authors use the value of the agents' observations for the given task as the key feature of the control task considered in the communication design. Accordingly, the idea was to cluster together the observation points that have similar values. In contrast to [18], [23], which consider the value of observations as an explicit key feature of the control task, here we consider the optimal control/action values assigned to each observation as the key feature.
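To make the bit-budget constraint |C| ≤ 2^R of the problem statement concrete, a per-agent communication policy can be viewed as a lookup table from observations to codewords whose induced codebook must fit the budget. The following Python sketch checks that constraint; the observation-to-codeword mapping is hypothetical, not a policy from the paper.

```python
# Sketch: a communication policy pi_c : Omega -> C as a lookup table,
# with the codebook size bounded by the uplink bit budget R, as in
# the constraint |C| <= 2**R.  The observation/codeword values are
# illustrative, not taken from the paper.

def make_comm_policy(mapping, R):
    """mapping: dict observation -> codeword.  Returns pi_c, raising
    if the induced codebook violates the R-bit budget."""
    codebook = set(mapping.values())
    if len(codebook) > 2 ** R:
        raise ValueError(f"|C| = {len(codebook)} exceeds 2^R = {2 ** R}")
    return lambda o: mapping[o]

# Four observations quantized into two codewords fit a 1-bit budget.
pi_c = make_comm_policy({0: 0, 1: 0, 2: 1, 3: 1}, R=1)
```

The point of the sketch is that the quantizer, not the channel code, is where the task-relevant design freedom lies: which observations get merged into one codeword is exactly what ABSA decides.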
Accordingly, ABSA clusters the observation values together whenever the observation points have similar optimal control/action values assigned to them. Action-based state aggregation has already been introduced in the reinforcement learning literature as a means of reducing the complexity of reinforcement learning algorithms while maintaining the average return performance [38], [39].
III. ACTION-BASED LOSSLESS COMPRESSION OF OBSERVATIONS
In this section, we set yet another example - in addition to [18] - of the use of a generic framework to solve the JCCD problem. In [18], a similar problem is solved for distributed control and quantization, wherein the authors disentangle the design of task-oriented communication policies and action policies with the aid of a hypothetical functional Πm∗. In particular, the functional Πm∗ is a map from the vector space Kc of all possible communication policies πc to the vector space Km of the corresponding optimal control policies πm∗(·). Upon the availability of the functional Πm∗, wherever the function πm appears in the JCCD problem, it can be replaced with Πm∗(πc), resulting in a novel problem in which only the communication policies πc are to be designed. While in [18] the authors use an approximation of Πm∗(πc) to obtain a task-oriented quantizer design problem, in the current work we derive an exact solution for a simplified version of (3) - where the number of agents communicating with the central controller is limited to one. To adapt ABSA to the generic setting of problem (3), in ABSA-2 we lift this limitation with the aid of an approximation technique.
The JCCD problem can already be formulated as a form of data-quantization problem. Lemma 1 identifies the quantization metric that we aim to optimize in this paper. It reformulates the JCCD problem as a novel generalized data quantization problem.
Lemma 1. The JCCD problem (2) can also be expressed as a generalized data quantization problem as follows:
argmin_π E_{p(s(t))} |V^{π∗}(s(t)) − V^{πm}(c(t))|,  s.t. |C| ≤ 2^R,   (3)
where the communication vector c(t) generated by πc is a quantized version of the system's state s(t).
Proof. Appendix A. ■
In contrast to classic data-quantization problems, here the distortion metric measures the difference between two different functions of the original signal and its quantized version - namely V^{π∗}(·) and V^{πm}(·) - thus the distortion measure that we aim to optimize by solving (3) is not conventional. In fact, the variational minimization problem is solved over the vector space of the joint quantization policies πc and action policy πm functions.
A. ABSA-1 Algorithm
The applicability of the proposed ABSA-1 is limited to two mathematically equivalent scenarios: (i) we have a single agent communicating with the CC - consider Fig. 2-a with only one agent connected to the CC - or (ii) the agents communicate with the CC through a relay. In the latter scenario, the relay has full access to the agents' observations, i.e., oi, ∀i ∈ N, while the relay-to-CC channel is bit-budgeted. This limited scenario is useful to facilitate our analytical studies of problem (3), allowing us to establish a theoretical proof of the losslessness of the compression in ABSA-1 as well as of its optimal average return performance. These statements are confirmed by Lemma 2, the results of which will also be useful in designing ABSA-2. The central idea of ABSA-1 is to represent any two states s(i), s(j) using the same communication message c iff π∗(s(i)) = π∗(s(j)), where π∗(·) : S → M^N is the optimal control policy of the agents given access to the observations of all agents. Thus, ABSA-1 and ABSA-2 solve the JCCD problem in three different phases: (i) solving the centralized control problem under perfect communications via reinforcement learning, i.e.,
+Thus, ABSA-1 and ABSA-2 solve the JCCD problem at three +different phases: (i) solving the centralized control problem +under perfect communications via reinforcement learning i.e., + +6 +Q-learning, to find π∗(·)7, (ii) solving the task-oriented data +quantization problem to find πc via a form of data clustering, +(iii) finding the πm corresponding to πc. +In order to explain ABSA-1, we introduce the problem of +task-oriented data compression with centralized control. TBIC +is derived using similar techniques in [18] but for a different +setting i.e., the communication network of MAS has a star +topology. The TBIC problem is no longer a joint control and +communication problem but is a quantization design problem +in which the features of the control problem are taken into +account. To arrive to TODC problem from the JCCD problem, +we use the functional Πm∗ to replace πm(·) with Πm∗� +πc� +. +Upon the availability of Πm∗, by plugging it into the JCCD +problem (2), we will have a new problem +argmin +πc +Ep(s(t)) +���V π∗� +s(t) +� +− V Πm∗� +πc�� +c(t) +����, +s.t. |C| ≤ 2R, +(4) +where we maximize the system’s return with respect to +only the communication policies πc(·) of the local relay. The +optimal control policy πm∗(·) of the CC is automatically +computed by the mapping Πm∗� +πc(·) +� +. The problem is called +here as the TODC problem. Upon the availability of Πm∗, +the JCCD problem (2) can be reduced to (4). Definition 1 +is provided to formalize a precise approach to solve (4) via +obtaining the communication policy of the relay πc(·) as well +as the corresponding Πm∗, to solve (2). +Definition 1. Quantization and control policies in ABSA-1: +The communication policy πc,ABSA−1(·) designed by +ABSA-1 will be obtained by solving the following k-median +clustering problem +min +P +�|C| +i=1 +� +s(t)∈Pi +���π∗� +s(t) +� +− µi +���, +(5) +where P = {P1, ..., PB} is a partition of S and µi is the +centroid of each cluster i. 
The communication policy of ABSA-1, π^{c,ABSA-1}(·), is an arbitrary non-injective mapping such that ∀k ∈ {1, ..., B}: π^{c,ABSA-1}(s) = c^(k) if and only if s ∈ P_k. Now let C_g be a function composition operator such that C_g f = g ∘ f. We define the operator Π^{m*} ≜ C_g, with g = π^*(π^{c,ABSA-1,−1}(·))^8.

The optimality of the proposed ABSA-1 algorithm is subsequently established in Lemma 2.

Lemma 2. The communication policy π^{c,ABSA-1}, as described in Definition 1, carries out lossless compression of the observation data w.r.t. the average return if |C| ≥ |M|^N.

Proof. See Appendix B. ■

Remark: ABSA-1 also carries out lossless compression of the observation data with respect to the distortion measure introduced in problem (3). Given the proofs of Lemma 2 and Lemma 1, the proof of this remark is straightforward and is therefore omitted.

The losslessness of quantization in ABSA-1 implies that π^{ABSA-1} results in no loss of the system's average return compared with the case where the optimal policy π^*(·) is used to control the MAS under perfect communications. Consequently, the control policy π^{m,ABSA-1}(·) is optimal. Let us recall once again that we do not use a conventional quantization distortion metric here; we select a representation of the local observations such that the conveyed message maximizes the average task return.

^7 ABSA's bottleneck arises from the increasing complexity of Q-learning as the number of agents N grows. Similar limitations apply to any other algorithm that requires a centralized training phase [7], [30].
^8 Note that since π^{c,ABSA-1}(·) is non-injective, its inverse does not produce a unique output for a given input. Thus, by π^*(π^{c,ABSA-1,−1}(c′)) we mean π^*(s′), where s′ can be any arbitrary output of π^{c,ABSA-1,−1}(c′).
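In the lossless regime |C| ≥ |M|^N, the k-median problem (5) admits a zero-distortion solution in which each cluster contains exactly the states sharing one optimal joint action. The following Python sketch illustrates Definition 1's construction under that assumption, for a finite state space; the toy state set and policy π^* are illustrative, not taken from the paper's experiments.

```python
from collections import defaultdict

def absa1_codebook(states, pi_star):
    """Cluster states that share the same optimal joint action pi_star(s).

    Returns (encode, decode): encode is the mapping pi^{c,ABSA-1} from a
    state to a message index, and decode plays the role of Pi^{m*},
    mapping a message back to pi^*(s') for any representative s' of its
    cluster (well defined because all cluster members share pi^*)."""
    clusters = defaultdict(list)
    for s in states:
        clusters[pi_star(s)].append(s)      # zero within-cluster distortion
    actions = list(clusters)                # one codeword per joint action
    encode = {s: k for k, a in enumerate(actions) for s in clusters[a]}
    decode = dict(enumerate(actions))
    return encode, decode

# Toy example: two agents on a line; the optimal move drives each toward 0.
states = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
pi = lambda s: tuple('right' if x < 0 else 'left' for x in s)
enc, dec = absa1_codebook(states, pi)
# Losslessness: decoding a state's message recovers pi^*(s) exactly.
assert all(dec[enc[s]] == pi(s) for s in states)
```

With fewer messages than distinct joint actions, this construction no longer applies and the general k-median objective (5) must be solved, merging clusters at some cost in return.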
Note that in [7], the authors do not find the higher-order function Π^{m*} that reduces the joint communication and control problem to a task-oriented communication design; instead, they solve an approximated version of the task-oriented communication design problem. In this paper, however, we introduce a closed-form Π^{m*} by ABSA-1 that maps every communication policy π^{c,ABSA-1} introduced by ABSA-1 to the exact optimal control policy. This implies that the solutions provided by ABSA-1 are also optimal solutions of the joint communication and control design (JCCD) problem.

B. ABSA-2 Algorithm

We saw in Lemma 2 that the communication policy obtained by solving problem (5) is optimal and results in lossless average-return performance when |C| ≥ |M|^N. To solve problem (5), however, we need to know π^*(s(t)). This limiting assumption translates, for ABSA-1, into two system models that are less general than the system pictured in Fig. 3: (i) an extra relay is present between the agents and the central controller, where the relay has perfect downlink channels to the agents and a single bit-budgeted channel to the CC; or (ii) the MAS is composed of a single agent and a CC, where the uplink of the agent is bit-budgeted but its downlink is a perfect channel.

Our second proposed algorithm, ABSA-2, removes the need to know π^*(s(t)) and can run under the more general settings shown in Fig. 3. This is done by approximating the local element m^*_i(t) of π^*(s(t)) = ⟨m^*_1(t), ..., m^*_N(t)⟩ at each agent i given that agent's local observation o_i(t). That is, given a centralized training phase, we have access to the empirical joint distribution p(o_i, m^*_i), from which we can obtain a numerical MAP estimator m̂^*_i. Thus ABSA-2 allows for fully distributed communication policies.
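The MAP estimator above can be computed numerically from the (o_i, m^*_i) samples logged during the centralized training phase, simply by taking the mode of the optimal action conditioned on each local observation. A minimal Python sketch, with illustrative observation and action labels:

```python
from collections import Counter, defaultdict

def fit_map_estimator(samples):
    """From (o_i, m_star_i) pairs, return the empirical MAP estimator:
    for each local observation, the most frequent optimal action, i.e.
    the argmax over m of the empirical p(m_i^* | o_i)."""
    counts = defaultdict(Counter)
    for o, m in samples:
        counts[o][m] += 1
    return {o: c.most_common(1)[0][0] for o, c in counts.items()}

# Observation 'A' is usually paired with 'up'; 'B' always with 'left'.
pi_tilde = fit_map_estimator([('A', 'up'), ('A', 'up'), ('A', 'down'),
                              ('B', 'left')])
# pi_tilde == {'A': 'up', 'B': 'left'}
```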
In particular, the encoding of the communication messages of each agent is carried out separately by that agent before it communicates with the CC or any other agent. This form of encoding is often referred to as distributed encoding. Furthermore, the encoding carried out by ABSA-2 at each agent is a low-complexity and low-power process that requires no inter-agent communication beforehand. In this case, each agent directly communicates its encoded observations to the CC via a bit-budgeted communication channel. To improve the learning efficiency at the CC, the CC can take into account all the communications received in the time frame [t − d, t] when making a control decision m(t). Therefore, the ABSA-2 algorithm can strike a trade-off between the complexity of the computations carried out at the CC - directly impacted by the value of d - and the effectiveness of the agents' communications - inversely impacted by the value of |C|. Moreover, ABSA-2 is straightforwardly extendable to a different value of |C| per agent i, instead of a single fixed bit-budget R = log2 |C| for all agents.

Figure 4. Abstract representation of states in ABSA-2 with |C| = 3 and |M| = 5 - |M| is represented by the number of shapes used to show the observation points and |C| by the number of clusters shown in the right subplot. The left subplot shows the observation points prior to aggregation. During a centralized training phase we first compute π^*(·), from which π^*_i(·) : Ω → M can be obtained. We use the surjection π^*_i(·) to map a high-dimensional/precision observation space to a low-dimensional/precision one. The middle subplot shows the observation points together with the action values assigned to them - each unique shape represents a unique action value. This new representation of the observation points embeds the features of the control problem into the data quantization problem. Finally, we carry out the clustering of observation points according to their action values - all observation points assigned to (a set of) action values are clustered together. The right subplot shows the aggregated observation space, where all the observation points in each cluster are represented by the same communication message. The centralized controller, which is run using DQN, observes the environment at each time step through all the aggregated observations/communications it receives from all the agents.

As illustrated in Fig. 4, in ABSA-2 each agent i obtains a communication policy function π^c_i(·) by solving a clustering problem over its local observation space instead of the global state space, formulated as follows:

min_{P_i} Σ_{j=1}^{|C|} Σ_{o_i(t)∈P_{i,j}} ||π̃^*_i(o_i(t)) − μ_{i,j}||,    (6)

where P_i = {P_{i,1}, ..., P_{i,|C|}} is a partition of Ω, and

π̃^*_i(o_i(t)) = argmax_{m^*_i} p_{π^*}(m^*_i | o_i(t)),    (7)

where m^*_i is the optimal action of agent i, i.e., the i-th element of m^* ≜ π^*(o_1(t), ..., o_N(t)). Thus π̃^*_i(o_i(t)) is the maximum a posteriori estimator of the i-th element of π^*(s(t)) given the local observation o_i(t).

Once the clustering in (6) is done, each agent i trains its local communication policy π^{c,ABSA-2}_i(·), which is any non-injective mapping such that ∀k ∈ {1, ..., |C|}: π^{c,ABSA-2}_i(o_i) = c^(k) iff o_i ∈ P_{i,k}. After obtaining the communication policies ⟨π^{c,ABSA-2}_i(·)⟩_{i=1}^N, to obtain a proper control policy π^m(·) at the CC corresponding to these communication policies, we perform single-agent reinforcement learning. To this end, and to manage the complexity of the algorithm for larger values of d, we propose to use the DQN architecture [41] at the CC.

IV.
PERFORMANCE EVALUATION

In this section, we evaluate our proposed schemes via numerical results for the popular multi-agent geometric consensus problem^9. Through their indirect design, ABSA-1 and ABSA-2 never rely on explicit domain knowledge about any specific task, such as geometric consensus. Thus, we conjecture that this indirect design allows them to be applied beyond geometric consensus problems, to a much wider range of tasks.

Algorithm 1. Action Based State Aggregation (ABSA-2)
1: Initialize the replay memory D to capacity 10,000.
2: Initialize the state-action value function Q(·) with random weights θ.
3: Initialize the target state-action value function Q^t(·) with weights θ^t = θ.
4: Obtain π^*(·) and Q^*(·) by solving (2) using Q-learning [40], where R ≫ H(o_i(t)) ∀i ∈ N.
5: Compute π̃^*_i(o_i(t)) = Mode(m^*_i | o_i(t)), ∀o_i(t) ∈ Ω, for i ∈ N.
6: Solve problem (6) by applying k-median clustering to obtain P_i and π^c_i(·), for i ∈ N.
7: for each episode k = 1 : 200,000 do
8:   Randomly initialize the observation o_i(t = 0), for i ∈ N
9:   Randomly initialize the message c(t = 0)
10:  for t = 1 : T′ do
11:   Select c_i(t), at agent i, following π^c_i(·), for i ∈ N
12:   Obtain the message ⟨c_1(t), ..., c_N(t)⟩ at the CC
13:   Follow ϵ-greedy, at the CC, to generate the action m_i(t), for i ∈ N
14:   Obtain the reward r(t) = R(s(t), m(t)) at the CC
15:   Store the transition (c(t), m(t), r(t), c(t + 1)) in D
16:   t ← t + 1
17:  end
18:  Sample a mini-batch D′ = {(c(t′), m(t′), r(t′), c(t′ + 1))}_{t′=t′_1}^{t′_62} from D
19:  for each transition t′ = t′_1 : t′_62 of the mini-batch D′ do
20:   Compute the DQN loss L_{t′}(θ) = ½ ( r(t′) + max_{m^*} Q^t(c(t′ + 1), m^*, θ^t) − max_{m^*} Q(c(t′), m^*, θ) )²
21:   Perform a gradient descent step on L_{t′}(θ) w.r.t. θ
22:  end
23:  Update the target network Q^t(·) every 1000 steps
24: end
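Line 20 of Algorithm 1 can be made concrete with a small tabular stand-in for the Q-network. The sketch below reproduces the listed loss literally; note that, as printed, it uses max_m Q(c(t′), m) for the current estimate and carries no discount factor, whereas a textbook DQN loss would use γ and Q(c(t′), m(t′)). The table sizes and sample transition are illustrative.

```python
import numpy as np

# Tabular stand-in for the DQN update in Algorithm 1 (line 20), assuming
# finite message/action spaces so Q fits in a |C|^N x |M|^N table.
def dqn_loss(Q, Q_target, batch):
    """batch: list of transitions (c, m, r, c_next) with integer indices.
    Returns the mean loss of line 20 and a gradient table w.r.t. Q."""
    loss, grad = 0.0, np.zeros_like(Q)
    for c, m, r, c_next in batch:
        td = r + Q_target[c_next].max() - Q[c].max()   # TD error of line 20
        loss += 0.5 * td ** 2
        grad[c, Q[c].argmax()] -= td                   # d(loss)/dQ[c, argmax]
    return loss / len(batch), grad / len(batch)

Q = np.zeros((4, 5))          # 4 message vectors, 5 joint actions (toy sizes)
Qt = Q.copy()
l, g = dqn_loss(Q, Qt, [(0, 2, 1.0, 1)])
# With Q = 0 everywhere, td = r = 1.0, so the loss is 0.5.
```

A descent step Q ← Q − α·g then raises Q at the visited message toward the bootstrapped target, as in line 21.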
To make the geometric consensus task suitable for the evaluation of our proposed algorithms, similar to [18], we introduce a bit constraint on the communication channel between the agents and the CC. After evaluating the proposed algorithms in the context of the rendezvous problem, we attempt to explain the behaviour of all the algorithms via an existing metric - positive listening - for measuring the task-effectiveness of communications. As positive listening falls short of explaining all aspects of the behaviour of the investigated algorithms, we also introduce a new metric. Called task relevant information, the new metric helps to further explain the behaviour of the different algorithms with higher accuracy and reliability.

A. The geometric consensus problem

Our proposed schemes are evaluated in this section through numerical results for the rendezvous problem [42], [43], a specific type of geometric consensus problem under finite observability [28]. Following the instantaneous and synchronous communication model and the star network topology explained in Section II-A and Fig. 2, respectively, the rendezvous problem proceeds as follows. At each time step t, several events happen in the following order. First, agent i obtains a local observation o_i(t), which is equivalent to its own location in the grid world. Agent i subsequently follows its quantization/communication policy to generate a compressed version c_i(t) of its observation, to be communicated to the CC via a bit-budgeted communication link. After receiving the quantized observations of all agents, the CC follows its control policy to select the joint action vector m(t) and communicates each agent i's local action m_i(t) to it accordingly.
The local action m_i(t) ∈ M that is communicated back to agent i via a perfect communication channel is a one-step move in the grid world, i.e., M = {left, right, up, down, pause}. Given each agent i's action m_i(t), the environment evolves and transitions to the next time step t + 1, where each agent i obtains a new local observation o_i(t + 1). All agents receive a single team reward

r_t = C1, if ∃ i, j ∈ N : o_i(t) ∈ Ω_T and o_j(t) ∉ Ω_T;
r_t = C2, if ∄ i ∈ N : o_i(t) ∈ Ω − Ω_T;
r_t = 0, otherwise,    (8)

where C1 < C2 and Ω_T is the set of terminal observations, i.e., the episode terminates if ∃ i ∈ N : o_i(t) ∈ Ω_T. Accordingly, when not all agents arrive at the target point, a smaller reward C1 = 1 is obtained, while the larger reward C2 = 10 is attained when all agents visit the goal point at the same time.

^9 In our numerical experiments, the discount factor is assumed to be γ = 0.9. All experiments are done over a grid world of size 8×8, where the goal point of the rendezvous is located at grid number Ω_T = {22}.

We compare our proposed ABSA algorithms with the heuristic non-communicative (HNC), heuristic optimal communication (HOC), and SAIC algorithms proposed in [18], which are direct schemes that jointly design the communication and control policies for the specific geometric consensus problem solved here. In contrast to ABSA-1 and ABSA-2, which enjoy an indirect design, the direct design of HOC and HNC does not allow them to be applied to any problem other than the specific geometric consensus problem with finite observability, i.e., the rendezvous problem explained here.

B. Numerical experiment

A constant learning rate α = 0.07 is applied when exact Q-learning is used to obtain π^*(·), and α = 0.0007 when DQN is used to learn π^m(·) for ABSA-2. For exact Q-learning, a UCB^10 exploration rate of c = 1.25 is considered.
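One environment step of the rendezvous problem, together with the team reward (8) for C1 = 1 and C2 = 10, can be sketched as follows. The goal cell Ω_T = {22} and the move set are taken from the text; the row-major cell indexing (so 'up' subtracts 8 on the 8×8 grid) and the omission of boundary handling are simplifying assumptions of this sketch.

```python
# Minimal sketch of one rendezvous time step on an 8x8 grid, following the
# interaction order of Section IV-A and the team reward (8).
GOAL = 22                                   # terminal cell, Omega_T = {22}
MOVES = {'left': -1, 'right': 1, 'up': -8, 'down': 8, 'pause': 0}

def team_reward(obs, goal=GOAL, C1=1, C2=10):
    at_goal = [o == goal for o in obs]
    if all(at_goal):
        return C2                           # all agents at the goal together
    if any(at_goal):
        return C1                           # some, but not all, at the goal
    return 0

def step(obs, actions):
    """Apply each agent's move m_i(t); return (new observations, reward)."""
    new_obs = [o + MOVES[m] for o, m in zip(obs, actions)]
    return new_obs, team_reward(new_obs)

obs, r = step([21, 30], ['right', 'up'])    # both agents reach cell 22
# r == 10 (C2), since all agents visit the goal at the same time.
```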
The deep neural network that approximates the Q-values is a fully connected feed-forward network with 10 layers of depth, optimized using the Adam optimizer. An experience replay buffer of size 10,000 is used with a mini-batch size of 62. The target Q-network is updated every 1000 steps, and for exploration, decaying ϵ-greedy with initial ϵ = 0.05 and final ϵ = 0.005 is used [41]. In any figure in which the performance of a scheme is reported in terms of the average discounted cumulative reward, the rewards attained throughout training are smoothed using a moving-average filter with a memory of 20,000 iterations. As explained in Section III-A, ABSA-1 and ABSA-2 both require a centralized training phase before they can be executed in a distributed fashion.

For all black curves, one prior centralized training phase to obtain π^*(·) is required. As detailed in Section III, the proposed algorithms, ABSA-1 and ABSA-2, leverage π^*(·) to design π^c and then π^m. The dashed curves, HOC and HNC, as proposed by [18], are heuristic schemes that exploit the designer's domain knowledge about the rendezvous task, making them inapplicable to any task other than the rendezvous problem. While HOC enjoys a joint control and communication design, HNC runs with no communication. Note that HNC and HOC require communication/coordination between the agents prior to the start of the task, which is not required by any other scheme. These schemes, introduced by [18], are detailed as follows.

• A joint communication and control policy designed using domain knowledge of the rendezvous problem: HNC agents approach the goal point and wait nearby for a sufficient number of time steps to ensure that the other agent has also arrived; only then do they move onto the goal point. Note that this scheme requires communication/coordination between the agents prior to the start of the task, since they must have agreed upon this coordination scheme beforehand.

• A joint communication and control policy designed using domain knowledge of the rendezvous problem: HOC agents wait next to the goal point until the other agent informs them that it has also arrived; only then do they move onto the goal point. Note that this scheme requires communication/coordination between the agents prior to the start of the task, since they must have agreed beforehand both upon this scheme of coordination and communication and upon the meaning that each communication message entails.

^10 UCB is a standard scheme used in exact reinforcement learning to strike a trade-off between exploration and exploitation [40].

Figure 5. Average return comparison between the proposed schemes and some benchmarks introduced in [18] - the three-agent scenario under constant bit-budget values.

To obtain the results shown in Fig. 5, we simulated the rendezvous problem for a three-agent system. The black curves illustrate the training phase occurring at the CC to obtain π^m after π^c has already been computed using equations (5) and (6). We observe the lossless performance of ABSA-1 in achieving the optimal average return without requiring any second-round training. To enable fully decentralized quantization of the observation process, ABSA-2 was proposed, which is seen to approach the optimal solution as d grows. All ABSA-2 curves are plotted with |C| = 3; the ABSA-1 curve is plotted with |C| = |M|^N = 125 in the three-agent scenario (Fig. 5) and |C| = |M|^N = 25 in the two-agent scenario (Fig. 6).

Figure 6. The obtained normalized average return as a function of codebook size |C|, compared across a range of schemes: the proposed schemes and some benchmarks introduced in [18] - two-agent scenario.

In Fig. 5, we see how the performance of ABSA-2 compares with HNC, HOC and SAIC at different quantization rates. As expected, as the size of the quantization codebook increases, the average-return performance of ABSA-2 gradually improves, approaching near-optimal performance at d = 3. We also observe the superior performance of ABSA-2 compared with SAIC at very tight bit-budgets, where SAIC's performance sees a drastic drop. It is observed that as d grows, ABSA-2 approaches the optimal return performance even under higher quantization rates; however, higher values of d come at the cost of increased computational complexity of ABSA-2.

C. Explainability of the learned communication policies

One common metric for evaluating the effectiveness of communications in the literature [37] is positive listening, I(c_i(t); m_j(t)), j ∈ N − {i}, which is the mutual information between the communication c_i(t) produced by an agent i and the action m_j(t) selected by another agent j following receipt of the communication c_i(t) from agent i. Positive signaling, I(o_i(t); c_i(t)), is another metric proposed by [37], measuring the mutual information between agent i's observation o_i(t) and its own produced communication message c_i(t) at the same time step. As shown below, however, these metrics are unable to fully capture the underlying performance trends of all schemes. Therefore, we introduce, for the first time, a new metric called task relevant information (TRI), which allows us to explain the task-effectiveness of the learned communication policies.
Measuring positive listening is one way to quantify the contribution of the communicated messages of agent i to the action selection of agent j. Positive signaling, on the other hand, measures the consistency as well as the relevance of the communicated messages c_i(t) to the agent's observations o_i(t). As SAIC and ABSA use a deterministic mapping of the observation o_i to produce the communication message c_i, they are always guaranteed to exhibit positive signaling [37] - the degree of which, however, is limited by the uplink channel's bit budget R = log2 |C|. Thus, among the existing metrics for measuring the effectiveness of communications, we limit our numerical studies to the measurement of positive listening. It is known that the higher the positive listening, the stronger (though not necessarily better) we expect the coordination between the agents to be. That is, higher positive listening means a higher degree of dependence between the agents (their actions and observations), which is not necessarily sufficient for the team of agents to fulfil the task.

Figure 7 shows how stronger coordination between the agents and the CC often results in improved performance of the MAS in obtaining a higher average return. For instance, the enhancement in the positive-listening performance of SAIC from the |C| = 3 to the |C| = 4 quantizer in Fig. 7 results in an improved average-return performance, as shown in Fig. 6. This metric also reasonably explains the improvement of ABSA-2 in obtaining a higher return as d - the memory of the CC - and the size of the quantization codebook |C| increase.

Figure 7. Comparing the positive listening I(c_i(t); m_j(t)) performance across a range of schemes.

Moreover, stronger coordination between the agents and the CC is visible in ABSA-2 when compared with HOC.
Thus, we would expect a better average-return performance for ABSA-2, which is in contrast to the results of Fig. 5. This suggests that stronger coordination - as measured by positive listening - does not necessarily result in an improved average-return performance, as the coordination may not be perfectly aligned with the needs of the task.

The curve for the HOC scheme lets us observe that a positive listening of 0.3 bits is sufficient to maintain the coordination required for optimal performance in the aforementioned geometric consensus task. Therefore, in the ABSA-2 and SAIC schemes, there is still an unnecessary influence of the communication messages on the actions selected by the receiving end. In fact, not all the information received by the receiving end contributes to a higher average return of the system. Accordingly, there is still some unnecessary data in the communication messages designed by ABSA that contains no task-specific/useful information.

Thus we believe that positive listening cannot explicitly quantify the effectiveness of task-oriented communication algorithms; it therefore falls short in explaining their behaviour. Even when positive listening is computed as I(c_i(t); m(t)), to capture the mutual information between the communication of agent i and the control signals of all agents, we arrive at almost identical patterns - Fig. 8.

Figure 8. Comparing the positive listening I(c_i(t); m(t)) performance across a range of schemes.

Figure 9 investigates the performance of multiple schemes via a novel performance metric: task relevant information (TRI). Here we define the task relevant information metric to be

I(π^c(o_i(t)); π^*(s(t))) = I(c_i(t); m^*(t)),    (9)

which measures the mutual information (in bits) between the communicated message of agent i and the vector m^*(t) of joint optimal actions at the CC - the vector selected by the optimal centralized control policy π^*(·). As demonstrated by Fig. 9, TRI is an indirect metric of the effectiveness of communications that can explain the behaviour of different communication designs. It is also observed that the TRI metric magnifies the performance gap between different schemes as they approach optimal performance. Nevertheless, TRI can be utilized as a standalone measure to quantify the effectiveness of a communication design, since it almost perfectly predicts the average-return performance of a communication policy - without the communication needing to be tested on the real task.

Note that we measure the task-effectiveness of a quantization algorithm based on the average return that can be obtained when using it. Further, to measure the average return obtainable under the communication policies ⟨π^c_1(·), ..., π^c_N(·)⟩, we have to design the control policy π^m(·) at the CC that selects the control vector m(t) with access to only the quantized observations c(t) of the agents. Accordingly, we cannot measure the effectiveness of the communication policy of an MAS without a specific design for its control policy. Even after the control policy of the MAS has been designed, it is challenging to determine whether the suboptimal performance of the algorithm is caused by an ineffective design of the control policy or of the communication policy. In fact, it is hard to disentangle the effects of the control and communication policies on the MAS's average return. Our proposed metric TRI facilitates measuring the performance of any communication policy in isolation, without the effect of the control policy being present in the numerical values of TRI.
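On logged trajectories, both positive listening I(c_i(t); m_j(t)) and the TRI metric (9), I(c_i(t); m^*(t)), are mutual informations between discrete random variables, so they can be computed with the same plug-in estimator over empirical joint frequencies. A minimal sketch (the sample data are illustrative):

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """Plug-in estimate of I(X; Y) in bits from paired samples,
    I = sum_{x,y} p(x,y) * log2( p(x,y) / (p(x) p(y)) )."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum(c / n * log2((c * n) / (px[x] * py[y]))
               for (x, y), c in pxy.items())

# Deterministic one-to-one mapping: I = H(X) = 1 bit.
assert abs(mutual_information([0, 1, 0, 1], ['a', 'b', 'a', 'b']) - 1.0) < 1e-9
# Independent variables: I = 0 bits.
assert abs(mutual_information([0, 0, 1, 1], ['a', 'b', 'a', 'b'])) < 1e-9
```

For positive listening, xs/ys would hold the logged c_i(t) and m_j(t); for TRI, the logged c_i(t) and m^*(t).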
Accordingly, the importance of introducing this metric is multi-fold: (i) using TRI as an indirect metric, we can measure the effectiveness of a communication policy for any specific task; (ii) it allows us to measure the effectiveness of a communication scheme prior to the design of any control policy; (iii) it helps to design task-effective communication policies in complete separation from the control policy design.

V. CONCLUSION

In this paper, we have investigated the joint design of control and communications in an MAS under centralized control and distributed communication policies. We first proposed an action-based state aggregation algorithm (ABSA-1) for lossless compression and provided an analytical proof of its optimality. We then proposed ABSA-2, which offers a fully distributed communication policy and can trade computational complexity for communication efficiency. We finally demonstrated the task-effectiveness of the proposed algorithms via numerical experiments on a geometric consensus problem, using a number of representative metrics. Furthermore, our numerical studies demonstrate the pressing need for further research on finding a metric that can measure/explain the task-effectiveness of communications with more accuracy. Scalability in task-oriented design is yet another central challenge to be addressed in future research.

Figure 9. Comparing the task relevant information (TRI) performance across a range of schemes (curves: ABSA-2 with d = 1, 2, 3; SAIC with d = 1; HOC with d = 1). It is observed that TRI can comprehensively explain the behaviour of all task-effective quantization schemes in a given task without the need to measure their effectiveness via the resulting average return in the task - compare this figure with Fig. 6.

APPENDIX A
PROOF OF LEMMA 1

Proof.
Applying Adam's law to equation (2) yields

argmax_π E_{p(c(t))} [ E_{p_{π^c,π^m}({tr}_{t′}^{T′} | c(t))} [ g(t′) | c(t) ] ],  s.t. |C| ≤ 2^R,    (10)

where c(t) is generated by the communication policy π^c and the joint pmf of the system's trajectory {tr}_{t′}^{T′} is directly influenced by the action policy π^m. The conditional pmf p_{π^c,π^m}({tr}_{t′}^{T′} | c(t)) is the joint probability of the trajectory of the system given the received communication c(t) when the policies π^c(·) and π^m(·) are followed. We proceed by negating equation (10) and adding to the objective function a second term that is constant with respect to the decision variables of the problem, to obtain

argmin_{π^c} E_{p(s(t))} [ E_{p_{π^*}({tr}_{t′}^{T′} | s(t))} [ g(t′) | s(t) ] ] − E_{p(c(t))} [ E_{p_{π^c,π^m}({tr}_{t′}^{T′} | c(t))} [ g(t′) | c(t) ] ],  s.t. |C| ≤ 2^R.    (11)

Replacing the conditional expectations of the system's return by the value function V(·) [40] (Ch. 3.5), we have

argmin_{π^c} E_{p(s(t))} [ V^{π^*}(s(t)) ] − E_{p(c(t))} [ V^{π^m}(c(t)) ],  s.t. |C| ≤ 2^R.    (12)

Note that the empirical joint distribution of c(t) can be obtained by applying the communication policy π^c to the empirical distribution of s(t):

argmin_{π^c} E_{p(s(t))} [ V^{π^*}(s(t)) ] − E_{p(s(t))} [ V^{π^m}(c(t)) ],  s.t. |C| ≤ 2^R.    (13)

As V^{π^*}(s(t)) − V^{π^m}(c(t)) ≥ 0 holds for any s(t) ∈ S, merging the two expectations results in

argmin_{π^c} E_{p(s(t))} [ |V^{π^*}(s(t)) − V^{π^m}(c(t))| ],  s.t. |C| ≤ 2^R,    (14)

which concludes the proof of the lemma. ■

APPENDIX B
PROOF OF LEMMA 2

Proof. We depart from the result of Lemma 1 - problem (3). By taking the expectation over the empirical distribution of s(t) and applying the Bellman optimality equation, we obtain

argmin_π (1/n) Σ_{t=1}^{n} | Q^{π^*}(s(t), π^*(s(t))) − Q^{π^m}(c(t), π^m(π^c(s(t)))) |,  s.t. |C| ≤ 2^R,    (15)

where the vector π^c(s(t)) is of N dimensions and its i-th element is c_i(t).
We proceed by plugging π^{c,ABSA-1}(·) and Π^{m*}, according to Definition 1, into equation (15) to obtain

(1/n) Σ_{t=1}^{n} | Q^{π^*}(s(t), π^*(s(t))) − Q^{π^*}(c(t), π^*(s′)) |,    (16)

where s′ = π^{c,ABSA-1,−1}(π^{c,ABSA-1}(s(t))), and any possible value of s′ lies in the same subset P_{k′} as s(t) does, while according to the definition of P_{k′} we know that π^*(s(t)) = π^*(s′) if |C| ≥ |M|^N. Thus, by replacing π^*(s′) with π^*(s(t)) in equation (16), we get

(1/n) Σ_{t=1}^{n} | Q^{π^*}(s(t), π^*(s(t))) − Q^{π^*}(s(t), π^*(s(t))) | = 0.    (17)

This concludes the proof of Lemma 2. ■

REFERENCES

[1] L. S. Vailshery, "Number of internet of things (IoT) connected devices worldwide from 2019 to 2021, with forecasts from 2022 to 2030," Aug. 2022. [Online]. Available: https://www.statista.com/statistics/1183457/iot-connected-devices-worldwide/
[2] B. Güler, A. Yener, and A. Swami, "The semantic communication game," IEEE Transactions on Cognitive Communications and Networking, vol. 4, no. 4, pp. 787–802, 2018.
[3] H. Tong, Z. Yang, S. Wang, Y. Hu, W. Saad, and C. Yin, "Federated learning based audio semantic communication over wireless networks," in 2021 IEEE Global Communications Conference (GLOBECOM), 2021, pp. 1–6.
[4] N. Pappas and M. Kountouris, "Goal-oriented communication for real-time tracking in autonomous systems," in 2021 IEEE International Conference on Autonomous Systems (ICAS), 2021, pp. 1–5.
[5] E. Calvanese Strinati and S. Barbarossa, "6G networks: Beyond Shannon towards semantic and goal-oriented communications," Computer Networks, vol. 190, p. 107930, 2021.
[6] A. Mostaani, T. X. Vu, S. K. Sharma, Q. Liao, and S. Chatzinotas, "Task-oriented communication system design in cyber-physical systems: A survey on theory and applications," arXiv preprint arXiv:2102.07166, 2021.
[7] J. Foerster, Y. Assael, N. de Freitas, and S. Whiteson, "Learning to communicate with deep multi-agent reinforcement learning," in Proc.
Advances in Neural Information Processing Systems, Barcelona, 2016.
[8] C. E. Shannon and W. Weaver, "The mathematical theory of communication [1949]," Urbana, IL, 1959.
[9] L. Hu, G. Wu, Y. Xing, and F. Wang, "Things2vec: Semantic modeling in the internet of things with graph representation learning," IEEE Internet of Things Journal, vol. 7, no. 3, pp. 1939–1948, 2020.
[10] J. Cai, W. Zhong, and J. Luo, "Seminer: Side-information-based semantics miner for proprietary industrial control protocols," IEEE Internet of Things Journal, vol. 9, no. 22, pp. 22796–22810, 2022.
[11] T.-Y. Tung, S. Kobus, J. P. Roig, and D. Gündüz, "Effective communications: A joint learning and communication framework for multi-agent reinforcement learning over noisy channels," IEEE Journal on Selected Areas in Communications, vol. 39, no. 8, pp. 2590–2603, 2021.
[12] M. P. Mota, A. Valcarce, J.-M. Gorce, and J. Hoydis, "The emergence of wireless MAC protocols with multi-agent reinforcement learning," arXiv preprint arXiv:2108.07144, 2021.
[13] N. Shlezinger and Y. C. Eldar, "Deep task-based quantization," Entropy, vol. 23, no. 1, p. 104, 2021.
[14] M. A. Gutierrez-Estevez, Y. Wu, and C. Zhou, "Learning to communicate with intent: An introduction," arXiv preprint arXiv:2211.09613, 2022.
[15] C. Zhang, H. Zou, S. Lasaulce, W. Saad, M. Kountouris, and M. Bennis, "Goal-oriented communications for the IoT and application to data compression," arXiv preprint arXiv:2211.05378, 2022.
[16] N. Shlezinger and Y. C. Eldar, "Task-based quantization with application to MIMO receivers," arXiv preprint arXiv:2002.04290, 2020.
[17] A. Mostaani, O. Simeone, S. Chatzinotas, and B. Ottersten, "Learning-based physical layer communications for multiagent collaboration," in 2019 IEEE Intl. Symp. on Personal, Indoor and Mobile Radio Communications, Sep. 2019.
[18] A. Mostaani, T. X. Vu, S. Chatzinotas, and B.
Ottersten, “Task-oriented +data compression for multi-agent communications over bit-budgeted +channels,” IEEE Open Journal of the Communications Society, vol. 3, +pp. 1867–1886, 2022. +[19] M. Kountouris and N. Pappas, “Semantics-empowered communication +for networked intelligent systems,” IEEE Communications Magazine, +vol. 59, no. 6, pp. 96–102, 2021. +[20] R. Carnap, Y. Bar-Hillel et al., “An outline of a theory of semantic +information,” 1952. +[21] H. Zhang, S. Shao, M. Tao, X. Bi, and K. B. Letaief, “Deep learning- +enabled semantic communication systems with task-unaware transmitter +and dynamic data,” arXiv preprint arXiv:2205.00271, 2022. +[22] P. A. Stavrou and M. Kountouris, “A rate distortion approach to goal- +oriented communication,” in 2022 IEEE International Symposium on +Information Theory (ISIT). +IEEE, 2022, pp. 590–595. +[23] A. Mostaani, T. X. Vu, S. Chatzinotas, and B. Ottersten, “State ag- +gregation for multiagent communication over rate-limited channels,” +in GLOBECOM 2020-2020 IEEE Global Communications Conference. +IEEE, 2020, pp. 1–7. +[24] D. Kim, S. Moon, D. Hostallero, W. J. Kang, T. Lee, K. Son, and Y. Yi, +“Learning to schedule communication in multi-agent reinforcement +learning,” in Intl. Conf. on Learning Representations, 2019. +[25] J. Liu, S. Shao, W. Zhang, and H. V. Poor, “An indirect rate-distortion +characterization for semantic sources: General model and the case of +gaussian observation,” arXiv preprint arXiv:2201.12477, 2022. +[26] C.-M. Chou, C.-Y. Li, W.-M. Chien, and K.-c. Lan, “A feasibility study +on vehicle-to-infrastructure communication: Wifi vs. wimax,” in 2009 +tenth international conference on mobile data management: systems, +services and middleware. +IEEE, 2009, pp. 397–398. +[27] Y.-C. Liu, J. Tian, C.-Y. Ma, N. Glaser, C.-W. Kuo, and Z. Kira, +“Who2com: Collaborative perception via learnable handshake commu- +nication,” in 2020 IEEE International Conference on Robotics and +Automation (ICRA). 
+IEEE, 2020, pp. 6876–6883. +[28] A. Barel, R. Manor, and A. M. Bruckstein, “Come together: Multi-agent +geometric consensus,” arXiv preprint arXiv:1902.01455, 2017. +[29] S. Tatikonda and S. Mitter, “Control under communication constraints,” +IEEE Transactions on automatic control, vol. 49, no. 7, pp. 1056–1068, +2004. +[30] J. N. Foerster, G. Farquhar, T. Afouras, N. Nardelli, and S. Whiteson, +“Counterfactual multi-agent policy gradients,” in Thirty-Second AAAI +Conference on Artificial Intelligence, 2018. +[31] F. A. Oliehoek, C. Amato et al., A concise introduction to decentralized +POMDPs. +Springer, 2016, vol. 1. +[32] Z. Ding, W. Hong, L. Zhu, T. Huang, and Z. Lu, “Sequential commu- +nication in multi-agent reinforcement learning,” 2021. +[33] J. Albowicz, A. Chen, and L. Zhang, “Recursive position estimation +in sensor networks,” in Proceedings Ninth International Conference on +Network Protocols. ICNP 2001. +IEEE, 2001, pp. 35–41. +[34] S. Dorvash and S. Pakzad, “Stochastic iterative modal identification al- +gorithm and application in wireless sensor networks,” Structural Control +and Health Monitoring, vol. 20, no. 8, pp. 1121–1137, 2013. +[35] D. V. Pynadath and M. Tambe, “The communicative multiagent team +decision problem: Analyzing teamwork theories and models,” Journal +of Artificial Intelligence Research, vol. 16, pp. 389–423, Jun. 2002. +[36] F. A. Oliehoek, M. T. Spaan, N. Vlassis et al., “DEC-PoMDPs with +delayed communication,” in Proc. Multi-agent Sequential Decision- +Making in Uncertain Domains, Honolulu, Hawaii, May 2007. +[37] R. Lowe, J. Foerster, Y.-L. Boureau, J. Pineau, and Y. Dauphin, “On +the pitfalls of measuring emergent communication,” in Intl. Conf. on +Autonomous Agents and MultiAgent Systems, 2019. +[38] L. Li, T. J. Walsh, and M. L. Littman, “Towards a unified theory of state +abstraction for mdps.” in AI&M, 2006. +[39] A. K. McCallum, Reinforcement learning with selective perception and +hidden state. 
+University of Rochester, 1996. +[40] R. S. Sutton and A. G. Barto, Introduction to reinforcement learning, +2nd ed. +MIT Press, Nov. 2017, vol. 135. +[41] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. +Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski +et al., “Human-level control through deep reinforcement learning,” +nature, vol. 518, no. 7540, pp. 529–533, 2015. +[42] P. Xuan, V. Lesser, and S. Zilberstein, “Communication decisions in +multi-agent cooperation: Model and experiments,” in Proceedings of the +Fifth International Conference on Autonomous Agents, ser. AGENTS +’01. New York, NY, USA: Association for Computing Machinery, 2001, +p. 616–623. [Online]. Available: https://doi.org/10.1145/375735.376469 +[43] C. Amato, J. S. Dibangoye, and S. Zilberstein, “Incremental policy +generation for finite-horizon dec-pomdps,” in Nineteenth International +Conference on Automated Planning and Scheduling, 2009. + diff --git a/ndAzT4oBgHgl3EQfqf0O/content/tmp_files/load_file.txt b/ndAzT4oBgHgl3EQfqf0O/content/tmp_files/load_file.txt new file mode 100644 index 0000000000000000000000000000000000000000..f60124dcaea2813e225ecdac63efcae5cdfa414a --- /dev/null +++ b/ndAzT4oBgHgl3EQfqf0O/content/tmp_files/load_file.txt @@ -0,0 +1,913 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf,len=912 +page_content='1 Task-Effective Compression of Observations for the Centralized Control of a Multi-agent System Over Bit-Budgeted Channels Arsham Mostaani, Student Member, IEEE, Thang X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Vu, Senior Member, IEEE, Symeon Chatzinotas, Fellow Member, IEEE, and Bj¨orn Ottersten, Fellow Member, IEEE Abstract—We consider a task-effective quantization problem that arises when multiple agents are controlled via a centralized controller (CC).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' While agents have to communicate their obser- vations to the CC for decision-making, the bit-budgeted commu- nications of agent-CC links may limit the task-effectiveness of the system which is measured by the system’s average sum of stage costs/rewards.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' As a result, each agent should compress/quantize its observation such that the average sum of stage costs/rewards of the control task is minimally impacted.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' We address the problem of maximizing the average sum of stage rewards by proposing two different Action-Based State Aggregation (ABSA) algorithms that carry out the indirect and joint design of control and communication policies in the multi-agent system.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' While the applicability of ABSA-1 is limited to single-agent systems, it provides an analytical framework that acts as a stepping stone to the design of ABSA-2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' ABSA-2 carries out the joint design of control and communication for a multi-agent system.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' We evaluate the algorithms - with average return as the performance metric - using numerical experiments performed to solve a multi-agent geometric consensus problem.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' The numerical results are concluded by introducing a new metric that measures the effectiveness of communications in a multi-agent system.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Index Terms—Semantic communications, task-effective data compression, goal-oriented communications, communications for machine learning, multi-agent systems, reinforcement learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' INTRODUCTION As 5G is rolling out, a wave of new applications such as the internet of things (IoT), industrial internet of things (IIoT) and autonomous vehicles is emerging.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' It is projected that by 2030, approximately 30 billion IoT devices will be connected [1].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' With the proliferation of non-human types of connected devices, the focus of the communications design is shifting from traditional performance metrics, e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content='g.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=', bit error rate and latency of communications to the semantic and task-oriented performance metrics such as meaning/semantic error rate [2], [3] and the timeliness of information [4].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' To evaluate how efficiently the network resources are being utilized, one could traditionally measure the sum rate of a network whereas in the era of the cyber-physical systems, given the resource constraints of the network, we want to understand how effectively one can conduct a (number of) task(s) in the desired way [5], [6].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' We are witnessing a paradigm shift in communication systems where the targeted performance metrics of the traditional systems are no longer valid.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' This imposes new grand challenges in designing the communications towards the eventual task-effectiveness [6].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' The authors are with the Centre for Security Reliability and Trust, Uni- versity of Luxembourg, Luxembourg.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Emails: {arsham.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content='mostaani, thang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content='vu, symeon.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content='chatzinotas, bjorn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content='ottersten}@uni.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content='lu This work is supported by European Research Council (ERC) via the project AGNOSTIC (Grant agreement ID: 742648).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Environment Environment Controller 2 Control Control Controller 2 Sensor 1 Sensor 2 Comm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Comm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Local Observation Local Observation Reward/ cost Reward/ cost a) b) Local Observation Stage reward/ cost Local Observation Stage reward/ cost Figure 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Task-effective communications for a) an estimation vs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' b) a control task - the orange dashed box is detailed in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' 2 and Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' 3.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' This line of research is also driven partly due to the success of new machine learning technologies/ algorithms under the title of ”emergent communications” in multi-agent systems [7].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Transfer of these new technologies/ideas to communication en- gineering is anticipated to have a disruptive effect in multiple domains of the design of communication systems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' According to Shannon and Weaver, communication prob- lems can be divided into three levels [8]: (i) technical problem: given channel and network constraints, how accurately can the communication symbols/bits be transmitted?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' (ii) semantic problem: given channel and network constraints, how accu- rately the communication symbols can deliver the desired meaning?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' (iii) effectiveness problem: given channel and net- work constraints, how accurately the communication symbols can help to fulfil the desired task?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' While the traditional com- munication design addresses the technical problem, recently, the semantic problem [2], [3], [5], [9], [10] as well as the effectiveness problem [6], [11]–[18] have attracted extensive research interest.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' In contrast to Shannon’s technical-level communication framework, semantic communication can enhance perfor- mance by exploiting prior knowledge between source and destination [4], [19].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' The semantic-based designs, however, are not necessarily task-effective [20].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' One can design transmitters which compress the data with the least possible compromise on the semantic meaning being transmitted [2], [3] while the transmission can be task-unaware [21].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' In contrast to semantic level and technical level communication design, the performance of a task-effective communication system is ultimately measured in terms of the average return/cost linked to the task [11].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' In the (task-)effectiveness problem, we are not concerned only about the communication of meaning but arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content='01628v1 [cs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content='IT] 4 Jan 2023 2 also about how the message exchange is helping the receiving end to improve its performance in the expected cost/reward of an estimation task [4], [13], [14], [16], [22] or a control task [11], [12], [14], [17], [18], [23], [24].' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' There are fundamental differences between the design of task-effective communications for an estimation vs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' a control task - Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' (i) In the latter, each agent can produce a control signal that directly affects the next observations of the agent.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Thus, in control tasks the source of information - local observations of the agent - is often a stochastic process with memory - e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' linear or Markov decision processes - [11], [17], [18].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' In the estimation tasks, however, the source of information is often assumed to be an i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content='i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content='d.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' stochastic process [13], [16], [22].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' (ii) In the control tasks, a control signal often has a long-lasting effect on the state of the system more than for a single stage/time step e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=', a control action can result in lower expected rewards in the short run but higher expected rewards in the long run.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' This makes the control tasks intrinsically sensitive to the time horizon for which the control policies are designed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Estimation tasks, specifically when the observation process is i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content='i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content='d.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=', can be solved in a single stage/ time step - since there is no influence from the solution of one stage/ time step to another i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content='e.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=', each time step can be solved separately [22], [25].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' (iii) The cost function for estimation tasks is often in the form of a difference/distortion function while in the control tasks it can take on many other forms.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' In this paper, we focus on the effectiveness problem for the control tasks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' In particular, we investigate the distributed communication design of a multiagent system (MAS) with the ultimate goal of maximizing the expected summation of per-stage rewards also known as the expected return.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Multiple agents select control actions and communicate in the MAS to accomplish a collaborative task with the help of a central controller (CC) - i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' the communication network topology of the MAS is a star topology with the hub node being the central controller and the peripheral nodes being the agents - Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' 2.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' The considered system architecture can find applications in several domains such as Internet of Things, emerging cyber- physical systems, real-time interactive systems, vehicle-to- infrastructure communication [26] and collaborative percep- tion [27].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Related works: Task-effective communications for control tasks Authors in [11], [12], [14], [17], [18], [23], [24] consider task-effective communication design under different settings.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' While [12], utilizes the task-effective communication design for the specific problem of the design of application-tailored protocols over perfect communication channels, the communi- cation channel is considered to be imperfect in [11], [14], [17], [18], [23], [24].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Authors in [14] provide algorithmic contribu- tions to the design of task-effective joint source channel coding for single agent systems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Task-effective joint source and chan- nel coding for MAS is targeted by [11], [14], [17], whereas [18], [23] are focused on task-effective data compression and quantization.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Similar to the current paper, a star topology 18 4th CET – Arsham Mostaani New Results for the Centralized Architecture Bit-budgeted Com.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Perfect Com.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' a) b) Processing and comm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' power Sensing, actuation, comm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' and processing power Sensing, comm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' and processing power Figure 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Communication topology and its applicable scenarios a) Centralized control of an MAS with collocated actuators and sensors, b) Distributed sensing with a single controller collocated with a single actuator.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' The orange dashed box is detailing the same box in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' 1 and Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' 3 .' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' for the inter-agent communication is considered in [11], [12] whereas [12] assumes perfect communications between the hub node and the peripherals and [11] assumes imperfect communication channels at the down-link of the peripheral nodes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' In contrast to all the above-mentioned work, this paper is - to the best of our knowledge - the first to study the star topology with the uplink (agent to hub) channel be imperfect (bit-budgeted) - Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Accordingly, each agent observes the environment and communicates an abstract version of its local observation to the CC via imperfect (bit-budgeted) communication channels - red links in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Subsequently, CC produces control actions that are communicated to the agents via perfect communication channels - black links in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' 2.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' The control actions are selected by the CC such that they maximize the average return of the collaborative task, where the return is a performance metric linked to the accomplishment of the task.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Contributions In our earlier work [18], we have developed a generic framework to solve task-oriented communication problems - for a multi-agent system (MAS) with full mesh connectivity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' The current work can be considered as an adoption of that framework to a new problem setting for the design of task- effective communications where agents follow a star network topology for their connectivity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' In this direction, the current work transcends the applicability of the proposed framework beyond the specific problem that was solved in [18] and provides further insights into how the framework can be used in wider terms and under a wider range of settings.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' In particular the contributions of this work are listed below.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Firstly, we consider a novel problem setting in which an MAS is controlled via a central controller who has access to agents’ local observations only through bit-budgeted distributed communications.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' This problem setting can 3 be used in collaboration perception systems as well as vehicle-to-infrastructure communications, which cannot been addressed by the problem settings investigated in the prior similar art.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Secondly, our analytical studies establish the relationship between the considered joint communication and con- trol design problem and conventional data quantization problems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' In particular, lemma 1 shows how the problem approached in this paper is a generalized version of the conventional data quantization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' This formulation is useful as it helps to find an exact solution to the problem under stronger conditions via ABSA-1 and under milder conditions via ABSA-2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Moreover, our analytical studies help us to craft an indi- rect 1 task-effective data quantization algorithm - ABSA- 2.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Designing a task-effective data quantization for ABSA- 2 can equivalently be translated as an indirect approach to feature selection for an arbitrary deep Q-network.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Relying on the analysis carried out for ABSA-1, ABSA- 2 designs distributed and bit-budgeted communications between the agents and CC.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' ABSA-2 is seen to approach optimal performance by increasing the memory of the CC.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' In fact, increasing the memory of CC leads to higher computational complexity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Therefore, ABSA-2 is said to strike a trade-off between computational complexity and task efficiency.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Numerical experiments are carried out on a geomet- ric consensus task to evaluate the performance of the proposed schemes in terms of the optimality of the MAS’s expected return in the task.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' ABSA-1 and ABSA- 2 are compared with several other benchmark schemes introduced by [18], in a multi-agent2 scenario with local observability and bit-budgeted communications.' 
Finally, we will introduce a new metric, called task-relevant information, for measuring the effectiveness of task-oriented communication policies; in comparison with existing metrics such as positive listening and positive signalling, it better explains the behaviour of a variety of task-effective communication schemes. The proposed metric can measure the effectiveness of a task-oriented communication/compression policy without the need to jointly design a control policy and test the jointly designed policies on the desired task.

C. Technical approach

Our goal is to find an efficient representation of the agents' local observations that meets the bit-budget of the communication links while minimizing the effect of quantization on the average return of the task. To achieve this, we first need to design task-effective data quantization policies for all agents. In task-effective data quantization, one needs to take into account the properties of the average return function and the optimal control policies associated with the task [15]. In addition to the design of the quantization policies for all agents, we also need the control policy of the CC to be capable of near-optimal decision-making despite its mere access to the quantized messages, resulting in a joint control and data compression problem.

Footnote 1: By an indirect algorithm we mean an approach that does not depend on knowledge of a particular task; indirect approaches are applicable to a wide range of tasks. In contrast to indirect schemes, direct schemes are specifically designed for a niche application [16]. As defined by [6], "the direct schemes aim at guaranteeing or improving the performance of the cyber-physical system at a particular task by designing a task-tailored communication strategy."

Footnote 2: Due to the complexity-related issues explained in Section IV, the numerical results are limited to two-agent and three-agent scenarios.
We formulate the joint control and data compression problem as a generalized form of data compression: task-oriented data compression (TODC). Following this novel problem formulation, we propose two indirect action-based state aggregation (ABSA) algorithms. (i) ABSA-1 provides an analytical proof of task-effective quantization, i.e., quantization with optimal performance in terms of the expected return. In this direction, ABSA-1 relaxes the lumpability assumption on the underlying MDP under which the performance guarantees of [18, Condition 6] were established. Since ABSA-1 is only applicable when the system is composed of one agent and the CC, we also propose ABSA-2.
Following the analytical results of ABSA-1, using MAP estimation to relax the aforementioned limitation of ABSA-1, and benefiting from a DQN controller at the CC, ABSA-2 is introduced as a more general approach. (ii) ABSA-2 solves an approximated version of the TODC problem and carries out the quantization for any number of agents communicating with the CC. Thanks to a deep Q-network controller at the CC, ABSA-2 can solve more complex problems where the controller benefits from a larger memory. Thus, ABSA-2 allows trading complexity for communication efficiency and vice versa. Finally, we evaluate the performance of the proposed schemes on a specific task: a geometric consensus problem under finite observability [28].

D. Organization

The rest of this paper is organized as follows.
Section II describes the MAS and states the joint control and communication problem. Section III proposes two action-based state aggregation algorithms. Section IV shows the performance of the proposed algorithms on a geometric consensus problem. Finally, Section V concludes the paper. For the reader's convenience, a summary of the notation used in this paper is given in Table I. Bold font denotes random matrices or scalars; their realizations are written in plain font.

II. SYSTEM MODEL AND PROBLEM STATEMENT

The problem setting we introduce here can be used to analyse both scenarios illustrated in Fig. 2.
Nevertheless, to use our language consistently, we focus on scenario (a) of that figure throughout the manuscript. In particular, when we use the term "agent" we refer to an object that has all of the following hardware capabilities: sensing, actuation, communication, and data processing. A MAS, however, may not be comprised of agents alone, but of a combination of agents and perhaps other objects that have at least communication and data processing capabilities. The central controller is assumed to have the hardware capability to process relatively large amounts of data as well as the capability of communication. The interactions inside the MAS, and between the MAS and the environment, are illustrated in Fig. 3.

A. System model

We consider a MAS in which multiple agents i ∈ N = {1, 2, ..., N} collaboratively solve a task with the aid of a CC. Following a centralized action policy, the CC provides the agents with their actions via a perfect communication channel, while it receives the observations of the agents through an imperfect communication channel (see Footnote 3). The considered setting is similar to conventional centralized control of MASs [18], [30], except that the communications from the agents to the CC are transmitted over a bit-budgeted communication channel. The agent-hub communications are considered to be instantaneous and synchronous [18], in contrast with delayed [17], [31] and sequential/iterative communication models [32]-[34]. We note that there is no direct inter-agent communication in the considered system; communications occur only between the agents and the central controller. The system runs on discrete time steps t.
The observation of each agent i at time step t is denoted by o_i(t) ∈ Ω, and the state s(t) ∈ S of the system is defined by the joint observations s(t) ≜ ⟨o_1(t), ..., o_N(t)⟩ (see Footnote 4). The control action of each agent i at time t is denoted by m_i(t) ∈ M, and the action vector m(t) ∈ M^N of the system is defined by the joint actions m(t) ≜ ⟨m_1(t), ..., m_N(t)⟩. The observation space Ω, state space S, and action space M are all discrete sets. The environment is governed by an underlying Markov decision process (see Footnote 5).

Footnote 3: In this work we follow a common assumption in the networked control literature [29], according to which the bit-budget only limits the uplink communications of the agents and not their downlink. Accordingly, the agents select their control actions as dictated to them by the central controller.

Footnote 4: According to this definition, at any given time t the observations of any two agents i, j ∈ N are linearly independent in the Euclidean space. The same holds for the control actions of arbitrary agents.

Footnote 5: As defined in the literature [10], the underlying MDP is the horizon-T' MDP defined by a hypothetical single agent that takes joint actions m(t) ∈ M^N and observes the nominal state s(t) ≜ ⟨o_1(t), ..., o_N(t)⟩, with the same transition model T(·) and reward model r(·) as the environment experienced by our MAS.
Table I. TABLE OF NOTATIONS

Symbol          Meaning
x(t) (bold)     A generic random variable generated at time t
x(t) (plain)    Realization of x(t)
X               Alphabet of x(t)
|X|             Cardinality of X
p_x(x(t))       Shorthand for Pr(x(t) = x(t))
H(x(t))         Information entropy of x(t) (bits)
X_{-x}          X - {x}
E_{p(x)}{x}     Expectation of the random variable x over the probability distribution p(x)
tr(t)           Realization of the system's trajectory at time t

[Figure 3 appears here.] Figure 3. Illustration of the interactions of the CC and agents for the control of the environment. The red link shows the communication channels that are bit-budgeted, implying the local (and not global) observability of the CC. The orange dashed box details the same box in Fig. 1 and Fig. 2.
The underlying MDP is described by the tuple M = ⟨S, M^N, r(·), γ, T(·)⟩, where r(·) : S × M^N → R is the per-stage reward function and the scalar 0 ≤ γ ≤ 1 is the discount factor. The function T(·) : S × M^N × S → [0, 1] is a conditional probability mass function (pmf) representing the state transitions, such that T(s(t+1), s(t), m(t)) = Pr(s(t+1) | s(t), m(t)). According to the per-stage reward signals, the system's return within the time horizon T' is

    g(t') = \sum_{t=t'}^{T'} \gamma^{t-1} r(o_1(t), ..., o_N(t), m_1(t), ..., m_N(t)).    (1)

While the system state is jointly observable by the agents [35], each agent i's observation o_i(t) is local (see Footnote 6). Once per time step, agent i ∈ N is allowed to transmit its local observations through a communication message c_i(t) to the CC.
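As a concrete illustration, the finite-horizon discounted return in (1) can be computed from a trajectory of per-stage rewards. This is a minimal sketch with illustrative names, not code from the paper; it follows the γ^(t-1) discounting convention of equation (1).

```python
def discounted_return(rewards, gamma, t_start=1):
    """Finite-horizon discounted return
    g(t') = sum_{t=t'}^{T'} gamma^(t-1) * r_t, as in equation (1).
    `rewards` lists the per-stage rewards r_t for t = t_start, ..., T'."""
    return sum(gamma ** (t - 1) * r
               for t, r in enumerate(rewards, start=t_start))

# A length-3 trajectory starting at t' = 1 with gamma = 0.5:
g = discounted_return([1.0, 1.0, 1.0], gamma=0.5)
# 0.5^0 + 0.5^1 + 0.5^2 = 1.75
```

Note that because the exponent is t - 1 rather than t - t', a trajectory starting at t' > 1 is discounted relative to the start of the horizon, matching the formula above.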
The communications between the agents and the central controller are synchronous (not sequential) and simultaneous (not delayed) [17]. Each agent i generates its communication message c_i(t) by following its communication policy π^c_i(·) : Ω → C; in parallel with all other agents, agent i maps its current observation o_i(t) to the communication message c_i(t), which is received by the central controller in the same time step t. The codebook C is a set composed of a finite number of communication codewords c, c', c'', ..., c^(|C|-1); we use the same notation to refer to the different members of the action, observation, and state spaces. The agents' communication messages are sent over an error-free finite-rate bit pipe with rate constraint R ∈ R (bits per channel use, or equivalently bits per time step).
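A minimal sketch of such a bit-budgeted communication policy: a lookup table from observations to codewords whose codebook size respects |C| ≤ 2^R. All names here are hypothetical; in the paper the policies are designed or learned, not hand-coded.

```python
def make_communication_policy(obs_to_codeword, rate_bits):
    """Wrap a table pi_c: Omega -> C, enforcing the codebook-size
    constraint |C| <= 2^R imposed by an R-bit uplink."""
    codebook = set(obs_to_codeword.values())
    if len(codebook) > 2 ** rate_bits:
        raise ValueError(
            f"codebook size {len(codebook)} exceeds 2^{rate_bits}")
    return obs_to_codeword.get

# Four observations quantized into two codewords fit a 1-bit budget:
pi_c = make_communication_policy(
    {"o0": "c0", "o1": "c0", "o2": "c1", "o3": "c1"}, rate_bits=1)
```

A table mapping four observations to three or more distinct codewords would be rejected, since 3 > 2^1.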
As a result, the size of the quantization codebook must satisfy |C| ≤ 2^R. The CC exploits the communication messages c(t) ≜ ⟨c_1(t), ..., c_N(t)⟩ received within the last d time steps to generate the action signal m(t) following the control policy π^m(·) : C^{Nd} → M^N. Based on the above description, the environment from the point of view of the CC, as well as from the agents' point of view, is not necessarily an MDP, as neither is capable of viewing the nominal state of the environment.

Footnote 6: In our problem setting, each agent does not see the environment as an MDP due to its local observability. We only assume the presence of an underlying MDP for the environment, which is widely adopted in the reinforcement learning literature, e.g., [36], [37]. We make this assumption because our performance guarantees rely on the optimality of the solution provided for the control task, as also assumed in [7], [10]. Let us recall that throughout our numerical studies, even the CC, given the joint observations of all agents, cannot observe the true/nominal state of the environment.

B. Problem statement: Joint Control and Communication Design (JCCD) problem

We now define the JCCD problem. Let M be the MDP governing the environment and the scalar R ∈ R the bit-budget of the uplink of all agents. At any time step t', we aim at selecting the tuple π = ⟨π^m(·), π^c⟩ with π^c ≜ ⟨π^c_1(·), ..., π^c_N(·)⟩ to solve the following variational dynamic program:

    argmax_π  E_π[g(t')]   s.t.  |C| ≤ 2^R,    (2)

where the expectation is taken over the joint pmf of the system's trajectory {tr}_{t'}^{T'} = o_1(t'), ..., o_N(t'), m(t'), ..., o_1(T'), ..., o_N(T'), m(T') when the agents follow the policy tuple π. In the next section, similar to [18], we will disentangle the design of the action and communication policies via action-based quantization of observations.
In contrast to [18], here the communication network of the MAS is assumed to follow a star topology. The idea behind this disentanglement is to extract the features of the control design problem that can affect the communication design and to take them into account while designing the communications; our communication design will thus be aware of the key features of the control task. We extract these key features using analytical techniques as well as reinforcement learning [17], [18]. In fact, the new communication problem, called TODC, is no longer similar to conventional communication problems, as it is inspired by the JCCD problem. In [18], [23], the authors use the value of the agents' observations for the given task as the key feature of the control task considered in the communication design; accordingly, the idea was to cluster together the observation points that have similar values.
In contrast to [18], [23], which consider the value of observations as the explicit key feature of the control task, here we consider the optimal control/action values assigned to each observation as the key feature. Accordingly, ABSA clusters observation points together whenever they have similar optimal control/action values assigned to them. Action-based state aggregation has already been introduced in the reinforcement learning literature as a means of reducing the complexity of reinforcement learning algorithms while maintaining the average return performance [38], [39].

III. ACTION-BASED LOSSLESS COMPRESSION OF OBSERVATIONS

In this section, we set another example, in addition to [18], of the use of a generic framework to solve the JCCD problem. In [18], a similar problem is solved for distributed control and quantization, wherein the authors disentangle the design of task-oriented communication policies and action policies with the aid of a hypothetical functional Π^{m*}.
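The action-based clustering rule described above, merging observation points that share the same optimal action, can be sketched for a tabular toy setting. This is an illustrative simplification (exact greedy-action matching over a hand-written Q-table), not the paper's ABSA algorithm, which also addresses the general TODC objective.

```python
from collections import defaultdict

def action_based_aggregation(q_values):
    """Group observation points whose greedy (optimal) action coincides.
    q_values: dict mapping state -> dict mapping action -> Q(state, action)."""
    clusters = defaultdict(list)
    for state, qs in q_values.items():
        best_action = max(qs, key=qs.get)  # greedy action for this state
        clusters[best_action].append(state)
    return dict(clusters)

q = {"s0": {"left": 1.0, "right": 0.2},
     "s1": {"left": 0.9, "right": 0.1},
     "s2": {"left": 0.1, "right": 0.8}}
agg = action_based_aggregation(q)
# s0 and s1 share the greedy action "left" and fall into one cluster.
```

Each cluster can then be assigned a single codeword, so the number of codewords is bounded by the number of distinct optimal actions rather than the number of observation points.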
In particular, the functional Π^{m*} is a map from the vector space K^c of all possible communication policies π^c to the vector space K^m of the corresponding optimal control policies π^{m*}(·). Given the functional Π^{m*}, wherever the function π^m appears in the JCCD problem it can be replaced with Π^{m*}(π^c), resulting in a novel problem in which only the communication policies π^c are to be designed. While in [18] the authors use an approximation of Π^{m*}(π^c) to obtain a task-oriented quantizer design problem, in the current work we derive an exact solution for a simplified version of (3), where the number of agents communicating with the central controller is limited to one. To adapt ABSA to the generic setting of problem (3), ABSA-2 lifts this limitation with the aid of an approximation technique. The JCCD problem can be reformulated as a form of data quantization problem: Lemma 1 identifies the quantization metric that we aim to optimize in this paper.
It reformulates the JCCD problem as a novel generalized data quantization problem.

Lemma 1. The JCCD problem (2) can also be expressed as a generalized data quantization problem as follows:

$$\underset{\pi}{\mathrm{argmin}}\ \mathbb{E}_{p(s(t))}\Big| V^{\pi^*}\big(s(t)\big) - V^{\pi^m}\big(c(t)\big) \Big|, \quad \text{s.t. } |\mathcal{C}| \le 2^R, \qquad (3)$$

where the communication vector $c(t)$ generated by $\pi^c$ is a quantized version of the system's state $s(t)$.

Proof. Appendix A. ■

In contrast to classic data-quantization problems, here the distortion metric measures the difference between two different functions of the original signal and its quantized version, namely $V^{\pi^*}(\cdot)$ and $V^{\pi^m}(\cdot)$ - thus the distortion measure that we aim to optimize by solving (3) is not conventional.
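As an illustration only, the distortion in (3) can be estimated empirically by averaging the value gap over a sampled state distribution. All names below (`v_star`, `v_m`, `quantize`) are hypothetical stand-ins for $V^{\pi^*}$, $V^{\pi^m}$, and $\pi^c$, not notation from the paper:

```python
def quantization_distortion(v_star, v_m, quantize, dist):
    """Empirical version of the objective in (3): the expected absolute
    gap between the optimal value V^{pi*}(s) and the value V^{pi^m}(c)
    achieved from the quantized state c = quantize(s).
    dist: list of (state, probability) pairs approximating p(s(t))."""
    return sum(p * abs(v_star[s] - v_m[quantize(s)]) for s, p in dist)

# toy numbers: two states share one message, so the second state pays a value gap
v_star = {0: 1.0, 1: 2.0}          # optimal values V^{pi*}(s)
v_m = {0: 1.0}                     # values achievable from message c = 0
distortion = quantization_distortion(v_star, v_m, lambda s: 0,
                                     [(0, 0.5), (1, 0.5)])
# distortion = 0.5 * |1 - 1| + 0.5 * |2 - 1| = 0.5
```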
In fact, the variational minimization problem is solved over the vector space of joint quantization policies $\pi^c$ and action policy $\pi^m$ functions.

A. ABSA-1 Algorithm

The applicability of the proposed ABSA-1 is limited to two mathematically equivalent scenarios: (i) we have a single agent communicating with the CC - consider Fig. 2-a, with only one agent connected to the CC - or (ii) the agents communicate with the CC through a relay. In the latter scenario, the relay has full access to the agents' communication observations, i.e., $o_i,\ \forall i \in \mathcal{N}$, while the relay-to-CC channel is bit-budgeted. This limited scenario is useful for us to facilitate our analytical studies on problem (3), allowing us to establish theoretical proof for the losslessness of compression in ABSA-1 as well as its optimal average return performance.
These statements will be confirmed by Lemma 2, the results of which will also be useful to design ABSA-2. The central idea of ABSA-1 is to represent any two states $s^{(i)}, s^{(j)}$ using the same communication message $c$ iff $\pi^*(s^{(i)}) = \pi^*(s^{(j)})$, where $\pi^*(\cdot): \mathcal{S} \to \mathcal{M}^N$ is the optimal control policy of the agents, given access to the observations from all agents. Thus, ABSA-1 and ABSA-2 solve the JCCD problem in three phases: (i) solving the centralized control problem under perfect communications via reinforcement learning, i.e., Q-learning, to find $\pi^*(\cdot)$^7; (ii) solving the task-oriented data quantization problem to find $\pi^c$ via a form of data clustering; (iii) finding the $\pi^m$ corresponding to $\pi^c$. In order to explain ABSA-1, we introduce the problem of task-oriented data compression with centralized control. TBIC is derived using similar techniques to [18] but for a different setting, i.e., the communication network of the MAS has a star topology. The TBIC problem is no longer a joint control and communication problem but a quantization design problem in which the features of the control problem are taken into account. To arrive at the TODC problem from the JCCD problem, we use the functional $\Pi^{m*}$ to replace $\pi^m(\cdot)$ with $\Pi^{m*}(\pi^c)$. Upon the availability of $\Pi^{m*}$, by plugging it into the JCCD problem (2), we obtain a new problem

$$\underset{\pi^c}{\mathrm{argmin}}\ \mathbb{E}_{p(s(t))}\Big| V^{\pi^*}\big(s(t)\big) - V^{\Pi^{m*}(\pi^c)}\big(c(t)\big) \Big|, \quad \text{s.t. } |\mathcal{C}| \le 2^R, \qquad (4)$$

where we maximize the system's return with respect to only the communication policies $\pi^c(\cdot)$ of the local relay. The optimal control policy $\pi^{m*}(\cdot)$ of the CC is automatically computed by the mapping $\Pi^{m*}\big(\pi^c(\cdot)\big)$. This problem is called here the TODC problem.
Upon the availability of $\Pi^{m*}$, the JCCD problem (2) can be reduced to (4). Definition 1 is provided to formalize a precise approach to solve (4) by obtaining the communication policy of the relay $\pi^c(\cdot)$ as well as the corresponding $\Pi^{m*}$, and thereby to solve (2).

Definition 1. Quantization and control policies in ABSA-1: The communication policy $\pi^{c,\mathrm{ABSA-1}}(\cdot)$ designed by ABSA-1 is obtained by solving the following k-median clustering problem

$$\min_{\mathcal{P}} \sum_{i=1}^{|\mathcal{C}|} \sum_{s(t) \in \mathcal{P}_i} \Big| \pi^*\big(s(t)\big) - \mu_i \Big|, \qquad (5)$$

where $\mathcal{P} = \{\mathcal{P}_1, \ldots, \mathcal{P}_B\}$ is a partition of $\mathcal{S}$ and $\mu_i$ is the centroid of each cluster $i$. The communication policy of ABSA-1, $\pi^{c,\mathrm{ABSA-1}}(\cdot)$, is an arbitrary non-injective mapping such that $\forall k \in \{1, \ldots, B\}$: $\pi^{c,\mathrm{ABSA-1}}(s) = c^{(k)}$ if and only if $s \in \mathcal{P}_k$.
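A minimal sketch of the lossless regime of (5), assuming the optimal policy from the preceding Q-learning phase is available as a lookup table: when the message budget $|\mathcal{C}|$ is at least the number of distinct optimal actions, the k-median clustering degenerates to grouping states that share the same $\pi^*(s)$. All names here are illustrative, not from the paper:

```python
from collections import defaultdict

def absa1_messages(pi_star, states, budget):
    """Lossless special case of the k-median problem (5): states sharing
    the same optimal action pi*(s) collapse into one cluster and are
    assigned one common communication message."""
    clusters = defaultdict(list)
    for s in states:
        clusters[pi_star[s]].append(s)          # group states by pi*(s)
    if len(clusters) > budget:
        raise ValueError("lossless regime needs |C| >= #distinct actions")
    # assign one message index c(k) per cluster (a non-injective mapping)
    return {s: k for k, members in enumerate(clusters.values()) for s in members}

# toy example: 6 states, 3 distinct optimal actions, budget |C| = 4
pi_star = {0: "left", 1: "left", 2: "up", 3: "up", 4: "pause", 5: "pause"}
messages = absa1_messages(pi_star, range(6), budget=4)
# states with equal optimal actions share a message, e.g. messages[0] == messages[1]
```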
Now let $C_g$ be a function composition operator such that $C_g f = g \circ f$. We define the operator $\Pi^{m*} \triangleq C_g$, with $g = \pi^*\big((\pi^{c,\mathrm{ABSA-1}})^{-1}(\cdot)\big)$^8. The optimality of the proposed ABSA-1 algorithm is subsequently provided in Lemma 2.

Lemma 2. The communication policy $\pi^{c,\mathrm{ABSA-1}}$ - as described by Definition 1 - will carry out lossless compression of observation data w.r.t. the average return if $|\mathcal{C}| \ge |\mathcal{M}|^N$.

Proof. Appendix B.
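The preimage remark in footnote 8 can be made concrete with a toy sketch (illustrative names only): because every state in a cluster shares the same optimal action, $\Pi^{m*}$ may answer a message with $\pi^*$ evaluated at any arbitrary element of the message's preimage.

```python
def make_control_policy(pi_star, message_of):
    """Sketch of Pi^{m*} = C_g with g = pi^*((pi^{c,ABSA-1})^{-1}(.)):
    pick an arbitrary preimage state per message and answer with its
    optimal action. Well-defined whenever each cluster is
    action-homogeneous, as in ABSA-1."""
    representative = {}
    for s, c in message_of.items():
        representative.setdefault(c, s)        # any preimage element works
    return lambda c: pi_star[representative[c]]

# toy data: states 0,1 share message 0; states 2,3 share message 1
pi_star = {0: "up", 1: "up", 2: "left", 3: "left"}
message_of = {0: 0, 1: 0, 2: 1, 3: 1}
pi_m = make_control_policy(pi_star, message_of)
# pi_m(0) == "up", pi_m(1) == "left"
```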
■

Remark: ABSA-1 will also carry out lossless compression of observation data with respect to the distortion measure introduced in problem (3). Given the proofs of Lemma 2 and Lemma 1, the proof of this remark is straightforward and is therefore omitted. The losslessness of quantization in ABSA-1 implies that $\pi^{\mathrm{ABSA-1}}$ will result in no loss of the system's average return, compared with the case where the optimal policy $\pi^*(\cdot)$ is used to control the MAS under perfect communications. Consequently, the control policy $\pi^{m,\mathrm{ABSA-1}}(\cdot)$ is optimal.

^7 ABSA's bottleneck arises from the increasing complexity of Q-learning as the number of agents $N$ increases. Similar limitations are in place for any other algorithm that requires a centralized training phase [7], [30].

^8 Note that as $\pi^{c,\mathrm{ABSA-1}}(\cdot)$ is non-injective, its inverse would not produce a unique output given any input. Thus, by $\pi^*\big((\pi^{c,\mathrm{ABSA-1}})^{-1}(c')\big)$ we mean $\pi^*(s')$, where $s'$ can be any arbitrary output of $(\pi^{c,\mathrm{ABSA-1}})^{-1}(c')$.
Let us recall once again that here we do not use a conventional quantization distortion metric; we select a representation of the local observation in such a way that the conveyed message maximizes the average task return. Note that in [7], the authors do not find the higher-order function $\Pi^{m*}$ that reduces the joint communications and control problem to a task-oriented communication design - instead they solve an approximated version of the task-oriented communication design problem. In this paper, however, we introduce a closed-form $\Pi^{m*}$ by ABSA-1 that can map every communication policy $\pi^{c,\mathrm{ABSA-1}}$ introduced by ABSA-1 to the exact optimal control policy. This implies that the solutions provided by ABSA-1 are also the optimal solutions of the joint communication and control design (JCCD) problem.

B. ABSA-2 Algorithm

We saw earlier in Lemma 2 that the communication policy obtained by solving problem (5) is optimal and can result in a lossless average return performance when $|\mathcal{C}| \ge |\mathcal{M}|^N$. To solve problem (5), however, we need to know $\pi^*(s(t))$.
This is a limiting assumption that in ABSA-1 can be translated to two different system models which are less general than the system pictured in Fig. 3: (i) the presence of an extra relay between the agents and the central controller, where the relay has perfect downlink channels to the agents and a single bit-budgeted channel to the CC; (ii) the MAS is composed of only one single agent and a CC, where the uplink of the agent is bit-budgeted but its downlink is a perfect channel. Our second proposed algorithm, ABSA-2, removes the need to know $\pi^*(s(t))$ and can run under the more general settings shown in Fig. 3. This is done by approximating the local element $m_i^*(t)$ of $\pi^*(s(t)) = \langle m_1^*(t), \ldots, m_N^*(t) \rangle$ at agent $i$ given the local observation $o_i(t)$ of this agent.
That is, given a centralized training phase, we will have access to the empirical joint distribution $p(o_i, m_i^*)$, using which we can obtain a numerical MAP estimator $\hat{m}_i^*$. Thus ABSA-2 allows for fully distributed communication policies. In particular, the encoding of the communication messages of each agent is carried out separately by the agent before it communicates with the CC or any other agent. This form of encoding is often referred to as distributed encoding. Furthermore, the encoding carried out by ABSA-2 at each agent is a low-complexity and low-power process that requires no inter-agent communications beforehand. In this case, each agent directly communicates its encoded observations to the CC via a bit-budgeted communication channel. In order to improve the learning efficiency at the CC, it can take into account all the communications received in the time frame $[t - d, t]$ to make a control decision $m(t)$.
Therefore, the ABSA-2 algorithm can strike a trade-off between the complexity of the computations carried out at the CC - directly impacted by the value of $d$ - and the effectiveness of the agents' communications, inversely impacted by the value of $|\mathcal{C}|$. Moreover, ABSA-2 is straightforwardly extendable to different values of $|\mathcal{C}|$ per agent $i$, instead of having only one fixed bit-budget $R = \log_2 |\mathcal{C}|$ for all agents.

Figure 4. Abstract representation of states in ABSA-2 with $|\mathcal{C}| = 3$ and $|\mathcal{M}| = 5$ - $|\mathcal{M}|$ is represented by the number of shapes selected to show the observation points and $|\mathcal{C}|$ is represented by the number of clusters shown in the right subplot. The left subplot shows the observation points prior to aggregation. During a centralized training phase we first compute $\pi^*(\cdot)$, from which $\pi_i^*(\cdot): \Omega \to \mathcal{M}$ can be obtained. We use the surjection $\pi_i^*(\cdot)$ to map a high dimensional/precision observation space to a low dimensional/precision space. The middle subplot shows the observation points together with the action values assigned to them - each unique shape represents a unique action value. This new representation of the observation points embeds the features of the control problem into the data quantization problem. Finally, we carry out the clustering of observation points according to their action values - all observation points assigned to (a set of) action values are clustered together. The right subplot shows the aggregated observation space, where all the observation points in each cluster will be represented using the same communication message. The centralized controller, which is run using DQN, observes the environment at each time step through all the aggregated observations/communications it receives from the agents.

As illustrated in Fig. 4, in ABSA-2 each agent $i$ obtains a communication policy function $\pi_i^c(\cdot)$ by solving a clustering problem over its local observation space instead of the global state space, formulated as follows:

$$\min_{\mathcal{P}_i} \sum_{j=1}^{|\mathcal{C}|} \sum_{o_i(t) \in \mathcal{P}_{i,j}} \Big| \tilde{\pi}_i^*\big(o_i(t)\big) - \mu_{i,j} \Big|, \qquad (6)$$

where $\mathcal{P}_i = \{\mathcal{P}_{i,1}, \ldots, \mathcal{P}_{i,|\mathcal{C}|}\}$ is a partition of $\Omega$, and

$$\tilde{\pi}_i^*\big(o_i(t)\big) = \underset{m_i^*}{\mathrm{argmax}}\ p_{\pi^*}\big(m_i^* \mid o_i(t)\big), \qquad (7)$$

and $m_i^*$ is the optimal action of agent $i$, which is the $i$-th element of $m^* \triangleq \pi^*\big(o_1(t), \ldots, o_N(t)\big)$. Thus $\tilde{\pi}_i^*(o_i(t))$ is the maximum a posteriori estimator of $m_i^* = \pi^*(s(t))$ given the local observation $o_i(t)$. Once the clustering in (6) is done, each agent $i$ will train its local communication policy $\pi_i^{c,\mathrm{ABSA-2}}(\cdot)$, which is any non-injective mapping such that $\forall k \in \{1, \ldots, |\mathcal{C}|\}$: $\pi_i^{c,\mathrm{ABSA-2}}(o_i) = c^{(k)}$ iff $o_i \in \mathcal{P}_{i,k}$.
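A minimal sketch of the mode/MAP estimate in (7), assuming the centralized training phase has logged (local observation, optimal local action) pairs. The helper name and toy data are illustrative:

```python
from collections import Counter, defaultdict

def mode_policy(samples):
    """Empirical mode estimate behind (7): from (o_i, m_i^*) pairs
    collected during centralized training, return a lookup table
    o -> argmax_m p(m | o)."""
    counts = defaultdict(Counter)
    for o, m in samples:
        counts[o][m] += 1
    return {o: cnt.most_common(1)[0][0] for o, cnt in counts.items()}

# toy log of (local observation, optimal local action) pairs
samples = [(0, "up"), (0, "up"), (0, "left"), (1, "down"), (1, "down")]
pi_tilde = mode_policy(samples)
# pi_tilde == {0: "up", 1: "down"}
```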
After obtaining the communication policies $\langle \pi_i^{c,\mathrm{ABSA-2}}(\cdot) \rangle_{i=1}^N$, to obtain a proper control policy $\pi^m(\cdot)$ at the CC corresponding to the communication policies, we perform single-agent reinforcement learning. To this end, and to manage the complexity of the algorithm for larger values of $d$, we propose to use the DQN architecture [41] at the CC.

Algorithm 1. Action Based State Aggregation (ABSA-2)
1: Initialize replay memory D to capacity 10,000.
2: Initialize state-action value function Q(·) with random weights θ.
3: Initialize target state-action value function Qt(·) with weights θt = θ.
4: Obtain π∗(·) and Q∗(·) by solving (2) using Q-learning [40]*, where R ≫ H(oi(t)) ∀i ∈ N.
5: Compute π∗_i(oi(t)) = Mode(m∗_i | oi(t)), for ∀oi(t) ∈ Ω, for i ∈ N.
6: Solve problem (5) by applying k-median clustering to obtain Pi and πc_i(·), for i ∈ N.
7: for each episode k = 1 : 200,000 do
8:   Randomly initialize observation oi(t = 0), for i ∈ N
9:   Randomly initialize the message c(t = 0)
10:  for t = 1 : T′ do
11:    Select ci(t), at agent i, following πc_i(·), for i ∈ N
12:    Obtain the message ⟨c1(t), ..., cN(t)⟩ at the CC
13:    Follow ϵ-greedy, at the CC, to generate the action mi(t), for i ∈ N
14:    Obtain reward r(t) = R(s(t), m(t)) at the CC
15:    Store the transition (c(t), m(t), r(t), c(t + 1)) in D
16:    t ← t + 1
17:  end
18:  Sample D′ = {(c(t′), m(t′), r(t′), c(t′ + 1))} for t′ = t′_1, ..., t′_62 from D
19:  for each transition t′ = t′_1 : t′_62 of the mini-batch D′ do
20:    Compute the DQN loss L_{t′}(θ) = ½ ( r(t′) + max_{m∗} Qt(c(t′ + 1), m∗, θt) − max_{m∗} Q(c(t′), m∗, θ) )²
21:    Perform a gradient descent step on L_{t′}(θ) w.r.t. θ
22:  end
23:  Update the target network Qt(·) every 1000 steps
24: end

IV. PERFORMANCE EVALUATION

In this section, we evaluate our proposed schemes via numerical results for the popular multi-agent geometric consensus problem^9. Through indirect design, ABSA-1 and ABSA-2 never rely on explicit domain knowledge about any specific task, such as geometric consensus. Thus, we conjecture that their indirect design allows them to be applied beyond geometric consensus problems and to a much wider range of tasks. To make the geometric consensus task suitable for the evaluation of our proposed algorithms, similar to [18], we have introduced a bit constraint to the communication channel between the agents and the CC. After evaluating the proposed algorithms in the context of the rendezvous problem, we attempt to explain the behaviour of all the algorithms via the existing metric - positive listening - for measuring the task-effectiveness of communications. As positive listening falls short in explaining all aspects of the behaviour of the investigated algorithms, we will also introduce a new metric.
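The loss in step 20 of Algorithm 1 can be sketched numerically for a tabular toy case. The arrays below are illustrative stand-ins for Q and the target network; we follow the undiscounted form exactly as Algorithm 1 writes it:

```python
import numpy as np

def dqn_loss(q, q_target, transition):
    """TD loss of step 20, Algorithm 1, for tabular arrays q[c, m] and
    q_target[c, m] indexed by (message, action)."""
    c, m, r, c_next = transition
    return 0.5 * (r + q_target[c_next].max() - q[c].max()) ** 2

q = np.array([[1.0, 2.0],
              [0.0, 0.0]])        # Q(c, m; theta)
q_t = np.array([[0.0, 3.0],
                [1.0, 1.0]])      # target network Qt(c, m; theta_t)
loss = dqn_loss(q, q_t, (0, 1, 2.0, 1))
# 0.5 * (2.0 + max(q_t[1]) - max(q[0]))^2 = 0.5 * (2 + 1 - 2)^2 = 0.5
```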
Called task relevant information, the new metric helps to further explain the behaviour of the different algorithms with higher accuracy and reliability.

A. The geometric consensus problem

Our proposed schemes are evaluated in this section through numerical results for the rendezvous problem [42], [43], a specific type of geometric consensus problem under finite observability [28]. Following the instantaneous and synchronous communication model and the star network topology explained in Section II-A and Fig. 2, respectively, the rendezvous problem proceeds as follows. At each time step t, several events happen in the following order. First, an agent i obtains a local observation oi(t), which is equivalent to its own location in the grid-world.
The agent i subsequently follows its quantization/communication policy to generate a compressed version ci(t) of its observation, which is communicated to the CC via a bit-budgeted communication link. After receiving the quantized observations of all agents, the CC follows its control policy to select the joint action vector m(t) and communicates each agent i's local action mi(t) back to it. The local action mi(t) ∈ M, communicated back to agent i via a perfect communication channel, is a one-directional move in the grid-world, i.e., M = {left, right, up, down, pause}. Given each agent i's action mi(t), the environment evolves and transitions to the next time step t + 1, where each agent i obtains a new local observation oi(t + 1). All agents receive a single team reward

r(t) =
    C1, if ∃ i, j ∈ N : oi(t) ∈ ΩT and oj(t) ∉ ΩT,
    C2, if ∄ i ∈ N : oi(t) ∈ Ω − ΩT,
    0,  otherwise,                                  (8)

where C1 < C2 and ΩT is the set of terminal observations, i.e., the episode terminates if ∃ i ∈ N : oi(t) ∈ ΩT. Accordingly, when not all agents arrive at the target point, the smaller reward C1 = 1 is obtained, while the larger reward C2 = 10 is attained when all agents visit the goal point at the same time.⁹

⁹ In our numerical experiments, the discount factor is assumed to be γ = 0.9.

All experiments are done over a grid-world of size 8×8, where the goal point of the rendezvous is located at grid number ΩT = {22}. We compare our proposed ABSA algorithms with the heuristic non-communicative (HNC), heuristic optimal communication (HOC), and SAIC algorithms proposed in [18], which are direct schemes that jointly design the communication and control policies for the specific geometric consensus problem solved here. In contrast to ABSA-1 and ABSA-2, which enjoy an indirect design, the direct design of HOC and HNC does not allow them to be applied to any problem other than the specific geometric consensus problem with finite observability, i.e., the rendezvous problem explained here.

B. Numerical experiment

A constant learning rate α = 0.07 is applied when exact Q-learning is used to obtain π∗(·), and α = 0.0007 when a DQN is used to learn πm(·) for ABSA-2. For the exact Q-learning, a UCB¹⁰ exploration rate of c = 1.25 is considered. The deep neural network that approximates the Q-values is a fully connected feed-forward network with 10 layers of depth, optimized using the Adam optimizer. An experience replay buffer of size 10,000 is used with a mini-batch size of 62.
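The team reward in (8) can be sketched as follows. C1 = 1, C2 = 10, and the goal cell ΩT = {22} follow the text; the cell numbering of the 8×8 grid and the sample observations are illustrative assumptions.

```python
# Sketch of the team reward r(t) in eq. (8): C2 when every agent occupies a
# terminal cell, C1 when some (but not all) do, and 0 otherwise.
C1, C2 = 1, 10
OMEGA_T = {22}   # goal cell on the 8x8 grid (cells numbered 0..63, an assumption)

def team_reward(observations):
    """observations: list of agent grid cells o_i(t)."""
    at_goal = [o in OMEGA_T for o in observations]
    if all(at_goal):        # no agent outside Omega_T -> large reward
        return C2
    if any(at_goal):        # some, but not all, agents at the goal
        return C1
    return 0

print(team_reward([22, 22, 22]))   # all three agents at the goal -> 10
print(team_reward([22, 5, 40]))    # only one agent at the goal -> 1
print(team_reward([5, 40, 63]))    # nobody at the goal -> 0
```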
The target Q-network is updated every 1000 steps, and for exploration, a decaying ϵ-greedy scheme with initial ϵ = 0.05 and final ϵ = 0.005 is used [41]. In any figure where the performance of each scheme is reported in terms of the averaged discounted cumulative reward, the attained rewards throughout the training iterations are smoothed using a moving average filter with a memory of 20,000 iterations. As explained in Section III-A, ABSA-1 and ABSA-2 both require a centralized training phase before they can be executed in a distributed fashion. For all black curves, one prior centralized training phase to obtain π∗(·) is required. As detailed in Section III, the proposed algorithms, ABSA-1 and ABSA-2, leverage π∗(·) to design πc and then πm afterwards.
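A minimal sketch of the exploration schedule and the reward smoothing described above. The ϵ endpoints and the 20,000-iteration window follow the text; the linear decay shape and the horizon are assumptions.

```python
import random

EPS_INIT, EPS_FINAL = 0.05, 0.005   # endpoints from the text
WINDOW = 20_000                     # moving-average memory from the text

def epsilon(t, t_max):
    """Linearly decaying epsilon (the decay shape is an assumption)."""
    frac = min(t / t_max, 1.0)
    return EPS_INIT + frac * (EPS_FINAL - EPS_INIT)

def select_action(q_row, t, t_max, rng=random):
    """Decaying eps-greedy choice over one row of Q-values."""
    if rng.random() < epsilon(t, t_max):
        return rng.randrange(len(q_row))
    return max(range(len(q_row)), key=lambda a: q_row[a])

def moving_average(xs, window=WINDOW):
    """Smooth a reward trace with a trailing moving-average filter."""
    out, s = [], 0.0
    for i, x in enumerate(xs):
        s += x
        if i >= window:
            s -= xs[i - window]
        out.append(s / min(i + 1, window))
    return out

print(round(epsilon(0, 100_000), 3))         # 0.05 at the start
print(round(epsilon(100_000, 100_000), 3))   # 0.005 at the end
```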
Dashed curves, HOC and HNC, as proposed by [18], provide heuristic schemes that exploit their designer's domain knowledge about the rendezvous task, making them inapplicable to any task other than the rendezvous problem. While HOC enjoys a joint control and communication design, HNC runs with no communication. Note that HNC and HOC require communication/coordination between the agents prior to the starting point of the task, which is not required for any other scheme. These schemes, introduced by [18], are detailed as follows.

• HNC: A joint communication and control policy is designed using domain knowledge of the rendezvous problem. HNC agents approach the goal point and wait nearby for a sufficient number of time steps to ensure that the other agents have also arrived. Only after that do they move to the goal point. Note that this scheme requires communication/coordination between the agents prior to the starting point of the task, since they have to have agreed upon this scheme of coordination.

• HOC: A joint communication and control policy is designed using domain knowledge of the rendezvous problem. HOC agents wait next to the goal point until the other agents inform them that they have also arrived. Only after that do they move to the goal point. Note that this scheme requires communication/coordination between the agents prior to the starting point of the task, since they have to have agreed upon this scheme of coordination and communication, as well as on the meaning that each communication message entails.

¹⁰ UCB is a standard scheme used in exact reinforcement learning to strike a trade-off between exploration and exploitation [40].

Figure 5. Average return comparison between the proposed schemes and some benchmarks introduced in [18]; the three-agent scenario under constant bit-budget values. (Axes: Training Iterations, ×10⁴, vs. Average Return.)

To obtain the results demonstrated in Fig. 5, we simulated the rendezvous problem for a three-agent system. The black curves illustrate the training phase occurring at the CC to obtain πm after πc has already been computed using equations (5) and (6).
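The UCB exploration mentioned in footnote 10, with c = 1.25 as in the text, can be sketched as below; the Q-values and visit counts are illustrative assumptions.

```python
import math

C_UCB = 1.25   # exploration rate from the text

def ucb_action(q_row, counts, t):
    """Pick argmax of Q(s,a) + c * sqrt(ln t / n(s,a)); untried actions first."""
    scores = []
    for q, n in zip(q_row, counts):
        if n == 0:
            return counts.index(0)   # try each action at least once
        scores.append(q + C_UCB * math.sqrt(math.log(t) / n))
    return scores.index(max(scores))

# Illustrative: action 1 has the best value, but rarely tried action 2
# receives a large exploration bonus and is selected instead.
print(ucb_action([0.2, 0.9, 0.5], [50, 50, 1], t=102))   # -> 2
```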
We observe the lossless performance of ABSA-1 in achieving the optimal average return without requiring any second round of training. To enable fully decentralized quantization of the observation process, ABSA-2 was proposed, which is seen to approach the optimal solution as d grows. All ABSA-2 curves are plotted with |C| = 3, and the ABSA-1 curve is plotted with |C| = |M|^N = 125 in the three-agent scenario (Fig. 5) and |C| = |M|^N = 25 in the two-agent scenario (Fig. 6). In Fig. 5, we see how the performance of ABSA-2 compares with HNC, HOC, and SAIC at different rates of quantization. As expected, with the increase in the size of the quantization codebook, the average return performance of ABSA-2 gradually improves, such that it approaches near-optimal performance at d = 3.
We also observe the superior performance of ABSA-2 compared with SAIC at very tight bit-budgets, where SAIC's performance sees a drastic drop. It is observed that as d grows, ABSA-2 approaches the optimal return performance even under higher rates of quantization; however, higher values of d come at the cost of increased computational complexity for ABSA-2.

C. Explainability of the learned communication policies

One common metric to evaluate the effectiveness of communications in the literature [37] is positive listening, I(ci(t); mj(t)), j ∈ N − {i}, which is the mutual information between the communication ci(t) produced by an agent i and the action mj(t) selected by another agent j following the receipt of the communication ci(t) from agent i.

Figure 6. The obtained normalized average return as a function of codebook size |C|, compared across a range of schemes: the proposed schemes and some benchmarks introduced in [18]; two-agent scenario. (Axes: Size of the quantization codebook vs. Normalized Average Return.)

Positive signaling, I(oi(t); ci(t)), is another metric proposed by [37], measuring the mutual information between agent i's observation oi(t) and its own produced communication message ci(t) at the same time step. As will be shown below, however, these metrics are unable to fully capture the underlying performance trends of all schemes. Therefore, we, for the first time, introduce a new metric called task relevant information (TRI), allowing us to explain the task-effectiveness of the learned communication policies.
Measuring positive listening is one way to quantify the contribution of the communicated messages of agent i to the action selection of agent j. Positive signalling, on the other hand, measures the consistency as well as the relevance of the communicated messages ci(t) to the agent's observations oi(t). As SAIC and ABSA use a deterministic mapping of the observation oi to produce the communication message ci, they are always guaranteed to have positive signalling [37], the degree of which, however, is limited by the uplink channel's bit budget R = log2 |C|. Thus, among the existing metrics for measuring the effectiveness of communications, we limit our numerical studies to the measurement of positive listening. It is known that the higher the positive listening, the stronger (though not necessarily better) we expect the coordination between the agents to be. That is, higher positive listening means a higher degree of dependence between agents (their actions and observations), which is not necessarily sufficient for the team of agents to fulfill the task.
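Positive listening can be estimated from logged rollouts with a plug-in estimate of the mutual information between messages and actions. The toy joint distribution below is an illustrative assumption, not the paper's estimator.

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Plug-in estimate of I(C; M) in bits from observed (c, m) pairs."""
    n = len(pairs)
    joint = Counter(pairs)
    pc = Counter(c for c, _ in pairs)
    pm = Counter(m for _, m in pairs)
    mi = 0.0
    for (c, m), k in joint.items():
        p_cm = k / n
        mi += p_cm * math.log2(p_cm / ((pc[c] / n) * (pm[m] / n)))
    return mi

# Toy rollout: the action copies the message, so I(C; M) = 1 bit for a
# uniform binary message; an independent pair would give ~0 bits.
pairs = [(0, 0), (1, 1)] * 500
print(round(mutual_information(pairs), 3))   # -> 1.0
```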
Figure 7 shows how stronger coordination between the agents and the CC often results in increased performance of the MAS, i.e., a higher average return. For instance, the enhancement in the positive-listening performance of SAIC from the |C| = 3 quantizer to the |C| = 4 quantizer in Fig. 7 results in an improved average return, as shown in Fig. 6. This metric also reasonably explains the enhancement of ABSA-2's performance in obtaining a higher return by increasing d, the memory of the CC, and the size of the quantization codebook |C|.

Figure 7. Comparing the positive listening I(ci(t); mj(t)) performance across a range of schemes. (Axes: Size of the quantization codebook vs. Positive Listening, bits.)

Moreover, stronger coordination between the agents and the CC is visible in ABSA-2 when compared with HOC. Thus, we would expect a better average return for ABSA-2, which is in contrast to the results of Fig. 5.
This suggests that stronger coordination, as measured by positive listening, does not necessarily result in improved average return performance, as the coordination may not be perfectly aligned with the task's needs. The curve concerning the HOC scheme allows us to recall that a positive listening of 0.3 bits is sufficient to maintain the coordination required for optimal performance in the aforementioned geometric consensus task. Therefore, in the ABSA-2 and SAIC schemes, there is still an unnecessary influence from the communication messages on the actions selected by the receiving end. In fact, not all the information received by the receiving end has contributed to the higher average return of the system. Accordingly, there is still some unnecessary data in the communication messages designed by ABSA that contains no task-specific/useful information.
Thus, we believe that positive listening cannot explicitly quantify the effectiveness of task-oriented communication algorithms; it therefore falls short in explaining the behaviour of these algorithms. Even when positive listening is computed as I(ci(t); m(t)), to capture the mutual information between the communication of agent i and the control signals of all agents, we arrive at almost similar patterns (Fig. 8). Figure 9 investigates the performance of multiple schemes via a novel performance metric: task relevant information (TRI).
Here we define the task relevant information metric to be

I(πc(oi(t)); π∗(s(t))) = I(ci(t); m∗(t)),    (9)

which measures the mutual information (in bits) between the communicated message of agent i and the vector m∗(t) of joint optimal actions at the CC, which is selected by the optimal centralized control policy π∗(·). As demonstrated by Fig. 9, TRI is an indirect metric of the effectiveness of communications that can explain the behaviour of different communication designs.

Figure 8. Comparing the positive listening I(ci(t); m(t)) performance across a range of schemes. (Axes: Size of the quantization codebook vs. Positive Listening, bits.)

It is also observed that the TRI metric magnifies the performance gap between different schemes as they get closer to the optimal performance. Nevertheless, TRI can be utilized as a standalone measure to quantify the effectiveness of a communication design, since it almost perfectly predicts the average return performance of a communication policy, without the need for the communication to be tested when solving the real task. Note that we measure the task-effectiveness of a quantization algorithm based on the average return that can be obtained when using it.
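Equation (9) can be estimated from samples with a plug-in mutual-information estimate between the emitted codeword ci(t) and the optimal joint action m∗(t). The quantizer, "optimal" policy, and observation distribution below are illustrative assumptions.

```python
import math
from collections import Counter

def mi_bits(pairs):
    """Plug-in mutual information (bits) from observed pairs."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(a for a, _ in pairs)
    py = Counter(b for _, b in pairs)
    return sum((k / n) * math.log2((k / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), k in joint.items())

# Illustrative setup: observations 0..7, a 2-codeword quantizer pi_c, and an
# assumed optimal policy pi_star mapping the observation to a joint action
# over two agents.
quantize = lambda o: o % 2                  # pi_c: keeps one bit of o_i(t)
pi_star = lambda o: (o % 2, (o + 1) % 2)    # pi_star: joint action m*(t)

samples = [(quantize(o), pi_star(o)) for o in range(8) for _ in range(100)]
print(round(mi_bits(samples), 3))           # TRI of this quantizer, in bits
```

Because m∗(t) here is fully determined by the transmitted bit, this toy quantizer attains a TRI of one bit; a quantizer that discarded the task-relevant bit would score near zero.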
Further, to measure the average return that can be obtained under the communication policies ⟨π^c_1(·), ..., π^c_N(·)⟩, we have to design the control policy π^m(·) at the CC that selects the control vector m(t) having access to only the quantized observations c(t) of the agents. Accordingly, we cannot measure the effectiveness of the communication policy of an MAS without having a specific design for its control policy. Even after the design of the control policy of the MAS, it is challenging to understand whether the suboptimal performance of the algorithm is caused by an ineffective design of the control policy or of the communication policy. In fact, it is hard to disentangle the effect of the control and communication policies on the MAS's average return. Our proposed metric TRI can facilitate measuring the performance of any communication policy in isolation, without the effect of the control policy being present in the numerical values of TRI.
Accordingly, the importance of introducing this metric is multi-fold: (i) by using TRI as an indirect metric we can measure the effectiveness of a communication policy for any specific task; (ii) it allows us to measure the effectiveness of the communication scheme prior to the design of any control policy; (iii) it helps to design task-effective communication policies in complete separation from the control policy design.
Figure 9. Comparing the task relevant information (TRI) performance across a range of schemes (x-axis: number of communication symbols; curves: ABSA-2 with d = 1, 2, 3, SAIC with d = 1, and HOC with d = 1). It is observed that TRI can comprehensively explain the behaviour of all task-effective quantization schemes in a certain task without the need to measure their effectiveness via their resulting average return in the task; compare this figure with Fig. 6.
V. CONCLUSION
In this paper, we have investigated the joint design of control and communications in an MAS under centralized control and distributed communication policies. We first proposed an action-based state aggregation algorithm (ABSA-1) for lossless compression and provided analytical proof of its optimality. Then we proposed ABSA-2, which offers a fully distributed communication policy and can trade computational complexity for communication efficiency.
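The aggregation idea behind ABSA-1 (formalized in Appendix B) is that states sharing the same optimal joint action can share a codeword without any loss in return. A minimal toy sketch, assuming a small finite state space and a made-up tabular optimal policy (all names and values below are hypothetical):

```python
def absa1_quantizer(pi_star):
    """Partition states by their optimal action; one codeword per cell."""
    cells = {}    # optimal action -> codeword index
    encode = {}   # state -> codeword
    for s, a in pi_star.items():
        cells.setdefault(a, len(cells))
        encode[s] = cells[a]
    decode = {}   # codeword -> a representative state of that cell
    for s, cw in encode.items():
        decode.setdefault(cw, s)
    return encode, decode

# Toy tabular optimal policy over four states.
pi_star = {0: 'left', 1: 'left', 2: 'right', 3: 'right'}
enc, dec = absa1_quantizer(pi_star)
# States with the same optimal action share a codeword (1 bit suffices
# here), and decoding any representative preserves the optimal action.
assert enc[0] == enc[1] and enc[2] == enc[3] and enc[0] != enc[2]
assert all(pi_star[dec[enc[s]]] == pi_star[s] for s in pi_star)
```

This mirrors the argument in the proof of Lemma 2: any state s' decoded from the same cell as s(t) satisfies π^*(s') = π^*(s(t)), so the aggregation is lossless for control purposes.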
We finally demonstrated the task-effectiveness of the proposed algorithms via numerical experiments performed on a geometric consensus problem, using a number of representative metrics. Furthermore, our numerical studies demonstrate the pressing need for further research on finding a metric that can measure and explain the task-effectiveness of communications with more accuracy. Scalability in task-oriented design is yet another central challenge to be addressed in future research.
APPENDIX A
PROOF OF LEMMA 1
Proof. Applying Adam's law to equation (2) yields
\[
\operatorname*{argmax}_{\pi}\; \mathbb{E}_{p(c(t))}\Big[\mathbb{E}_{p_{\pi^c,\pi^m}(\{tr\}_{t'}^{T'}\mid c(t))}\big[g(t')\mid c(t)\big]\Big], \quad \text{s.t. } |\mathcal{C}| \le 2^R, \tag{10}
\]
where c(t) is generated by the communication policy π^c and the joint pmf of the system's trajectory {tr}_{t'}^{T'} is directly influenced by the action policy π^m.
The conditional pmf p_{π^c,π^m}({tr}_{t'}^{T'} | c(t)) is the joint probability of the trajectory of the system given the received communication c(t) when policies π^c(·) and π^m(·) are followed. We proceed by negating equation (10) and adding to the objective function a second term that is constant with respect to the decision variables of the problem, to obtain
\[
\operatorname*{argmin}_{\pi^c}\; \mathbb{E}_{p(s(t))}\Big[\mathbb{E}_{p_{\pi^*}(\{tr\}_{t'}^{T'}\mid s(t))}\big[g(t')\mid s(t)\big]\Big] - \mathbb{E}_{p(c(t))}\Big[\mathbb{E}_{p_{\pi^c,\pi^m}(\{tr\}_{t'}^{T'}\mid c(t))}\big[g(t')\mid c(t)\big]\Big], \quad \text{s.t. } |\mathcal{C}| \le 2^R. \tag{11}
\]
We replace the conditional expectation of the system return by the value function V(·) [40, Ch. 3.5], which gives
\[
\operatorname*{argmin}_{\pi^c}\; \mathbb{E}_{p(s(t))}\big[V^{\pi^*}(s(t))\big] - \mathbb{E}_{p(c(t))}\big[V^{\pi^m}(c(t))\big], \quad \text{s.t. } |\mathcal{C}| \le 2^R. \tag{12}
\]
Note that the empirical joint distribution of c(t) can be obtained by following the communication policy π^c on the empirical distribution of s(t):
\[
\operatorname*{argmin}_{\pi^c}\; \mathbb{E}_{p(s(t))}\big[V^{\pi^*}(s(t))\big] - \mathbb{E}_{p(s(t))}\big[V^{\pi^m}(c(t))\big], \quad \text{s.t. } |\mathcal{C}| \le 2^R. \tag{13}
\]
As V^{π^*}(s(t)) − V^{π^m}(c(t)) ≥ 0 holds for any s(t) ∈ S, merging the two expectations results in
\[
\operatorname*{argmin}_{\pi^c}\; \mathbb{E}_{p(s(t))}\Big[\big|V^{\pi^*}(s(t)) - V^{\pi^m}(c(t))\big|\Big], \quad \text{s.t. } |\mathcal{C}| \le 2^R, \tag{14}
\]
which concludes the proof of the lemma. ■
APPENDIX B
PROOF OF LEMMA 2
Proof. We depart from the result of Lemma 1, problem (3).
By taking the expectation over the empirical distribution of s(t) and applying the Bellman optimality equation, we obtain
\[
\operatorname*{argmin}_{\pi}\; \frac{1}{n}\sum_{t=1}^{n}\Big|Q^{\pi^*}\big(s(t), \pi^*(s(t))\big) - Q^{\pi^m}\big(c(t), \pi^m(\pi^c(s(t)))\big)\Big|, \quad \text{s.t. } |\mathcal{C}| \le 2^R, \tag{15}
\]
where the vector π^c(s(t)) is of N dimensions and its i-th element is c_i(t). We proceed by plugging π^{c,ABSA-1}(·) and Π^{m*}, according to Definition 1, into equation (15) to obtain
\[
\frac{1}{n}\sum_{t=1}^{n}\Big|Q^{\pi^*}\big(s(t), \pi^*(s(t))\big) - Q^{\pi^*}\big(c(t), \pi^*(s')\big)\Big|, \tag{16}
\]
where s' = (π^{c,ABSA-1})^{-1}(π^{c,ABSA-1}(s(t))), and any possible value for it lies in the same subset P_{k'} as s(t) does, while according to the definition of P_{k'} we know that π^*(s(t)) = π^*(s') if |C| ≥ |M|^N. Thus, by replacing π^*(s') with π^*(s(t)) in equation (16) we get
\[
\frac{1}{n}\sum_{t=1}^{n}\Big|Q^{\pi^*}\big(s(t), \pi^*(s(t))\big) - Q^{\pi^*}\big(s(t), \pi^*(s(t))\big)\Big| = 0. \tag{17}
\]
This concludes the proof of Lemma 2. ■
REFERENCES
[1] L. S. Vailshery, "Number of internet of things (IoT) connected devices worldwide from 2019 to 2021, with forecasts from 2022 to 2030," Aug. 2022. [Online]. Available: https://www.statista.com/statistics/1183457/iot-connected-devices-worldwide/
[2] B. Güler, A. Yener, and A. Swami, "The semantic communication game," IEEE Transactions on Cognitive Communications and Networking, vol. 4, no. 4, pp. 787–802, 2018.
[3] H. Tong, Z. Yang, S. Wang, Y. Hu, W. Saad, and C. Yin, "Federated learning based audio semantic communication over wireless networks," in 2021 IEEE Global Communications Conference (GLOBECOM), 2021, pp. 1–6.
[4] N. Pappas and M. Kountouris, "Goal-oriented communication for real-time tracking in autonomous systems," in 2021 IEEE International Conference on Autonomous Systems (ICAS), 2021, pp. 1–5.
[5] E. Calvanese Strinati and S. Barbarossa, "6G networks: Beyond Shannon towards semantic and goal-oriented communications," Computer Networks, vol. 190, p. 107930, 2021.
[6] A. Mostaani, T. X. Vu, S. K. Sharma, Q. Liao, and S. Chatzinotas, "Task-oriented communication system design in cyber-physical systems: A survey on theory and applications," arXiv preprint arXiv:2102.07166, 2021.
[7] J. Foerster, Y. Assael, N. de Freitas, and S. Whiteson, "Learning to communicate with deep multi-agent reinforcement learning," in Proc. Advances in Neural Information Processing Systems, Barcelona, 2016.
[8] C. E. Shannon and W. Weaver, "The mathematical theory of communication [1949]. Urbana, IL," 1959.
[9] L. Hu, G. Wu, Y. Xing, and F. Wang, "Things2Vec: Semantic modeling in the internet of things with graph representation learning," IEEE Internet of Things Journal, vol. 7, no. 3, pp. 1939–1948, 2020.
[10] J. Cai, W. Zhong, and J. Luo, "Seminer: Side-information-based semantics miner for proprietary industrial control protocols," IEEE Internet of Things Journal, vol. 9, no. 22, pp. 22796–22810, 2022.
[11] T.-Y. Tung, S. Kobus, J. P. Roig, and D. Gündüz, "Effective communications: A joint learning and communication framework for multi-agent reinforcement learning over noisy channels," IEEE Journal on Selected Areas in Communications, vol. 39, no. 8, pp. 2590–2603, 2021.
[12] M. P. Mota, A. Valcarce, J.-M. Gorce, and J. Hoydis, "The emergence of wireless MAC protocols with multi-agent reinforcement learning," arXiv preprint arXiv:2108.07144, 2021.
[13] N. Shlezinger and Y. C. Eldar, "Deep task-based quantization," Entropy, vol. 23, no. 1, p. 104, 2021.
[14] M. A. Gutierrez-Estevez, Y. Wu, and C. Zhou, "Learning to communicate with intent: An introduction," arXiv preprint arXiv:2211.09613, 2022.
[15] C. Zhang, H. Zou, S. Lasaulce, W. Saad, M. Kountouris, and M. Bennis, "Goal-oriented communications for the IoT and application to data compression," arXiv preprint arXiv:2211.05378, 2022.
[16] N. Shlezinger and Y. C. Eldar, "Task-based quantization with application to MIMO receivers," arXiv preprint arXiv:2002.04290, 2020.
[17] A. Mostaani, O. Simeone, S. Chatzinotas, and B. Ottersten, "Learning-based physical layer communications for multiagent collaboration," in 2019 IEEE Intl. Symp. on Personal, Indoor and Mobile Radio Communications, Sep. 2019.
[18] A. Mostaani, T. X. Vu, S. Chatzinotas, and B. Ottersten, "Task-oriented data compression for multi-agent communications over bit-budgeted channels," IEEE Open Journal of the Communications Society, vol. 3, pp. 1867–1886, 2022.
[19] M. Kountouris and N. Pappas, "Semantics-empowered communication for networked intelligent systems," IEEE Communications Magazine, vol. 59, no. 6, pp. 96–102, 2021.
[20] R. Carnap, Y. Bar-Hillel et al., "An outline of a theory of semantic information," 1952.
[21] H. Zhang, S. Shao, M. Tao, X. Bi, and K. B. Letaief, "Deep learning-enabled semantic communication systems with task-unaware transmitter and dynamic data," arXiv preprint arXiv:2205.00271, 2022.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' [22] P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Stavrou and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Kountouris, “A rate distortion approach to goal- oriented communication,” in 2022 IEEE International Symposium on Information Theory (ISIT).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' IEEE, 2022, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' 590–595.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' [23] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Mostaani, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Vu, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Chatzinotas, and B.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Ottersten, “State ag- gregation for multiagent communication over rate-limited channels,” in GLOBECOM 2020-2020 IEEE Global Communications Conference.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' IEEE, 2020, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' 1–7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' [24] D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Kim, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Moon, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Hostallero, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Kang, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Lee, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Son, and Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Yi, “Learning to schedule communication in multi-agent reinforcement learning,” in Intl.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Conf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' on Learning Representations, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' [25] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Liu, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Shao, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Zhang, and H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Poor, “An indirect rate-distortion characterization for semantic sources: General model and the case of gaussian observation,” arXiv preprint arXiv:2201.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content='12477, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' [26] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content='-M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Chou, C.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content='-Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Li, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content='-M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Chien, and K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content='-c.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Lan, “A feasibility study on vehicle-to-infrastructure communication: Wifi vs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' wimax,” in 2009 tenth international conference on mobile data management: systems, services and middleware.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' IEEE, 2009, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' 397–398.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' [27] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content='-C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Liu, J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Tian, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content='-Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Ma, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Glaser, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content='-W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Kuo, and Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Kira, “Who2com: Collaborative perception via learnable handshake commu- nication,” in 2020 IEEE International Conference on Robotics and Automation (ICRA).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' IEEE, 2020, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' 6876–6883.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' [28] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Barel, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Manor, and A.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Bruckstein, “Come together: Multi-agent geometric consensus,” arXiv preprint arXiv:1902.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content='01455, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' [29] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Tatikonda and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Mitter, “Control under communication constraints,” IEEE Transactions on automatic control, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' 49, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' 7, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' 1056–1068, 2004.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' [30] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Foerster, G.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Farquhar, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Afouras, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Nardelli, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Whiteson, “Counterfactual multi-agent policy gradients,” in Thirty-Second AAAI Conference on Artificial Intelligence, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' [31] F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Oliehoek, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Amato et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=', A concise introduction to decentralized POMDPs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Springer, 2016, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' [32] Z.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Ding, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Hong, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Zhu, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Huang, and Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Lu, “Sequential commu- nication in multi-agent reinforcement learning,” 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' [33] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Albowicz, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Chen, and L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Zhang, “Recursive position estimation in sensor networks,” in Proceedings Ninth International Conference on Network Protocols.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' ICNP 2001.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' IEEE, 2001, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' 35–41.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' [34] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Dorvash and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Pakzad, “Stochastic iterative modal identification al- gorithm and application in wireless sensor networks,” Structural Control and Health Monitoring, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' 20, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' 8, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' 1121–1137, 2013.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' [35] D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Pynadath and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Tambe, “The communicative multiagent team decision problem: Analyzing teamwork theories and models,” Journal of Artificial Intelligence Research, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' 16, pp.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' 389–423, Jun.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' 2002.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' [36] F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Oliehoek, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Spaan, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Vlassis et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=', “DEC-PoMDPs with delayed communication,” in Proc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Multi-agent Sequential Decision- Making in Uncertain Domains, Honolulu, Hawaii, May 2007.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' [37] R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Lowe, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Foerster, Y.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content='-L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Boureau, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Pineau, and Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Dauphin, “On the pitfalls of measuring emergent communication,” in Intl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Conf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' on Autonomous Agents and MultiAgent Systems, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' [38] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Li, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Walsh, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Littman, “Towards a unified theory of state abstraction for mdps.” in AI&M, 2006.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' [39] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' McCallum, Reinforcement learning with selective perception and hidden state.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' University of Rochester, 1996.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' [40] R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Sutton and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Barto, Introduction to reinforcement learning, 2nd ed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' MIT Press, Nov.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' 2017, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' 135.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' [41] V.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Mnih, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Kavukcuoglu, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Silver, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Rusu, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Veness, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Bellemare, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Graves, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Riedmiller, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Fidjeland, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Ostrovski et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=', “Human-level control through deep reinforcement learning,” nature, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' 518, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' 7540, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' 529–533, 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' [42] P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Xuan, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Lesser, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Zilberstein, “Communication decisions in multi-agent cooperation: Model and experiments,” in Proceedings of the Fifth International Conference on Autonomous Agents, ser.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' AGENTS ’01.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' New York, NY, USA: Association for Computing Machinery, 2001, p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' 616–623.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' [Online].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Available: https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content='1145/375735.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content='376469 [43] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Amato, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Dibangoye, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndAzT4oBgHgl3EQfqf0O/content/2301.01628v1.pdf'} +page_content=' Zilberstein, “Incremental policy generation for finite-horizon dec-pomdps,” in Nineteenth International Conference on Automated Planning and Scheduling, 2009.' 
diff --git a/ndE3T4oBgHgl3EQf7Asa/content/tmp_files/2301.04794v1.pdf.txt b/ndE3T4oBgHgl3EQf7Asa/content/tmp_files/2301.04794v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..f4aacffdcfea7012d2f6905f240964c36142b582
--- /dev/null
+++ b/ndE3T4oBgHgl3EQf7Asa/content/tmp_files/2301.04794v1.pdf.txt
@@ -0,0 +1,1134 @@
Springer Nature 2021 LATEX template
LiteLSTM Architecture Based on Weights Sharing for Recurrent Neural Networks
Nelly Elsayed1*, Zag ElSayed1 and Anthony S. Maida2
1*School of Information Technology, University of Cincinnati, 2610 University Cir, Cincinnati, 45221, Ohio, United States.
2School of Computing and Informatics, University of Louisiana at Lafayette, 301 E. Lewis Street, Lafayette, 70503, Louisiana, United States.
*Corresponding author(s). E-mail(s): elsayeny@ucmail.uc.edu;
Contributing authors: elsayezs@ucmail.uc.edu; maida@louisiana.edu;
Abstract
Long short-term memory (LSTM) is one of the most robust recurrent neural network architectures for learning sequential data. However, it requires considerable computational power to learn and to implement in both software and hardware. This paper proposes a novel LiteLSTM architecture that reduces the LSTM's computation components via the weight-sharing concept, lowering the overall computation cost of the architecture while maintaining its performance. The proposed LiteLSTM can be significant for processing large data where time consumption is crucial and hardware resources are limited, such as securing IoT devices and medical data processing. The proposed model was evaluated and tested empirically on three datasets from the computer vision, cybersecurity, and speech emotion recognition domains.
The proposed LiteLSTM has comparable accuracy to the other state-of-the-art recurrent architectures while using a smaller computation budget.

Keywords: LiteLSTM, weights sharing, LSTM, recurrent neural networks, IoT, MNIST

arXiv:2301.04794v1 [cs.LG] 12 Jan 2023

1 Introduction

Sequential data modeling, such as text, univariate and multivariate time series, audio signals, biological signals, spatiotemporal sequences (videos), and amino acid and genetic sequences, requires an apparatus that can recognize the temporal dependencies and relationships within the sequential data. In the early 1980s, the recurrent neural network (RNN) was designed as the first neural network approach that targeted sequential data problems [1–3]. The RNN architecture can capture temporal dependencies because it recursively integrates the current new input into its own previous output [4]. Since it has an unrestricted but fading memory for the past, it can employ the temporal dependencies to influence the learning of the structure within the data sequences [5]. The RNN has been applied in different research areas such as handwriting recognition [4, 6, 7], speech recognition [8–10], language modeling [11–13], machine translation [14–16], action recognition [17–19], accident recognition [20–22], stock prediction [23–25], video classification [26, 27], intrusion detection systems [28], time series prediction [29], and mental disorder prediction [30].

However, the RNN has a significant weakness: its ability to learn long-term dependencies is limited due to the vanishing/exploding gradient problem. There have been several attempts to solve this major design problem of the RNN and enhance its overall performance, as the RNN loses the ability to learn when the error gradient is corrupted.
To solve the vanishing/exploding gradient problem, extensions to the RNN architecture add an internal state (memory) that enforces a constant error flow through the stages of the RNN architecture. This constant error flow enhances the robustness of the error gradient over longer time scales. In addition, gated control over the content of this internal state (memory) is also needed [31].

Nevertheless, this early LSTM model had significant weaknesses. When it was first designed by Hochreiter and Schmidhuber [31], the LSTM model's input data was assumed to be segmented in advance into subsequences with explicitly marked ends, so that the memory could be reset between the processing of irrelevant subsequences [31, 32]. Moreover, this LSTM architecture did not have an internal reset component for processing continual input streams. Therefore, when the LSTM processes continuous input streams, the state values may grow unboundedly and ultimately cause the LSTM architecture to fail [32].

In 2000, [32] proposed a solution for the original LSTM problem posed in [31]. [32] added a forget gate, beside the input and output gates, to the LSTM architecture; it resets the LSTM memory when the input is diversely different from the memory content and helps to remove the unnecessary information that the LSTM memory carries through time. This LSTM approach [32] is widely used to solve various problems such as speech recognition [8, 33–36], language modeling [13, 37–39], machine translation [16, 40–42], time series classification [43, 44], image segmentation [45–47], and video prediction [40].

However, this model also has pivotal weaknesses. First, the architecture does not have a direct connection from the memory state to the forget, input, and output gates.
Hence, there is no control from the memory to the gates that could assist in preventing the gradient from vanishing or exploding. Second, the Constant Error Carousel (CEC) has no influence over the forget and input gates when the output gate is closed (i.e., when the output gate produces zero-valued output), which could negatively affect the model due to the lack of primary information flow within the model [48, 49].

To handle these problems in the standard LSTM, in 2002, [48] added peephole connections from the memory state cell to each of the LSTM forget, input, and output gates. The peephole connections allowed the memory state to exert some control over the gates, reinforcing the LSTM architecture and preventing the lack of information flow through the model in the situation that leads to the output gate being closed [48].

The peephole added a generalization element to the standard LSTM [50]. However, the major weakness of this architecture is that it becomes costly, due to the significant increase in the number of trainable parameters, in the memory, processing, and storage requirements for training the model and saving its trained weights, and in the training time.

Nonetheless, there is still growing interest in studying and applying the LSTM architecture to solve various sequential problems in different research domains, because the LSTM outperforms the GRU in several tasks when problems have large training datasets [51]. Moreover, Greff et al. [51] showed in 2017 that the LSTM exceeds the GRU's performance in language modeling-related tasks. On the other hand, in some problems where the training datasets are small, the GRU outperforms the LSTM using a smaller computation budget [52].

The era of big data requires robust tools for processing large amounts of data, and it requires fast tools that reduce the time consumed in processing that data.
Moreover, as the world tries to reduce the carbon (CO2) footprint [53] by reducing the usage of high-performance hardware [54–57], the cost of the LSTM's implementation requirements is considered one of its significant drawbacks.

Spatiotemporal prediction problems are challenging to solve using only a gated recurrent architecture. Implementing such models is quite expensive in terms of both resources and cost, as a large number of parameters, fast processors, large processing memory, and memory storage are needed. In addition, such models demand considerable time to train, validate, and test. Moreover, implementing such a model for real-time training is a challenge.

This paper attempts to improve several of these computational aspects while maintaining a sophisticated performance level. This paper proposes a novel recurrent gated architecture using one gate: Lite Long Short-Term Memory (LiteLSTM). The proposed LiteLSTM employs the concept of sharing weights among the gates, introduced in the GRU [52], to reduce the model's computation budget. It also employs memory control over the gate using a peephole connection over the one gate. Compared to the LSTM, peephole LSTM, and GRU, the LiteLSTM has a smaller computation budget and smaller implementation requirements while maintaining comparable accuracy. Due to its smaller computation budget, the LiteLSTM has a significant training time reduction compared to the LSTM, which allows the LiteLSTM to be implemented with a smaller CO2 footprint.

Fig. 1 The RNN basic architecture and its corresponding unfolded-in-time representation [61].

This paper is organized as follows: Section 2 provides a brief overview of the RNN, standard LSTM, peephole LSTM, and GRU architectures.
Section 3 provides the details of the LiteLSTM architecture design concept. Section 4 shows empirical results for LiteLSTM implementations on three applications from three different research domains: computer vision (using MNIST [58]), cybersecurity anomaly detection in IoT (the IEEE IoT Network Intrusion Dataset [59]), and speech emotion recognition (the TESS dataset [60]).

2 Recurrent Neural Networks

2.1 Basic RNN Architecture

The basic architecture of the recurrent neural network (RNN) is shown in Figure 1. The left diagram shows the RNN architecture. The unfolded (unrolled) in time RNN representation is shown in the right diagram, starting from time step 0 to time step t. The RNN is transformed into a feedforward network that can be trained by backpropagation. This algorithm is called backpropagation through time (BPTT) [62]. The RNN feeds its previous output vector h(t−1) at time step t − 1 and the current input vector x(t) to calculate the RNN output h(t) at the current time step t. This method allows the RNN to identify and utilize temporal information to influence learning in the data sequences.

The basic RNN suffers from the vanishing/exploding gradient problem [63], limiting the model's ability to learn long-term dependencies within the sequential data. This is because the RNN does not have any element in its architecture design components that could maintain a constant error flow through the recurrent model. The principle of adding gates as supporting components to the recurrent architecture was proposed to solve this problem.

Fig. 2 The standard LSTM unrolled architecture.

At a given discrete time step t, the RNN output is calculated as follows:

h(t) = tanh(W x(t) + U h(t−1) + b)   (1)

where x(t) is the RNN input at time step t.
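For concreteness, Eqn. (1) can be sketched in a few lines of NumPy. This is an illustrative sketch only; the dimensions and random weights below are not taken from the paper.

```python
import numpy as np

def rnn_step(x, h_prev, W, U, b):
    # One basic RNN step, per Eqn. (1): h(t) = tanh(W x(t) + U h(t-1) + b)
    return np.tanh(W @ x + U @ h_prev + b)

# Illustrative dimensions and random weights (not from the paper)
n_in, n_hid = 4, 3
rng = np.random.default_rng(0)
W = rng.standard_normal((n_hid, n_in))   # feedforward weights
U = rng.standard_normal((n_hid, n_hid))  # recurrent weights, shared across time
b = np.zeros(n_hid)

h = np.zeros(n_hid)                      # initial hidden state
for t in range(5):                       # unrolling over a short input sequence
    h = rnn_step(rng.standard_normal(n_in), h, W, U, b)
```

Unrolling the loop in this way is exactly the transformation that makes BPTT possible: the recurrence becomes a deep feedforward computation with weights shared across time steps.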
The h(t) and h(t−1) are the RNN outputs at time steps t and t − 1. The feedforward and recurrent weights are represented by W and U, respectively. The weights are shared across time steps. b is the RNN model bias.

2.2 Standard Long Short-Term Memory (LSTM)

Gers et al. [32] proposed the standard LSTM architecture in 2000 as an improved version of the first LSTM architecture, which was proposed in 1997 by Hochreiter et al. [31]. This standard LSTM aimed to solve the continuous input stream problem, which allowed the memory state cell values to grow in an unbounded fashion, causing saturation of the output squashing (activation) function. Gers et al. [32] proposed adding an additional gate to the LSTM architecture, the forget gate f, to reset the LSTM memory when the input is diversely different from the memory content; it serves to remove the unnecessary information that the LSTM memory holds through time.

Figure 2 shows the standard LSTM unfolded architecture, where c(t) and h(t) are the memory state cell and the LSTM output at time t, respectively. The symbol ⊙ denotes element-wise (Hadamard) multiplication [32, 64] and σ denotes the logistic sigmoid function. bi, bg, bf, and bo are the biases of each gate. The W's are the feedforward weights and the U's are the recurrent weights.

The value of each component in the standard LSTM is calculated as follows:

i(t) = σ(Wxi x(t) + Uhi h(t−1) + bi)   (2)
g(t) = tanh(Wxg x(t) + Uhg h(t−1) + bg)   (3)
f(t) = σ(Wxf x(t) + Uhf h(t−1) + bf)   (4)
o(t) = σ(Wxo x(t) + Uho h(t−1) + bo)   (5)

Fig. 3 The standard LSTM unrolled architecture at the operation level, showing the components and their corresponding weights.
c(t) = f(t) ⊙ c(t−1) + i(t) ⊙ g(t)   (6)
h(t) = tanh(c(t)) ⊙ o(t)   (7)

where i(t), f(t), and o(t) are the input, forget, and output gates, respectively. The gates are constrained to have activation values between zero and one to indicate their status: open, closed, partially open, or partially closed. g(t) is the input-update value. The model has two activation (squashing) units, input-update and output activation, where the hyperbolic tangent tanh is the preferable activation function to use [65]. The memory cell state at time t is c(t) and the output of the LSTM unit at time t is h(t).

Figure 3 shows the operation level of the standard LSTM, where each component of the standard LSTM and its corresponding weights are given. The symbols × and ⊙ denote matrix multiplication and element-wise multiplication, respectively.

The standard LSTM architecture is widely used in various problem-solving tasks and applications in different research fields. However, its architecture has major drawbacks. First, there is no direct connection from the memory to the gates, which leads to the absence of CEC control over the gates [48]. Second, if the output gate is closed, the CEC has no influence over the forget and input gates, which could impair the model due to the lack of primary information flow within the model [48].

2.3 The Peephole-Based LSTM

Gers et al. [48] proposed in 2002 a solution for the standard LSTM's major problems. A new connection component, named the peephole connection, was added to the LSTM architecture: a data flow connection from the memory state to each of the three LSTM gates to solve the standard LSTM's main problems.
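For reference, a complete forward step of the standard LSTM of Eqns. (2)–(7) can be sketched in NumPy. Shapes and random weight values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, p):
    # One standard-LSTM step following Eqns. (2)-(7); p is a dict of weights.
    i = sigmoid(p["Wxi"] @ x + p["Uhi"] @ h_prev + p["bi"])  # input gate, Eqn. (2)
    g = np.tanh(p["Wxg"] @ x + p["Uhg"] @ h_prev + p["bg"])  # input update, Eqn. (3)
    f = sigmoid(p["Wxf"] @ x + p["Uhf"] @ h_prev + p["bf"])  # forget gate, Eqn. (4)
    o = sigmoid(p["Wxo"] @ x + p["Uho"] @ h_prev + p["bo"])  # output gate, Eqn. (5)
    c = f * c_prev + i * g                                   # memory cell, Eqn. (6)
    h = np.tanh(c) * o                                       # output, Eqn. (7)
    return h, c

n_in, n_hid = 4, 3
rng = np.random.default_rng(1)
p = {k: rng.standard_normal((n_hid, n_in)) for k in ("Wxi", "Wxg", "Wxf", "Wxo")}
p.update({k: rng.standard_normal((n_hid, n_hid)) for k in ("Uhi", "Uhg", "Uhf", "Uho")})
p.update({k: np.zeros(n_hid) for k in ("bi", "bg", "bf", "bo")})

h, c = lstm_step(rng.standard_normal(n_in), np.zeros(n_hid), np.zeros(n_hid), p)
```

Counting the entries of `p` recovers the eight weight matrices and four bias vectors listed for the LSTM in Table 1.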
The peephole connections allow the memory state value to exert control over the three LSTM gates. This assists in preventing the vanishing and/or exploding gradient problem that the standard LSTM could face.

Fig. 4 The peephole-based LSTM unrolled architecture proposed by Gers et al. [48].

Figure 5 shows the operation level of the peephole-based LSTM. The equations to calculate the peephole LSTM are as follows:

i(t) = σ(Wxi x(t) + Uhi h(t−1) + Wci ⊙ c(t−1) + bi)   (8)
g(t) = tanh(Wxg x(t) + Uhg h(t−1) + bg)   (9)
f(t) = σ(Wxf x(t) + Uhf h(t−1) + Wcf ⊙ c(t−1) + bf)   (10)
o(t) = σ(Wxo x(t) + Uho h(t−1) + Wco ⊙ c(t−1) + bo)   (11)
c(t) = f(t) ⊙ c(t−1) + i(t) ⊙ g(t)   (12)
h(t) = tanh(c(t)) ⊙ o(t)   (13)

where the symbol ⊙ denotes element-wise (Hadamard) multiplication. Wci, Wcf, and Wco are the weights of the peephole connections between the memory state c(t−1) and the input, forget, and output gates, respectively.

Adding the peephole connections to the standard LSTM made the LSTM architecture a robust model for overcoming the vanishing and/or exploding gradient problem. However, it caused a significant increase in the number of trainable parameters, the training time, and the memory requirements.

2.4 Gated Recurrent Unit (GRU)

The GRU model consists of two gates, the update gate z and the reset gate r, whereas the LSTM consists of three gates: input, output, and forget. In addition, the GRU does not contain the memory state cell that the LSTM model includes. Therefore, the GRU architecture is smaller than the LSTM by one gate and a memory state cell.
The GRU integrates both the input gate and the forget gate of the LSTM model into one update gate z [51], introducing the concept of sharing the same set of weights to reduce the model architecture. The unfolded GRU block architecture is shown in Figure 6.

Fig. 5 The operation level of the peephole-LSTM unrolled architecture, where its components and their corresponding weights are presented.

Fig. 6 The GRU unfolded architecture.

The reset gate's functionality operates similarly to the output gate of the LSTM. The GRU model eliminates the output squashing function, the memory unit, and the CEC. The GRU yields a reduction in trainable parameters compared with the standard LSTM. However, this may lead to exploding and/or vanishing gradients.

At time step t, the GRU unit output h(t) is calculated as follows [52]:

z(t) = σ(Wxz x(t) + Uhz h(t−1) + bz)   (14)
r(t) = σ(Wxr x(t) + Uhr h(t−1) + br)   (15)
h̃(t) = tanh(W x(t) + U(r(t) ⊙ h(t−1)) + b)   (16)
h(t) = (1 − z(t)) ⊙ h(t−1) + z(t) ⊙ h̃(t)   (17)

where Wxz, Wxr, and W are the feedforward weights of the update gate z(t), the reset gate r(t), and the output candidate activation h̃(t), respectively.

Fig. 7 The operation level of the GRU architecture, showing the weights of each component.

Fig. 8 The LiteLSTM unrolled architecture. The single network gate (output indicated by σ) sends information flow to three locations that correspond to the outputs of the forget, input, and output gates of the standard LSTM.
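In the same style, the GRU update of Eqns. (14)–(17) can be sketched in NumPy; note how the single update gate z interpolates between the previous state and the candidate. Shapes, weight names, and random values are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x, h_prev, p):
    # One GRU step following Eqns. (14)-(17); p is a dict of weights.
    z = sigmoid(p["Wxz"] @ x + p["Uhz"] @ h_prev + p["bz"])         # update gate, Eqn. (14)
    r = sigmoid(p["Wxr"] @ x + p["Uhr"] @ h_prev + p["br"])         # reset gate, Eqn. (15)
    h_tilde = np.tanh(p["W"] @ x + p["U"] @ (r * h_prev) + p["b"])  # candidate, Eqn. (16)
    return (1.0 - z) * h_prev + z * h_tilde                         # output, Eqn. (17)

n_in, n_hid = 4, 3
rng = np.random.default_rng(2)
p = {k: rng.standard_normal((n_hid, n_in)) for k in ("Wxz", "Wxr", "W")}
p.update({k: rng.standard_normal((n_hid, n_hid)) for k in ("Uhz", "Uhr", "U")})
p.update({k: np.zeros(n_hid) for k in ("bz", "br", "b")})

h = gru_step(rng.standard_normal(n_in), np.zeros(n_hid), p)
```

The six weight matrices and three bias vectors in `p` match the GRU column of Table 1.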
The recurrent weights are Uhz, Uhr, and U for the update gate z(t), the reset gate r(t), and the output candidate activation h̃(t), respectively. The biases of the update gate, the reset gate, and the output candidate are denoted by bz, br, and b, respectively. σ is the logistic sigmoid function and tanh is the hyperbolic tangent function. Element-wise (Hadamard) multiplication is denoted by ⊙. Figure 7 shows the operation level of the GRU architecture with the weights and biases made explicit.

3 LiteLSTM Architecture

The proposed LiteLSTM aims to reduce the overall implementation cost of the LSTM, solve the LSTM's significant problems, and maintain an accuracy performance comparable to the LSTM. The proposed LiteLSTM architecture appears in Figure 8.

Fig. 9 The operation level of the LiteLSTM architecture, showing the weights of each component.

The architecture of the LiteLSTM consists of only one trainable gated unit. We name this trainable gate the forget gate or network gate. This one gate behaves as a shared set of weights among the three gates of the standard LSTM. The LiteLSTM has a peephole connection from the memory state to the forget gate, which preserves the memory state from the LSTM and keeps the CEC to avoid vanishing and/or exploding gradients.

Thus, the proposed LiteLSTM preserves the critical components of the LSTM as stated by [51] while reducing much of the parameter redundancy in the LSTM architecture. The LiteLSTM has a significant reduction in the number of trainable parameters required to implement the model. Therefore, the LiteLSTM reduces the training time, memory, and hardware requirements compared to the standard LSTM, peephole-based LSTM, and GRU architectures.
Furthermore, the proposed LiteLSTM architecture preserves prediction accuracy results comparable to the LSTM. Figure 9 shows a detailed architecture of the unrolled (unfolded) LiteLSTM, assuming non-stacked input.

The LiteLSTM block architecture contains only one trainable gate, which compensates for the elimination of the other two gates of the standard LSTM by sharing its trainable weights. The LiteLSTM preserves the memory cell of the standard LSTM to process long data sequences and maintains the CEC to manage the vanishing/exploding gradient problem.

The LiteLSTM formulas are created as follows. During the forward pass within the LiteLSTM at time step t, the total input inp(t) to the single forget gate f(t) is calculated by:

inp(t) = [Wfx, Ufh, Wfc] [x(t), h(t−1), c(t−1)] + bf   (18)

where inp(t) ∈ Rη×1, and η × 1 is the dimension of the input vector inp(t). x(t) is the input at time t, x(t) ∈ Rη×1; h(t−1) is the output of the LiteLSTM architecture at time t − 1; and the memory state cell at time t − 1 is denoted by c(t−1). Both h(t−1), c(t−1) ∈ Rη×1. Wfx, Ufh, and Wfc are the weight sets.

Fig. 10 The logistic sigmoid function curve.

Fig. 11 The hardSigmoid function curve.

All three weight sets Wfx, Ufh, and Wfc and the biases bf are trainable. The square brackets indicate stacking. We let Wf = [Wfx, Ufh, Wfc] and If = [x(t), h(t−1), c(t−1)].

Applying a squashing function G to the net input gives:

f(t) = G(inp(t))   (19)

Depending on the application, the squashing function G can be either the logistic sigmoid (σ) or the hard sigmoid (hardSig) [66]. The logistic sigmoid is calculated by:

σ(x) = e^x / (e^x + 1) = 1 / (1 + e^(−x))   (20)

where x is a real number, x ∈ (−∞, ∞), and σ(x) has the range (0, 1).
The hard sigmoid (hardSig) is calculated by:

hardSig(x) = max(min(0.25x + 0.5, 1), 0)   (21)

Figure 10 and Figure 11 show the logistic sigmoid (σ) and hard sigmoid (hardSig) function curves, respectively. The value of f(t) in Eqn. 19 falls in the range (0, 1) or [0, 1], depending on whether the logistic sigmoid (σ) or the hard sigmoid function is used, respectively [65, 67].

Table 1 Computational components comparison between the proposed LiteLSTM and the state-of-the-art recurrent architectures.

Comparison                            | RNN | GRU | LSTM | pLSTM | LiteLSTM
Number of gates                       | 0   | 2   | 3    | 3     | 1
Number of activations                 | 1   | 1   | 2    | 2     | 2
State memory cell                     | ×   | ×   | ✓    | ✓     | ✓
Peephole connection                   | ×   | ×   | ×    | ✓     | ✓
Number of weight matrices             | 2   | 6   | 8    | 11    | 6
Number of elementwise multiplications | 2   | 3   | 3    | 6     | 3
Number of bias vectors                | 1   | 3   | 4    | 4     | 2
Sharing weights concept               | ×   | ✓   | ×    | ×     | ✓

Assuming the selection of the function G as σ, the gate value f(t) is calculated by:

f(t) = σ(Wf If + bf)   (22)

Selecting the logistic sigmoid or the hard sigmoid function is mainly based on the application. However, the hard sigmoid is the preferred function for the LiteLSTM gate, to prevent the network gate from being closed (i.e., to prevent the network gate from producing a zero-valued output). The input update (memory activation) equation is calculated by:

g(t) = tanh(Wg Ig + bg)   (23)

where Wg = [Wgx, Ugh] and Ig = [x(t), h(t−1)]. The dimension of Wg matches the dimension of Wf, which maintains dimension compatibility within the architecture design.
Finally, the LiteLSTM output is calculated by:

c(t) = f(t) ⊙ c(t−1) + f(t) ⊙ g(t)   (24)
h(t) = f(t) ⊙ tanh(c(t))   (25)

Table 1 shows a comparison between the architecture design and computation components of the RNN, GRU, standard LSTM, peephole-based LSTM (pLSTM), and the proposed LiteLSTM.

4 Empirical Evaluation and Analysis

In this paper, the LiteLSTM has been empirically tested and evaluated in three research domains: computer vision, anomaly detection in IoT, and speech emotion recognition. The MNIST dataset [58] has been used for the computer vision experiment, and the IEEE IoT Network Intrusion Dataset [59] is used for the anomaly detection in IoT task. We performed our experiments on a machine with an Intel(R) Core(TM) i7-9700 CPU @ 3.00 GHz (3000 MHz), Microsoft Windows 10 OS, and 32 GB of memory. We used Python 3.7.6, Keras 2.0.4, and Tensorflow 1.15.0.

Fig. 12 The accuracy diagrams of the recurrent architectures and LiteLSTM using the MNIST dataset.

Table 2 Accuracy comparison between the LiteLSTM and the state-of-the-art recurrent architectures using the MNIST dataset.

Comparison  | RNN     | GRU     | LSTM    | pLSTM   | LiteLSTM
Time (m)    | 11.24   | 43.01   | 60.36   | 75.45   | 42.94
Parameters  | 792,210 | 812,610 | 822,810 | 833,010 | 812,610
Accuracy(%) | 67.64%  | 94.09%  | 95.70%  | 95.99%  | 96.07%

The first empirical evaluation of the LiteLSTM was performed using the MNIST dataset, which consists of 70,000 images of handwritten digits between 0 and 9. The dataset is split into 60,000 data samples for training and 10,000 data samples for testing [68]. The MNIST images were centered in a 28×28 image by computing the center of mass of the pixels. The model used a two-layer architecture with 64 units per layer, followed by a Softmax layer. For the training process, the batch size was set to 128 and the number of epochs to 20.
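As a compact reference for the model evaluated in these experiments, one LiteLSTM forward step following Eqns. (18)–(25) can be sketched in NumPy. This is a sketch only: the experiments above used Keras/TensorFlow, the weight values below are random assumptions, and the peephole term is taken element-wise here, following the peephole-LSTM convention.

```python
import numpy as np

def hard_sig(z):
    # Hard sigmoid of Eqn. (21): max(min(0.25 z + 0.5, 1), 0)
    return np.clip(0.25 * z + 0.5, 0.0, 1.0)

def litelstm_step(x, h_prev, c_prev, p):
    # One LiteLSTM step following Eqns. (18)-(25): a single network gate f
    # stands in for the input, forget, and output gates of the standard LSTM.
    # Gate net input and squashing, Eqns. (18)-(22); the peephole weight Wfc
    # multiplies c(t-1) element-wise, as in the peephole LSTM.
    f = hard_sig(p["Wfx"] @ x + p["Ufh"] @ h_prev + p["Wfc"] * c_prev + p["bf"])
    g = np.tanh(p["Wgx"] @ x + p["Ugh"] @ h_prev + p["bg"])  # input update, Eqn. (23)
    c = f * c_prev + f * g                                   # memory cell, Eqn. (24)
    h = f * np.tanh(c)                                       # output, Eqn. (25)
    return h, c

n_in, n_hid = 4, 3
rng = np.random.default_rng(3)
p = {"Wfx": rng.standard_normal((n_hid, n_in)),
     "Ufh": rng.standard_normal((n_hid, n_hid)),
     "Wfc": rng.standard_normal(n_hid),          # peephole weights, one per unit
     "Wgx": rng.standard_normal((n_hid, n_in)),
     "Ugh": rng.standard_normal((n_hid, n_hid)),
     "bf": np.zeros(n_hid), "bg": np.zeros(n_hid)}

h, c = litelstm_step(rng.standard_normal(n_in), np.zeros(n_hid), np.zeros(n_hid), p)
```

Only two bias vectors (bf and bg) appear, matching the LiteLSTM column of Table 1.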
The Adam optimizer was used, with learning rate 10−3, β1 = 0.9, β2 = 0.999, and ϵ = 1e−07. Table 2 shows the accuracy results of the different recurrent architectures and the LiteLSTM, where the time is measured in minutes. The RNN shows a significantly shorter training time; however, it has the lowest performance compared to the other recurrent architectures. The LiteLSTM shows an improvement in accuracy compared to the other recurrent architectures. Figure 12 shows the accuracy plots for the LiteLSTM and each of the state-of-the-art recurrent models.

The second empirical evaluation of the LiteLSTM was performed using the IEEE IoT Network Intrusion Dataset. The dataset consists of 42 raw network packet files (pcap) captured at different time points. Two IoT devices, the SKT NUGU (NU 100) and an EZVIZ Wi-Fi camera (C2C Mini O Plus 1080P), were used to generate IoT device traffic. The data contains normal traffic flow and different types of cyberattacks, namely: ARP spoofing attack, DoS (SYN flooding) attack, scan (host and port scan) attack, scan (port and OS scan) attack, (UDP/ACK/HTTP flooding) of a zombie PC compromised by Mirai malware, Mirai-ACK flooding attack, Mirai-HTTP flooding attack, and Telnet brute-force attack. In our experiments, we used the dataset to test the LiteLSTM twice: first, to detect whether an attack occurred or not (as a binary dataset), and second, to detect the type of attack. We set the batch size to 32 and the number of epochs to 20. Table 3 shows the binary experimental results for the LiteLSTM and the recurrent architectures.
Table 3 Accuracy comparison between the LiteLSTM and the state-of-the-art recurrent architectures using the IEEE IoT Network Intrusion Binary Dataset.

Comparison  | RNN    | GRU    | LSTM   | pLSTM  | LiteLSTM
Time (m)    | 20.26  | 43.27  | 41.51  | 51.21  | 28.44
Precision   | 0.8144 | 0.9328 | 0.9422 | 0.9653 | 0.9382
Recall      | 0.9763 | 0.9757 | 0.9484 | 0.9545 | 0.9834
F1-score    | 88.80  | 91.34  | 95.97  | 95.99  | 0.9603
Accuracy(%) | 98.7%  | 99.51% | 99.50% | 99.56% | 99.60%

Table 4 Accuracy comparison between the LiteLSTM and the state-of-the-art recurrent architectures using the IEEE IoT Network Intrusion Detection for Multiple Classes Cyberattacks Dataset.

Comparison  | RNN    | GRU    | LSTM   | pLSTM  | LiteLSTM
Time (m)    | 19.98  | 42.79  | 50.41  | 59.96  | 29.31
Precision   | 0.8875 | 0.8991 | 0.9461 | 0.9249 | 0.8999
Recall      | 0.8418 | 0.8300 | 0.7898 | 0.8086 | 0.8318
F1-score    | 0.8640 | 0.8632 | 0.8609 | 0.8628 | 0.8645
Accuracy(%) | 83.35% | 86.70% | 86.90% | 87.03% | 87.10%

Fig. 13 The accuracy diagrams of the recurrent architectures and LiteLSTM using the Toronto Emotion Speech Set (TESS) dataset.

Table 4 shows the detection results of the LiteLSTM and the recurrent architectures for detecting different types of cyberattacks.

The third empirical evaluation of the LiteLSTM was performed on a voice (audio) emotion recognition task. For this purpose, we used the Toronto Emotional Speech Set (TESS) [60], one of the emotion recognition dataset benchmarks that has been used in several emotion recognition applications and tasks [69–71]. This dataset consists of 2800 stimuli and has seven different emotion categories: anger, disgust, fear, happiness, pleasant surprise, sadness, and neutral.
A major significance of this dataset is that the stimuli are distributed equally across the emotion categories [60]. Similar to the previous experiments, we tested the proposed LiteLSTM against the other recurrent neural network architectures. For this empirical evaluation, we used the model described in [69], which used the GRU as the learning model. We replaced the GRU with the LiteLSTM, peephole LSTM, and RNN and evaluated the model's performance each time. The dataset has been split into training, testing, and validation sets with a ratio of 70%, 20%, and 10%, respectively.

Fig. 13 panels: (a) RNN accuracy, (b) GRU accuracy, (c) LSTM accuracy, (d) pLSTM accuracy, (e) LiteLSTM accuracy (training versus validation accuracy per epoch).

Table 5 Accuracy comparison between the LiteLSTM and the state-of-the-art recurrent architectures using the Toronto Emotional Speech Set (TESS).

Comparison  | RNN     | GRU     | LSTM    | pLSTM   | LiteLSTM
Time (m)    | 79.56   | 171.16  | 201.64  | 239.84  | 117.24
Precision   | 0.9312  | 0.9428  | 0.9686  | 0.9898  | 0.9799
Recall      | 0.9546  | 0.9429  | 0.9026  | 0.9214  | 0.9446
F1-score    | 0.9427  | 0.9428  | 0.9344  | 0.9543  | 0.9619
Accuracy(%) | 92.163% | 94.285% | 95.147% | 95.534% | 95.989%

Table 5 shows the empirical results of the proposed LiteLSTM and the recurrent architectures for emotion recognition from speech. Figure 13 shows the training versus validation accuracies for each of the recurrent architectures and the LiteLSTM using the Toronto Emotion Speech Set (TESS) dataset.

5 Conclusion

The novelty of the proposed LiteLSTM architecture lies in the following aspects.
First, the LiteLSTM consists of one gate that serves as a multifunctional gate via the weights-sharing concept. Thus, the overall number of training parameters is reduced by approximately one-third relative to the LSTM or the peephole LSTM. In addition, maintaining the peephole connection from the memory state cell to the existing gate preserves the control of the memory over the gate, in contrast to the LSTM. Therefore, the LiteLSTM handles the vanishing/exploding gradient problem. The overall budget for implementing the LiteLSTM, including the training time, memory footprint, memory storage, and processing power, is smaller than that of the LSTM by approximately one-third. We empirically evaluated the LiteLSTM using three datasets: MNIST, the IEEE IoT Network Intrusion Detection datasets, and the TESS speech emotion recognition dataset. The proposed LiteLSTM shows results comparable to the LSTM using a smaller computation budget. Due to the optimized LiteLSTM architecture design, we were able to complete the empirical tasks using a computer processor without involving a GPU in the computational process. Thus, the LiteLSTM architecture helps to reduce the CO2 footprint. The proposed LiteLSTM architecture is an attractive candidate for future hardware implementation on small and portable devices, especially IoT devices.

Statements and Declarations

• Funding: N/A
• Conflict of interest/Competing interests: The authors declare that they have no conflict of interest.
• The authors did not receive support from any organization for the submitted work.
• All authors certify that they have no affiliations with or involvement in any organization or entity with any financial interest or non-financial interest in the subject matter or materials discussed in this manuscript.
• The authors have no financial or proprietary interests in any material discussed in this article.

References

[1] Bourlard, H., Wellekens, C.J.: Speech dynamics and recurrent neural networks. In: International Conference on Acoustics, Speech, and Signal Processing, pp. 33–36 (1989). IEEE
[2] Siegelmann, H.T.: Recurrent neural networks. Computer Science Today, 29–45 (1995)
[3] Goodfellow, I., Bengio, Y., Courville, A.: Deep Learning (2016). http://www.deeplearningbook.org
[4] Graves, A., Liwicki, M., Fernández, S., Bertolami, R., Bunke, H., Schmidhuber, J.: A novel connectionist system for unconstrained handwriting recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 31(5), 855–868 (2009)
[5] Elsayed, N.: Gated convolutional recurrent neural networks for predictive coding (2019)
[6] Stuner, B., Chatelain, C., Paquet, T.: Handwriting recognition using cohort of lstm and lexicon verification with extremely large lexicon. Multimedia Tools and Applications 79(45), 34407–34427 (2020)
[7] Carbune, V., Gonnet, P., Deselaers, T., Rowley, H.A., Daryin, A., Calvo, M., Wang, L.-L., Keysers, D., Feuz, S., Gervais, P.: Fast multi-language lstm-based online handwriting recognition. International Journal on Document Analysis and Recognition (IJDAR) 23(2), 89–102 (2020)
[8] Sak, H., Senior, A., Beaufays, F.: Long short-term memory recurrent neural network architectures for large scale acoustic modeling. In: Fifteenth Annual Conference of the International Speech Communication Association (2014)
[9] Graves, A., Mohamed, A.-r., Hinton, G.E.: Speech recognition with deep recurrent neural networks.
In: 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 6645–6649 (2013)
[10] Zeyer, A., Doetsch, P., Voigtlaender, P., Schlüter, R., Ney, H.: A comprehensive study of deep bidirectional LSTM RNNs for acoustic modeling in speech recognition. In: 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2462–2466 (2017). IEEE
[11] Mikolov, T., Karafiát, M., Burget, L., Černocký, J., Khudanpur, S.: Recurrent neural network based language model. In: Eleventh Annual Conference of the International Speech Communication Association (2010)
[12] Mikolov, T., Kombrink, S., Burget, L., Černocký, J., Khudanpur, S.: Extensions of recurrent neural network language model. In: Acoustics, Speech and Signal Processing (ICASSP), 2011 IEEE International Conference On, pp. 5528–5531 (2011). IEEE
[13] Sundermeyer, M., Schlüter, R., Ney, H.: LSTM neural networks for language modeling. In: Thirteenth Annual Conference of the International Speech Communication Association (2012)
[14] Ren, B.: The use of machine translation algorithm based on residual and LSTM neural network in translation teaching. PLOS ONE 15(11), e0240663 (2020)
[15] Bridle, J.S.: Alpha-nets: A recurrent 'neural' network architecture with a hidden Markov model interpretation. Speech Communication 9(1), 83–92 (1990)
[16] Bahdanau, D., Cho, K., Bengio, Y.: Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473 (2014)
[17] Du, Y., Wang, W., Wang, L.: Hierarchical recurrent neural network for skeleton based action recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp.
1110–1118 (2015)
[18] Ullah, A., Ahmad, J., Muhammad, K., Sajjad, M., Baik, S.W.: Action recognition in video sequences using deep bi-directional LSTM with CNN features. IEEE Access 6, 1155–1166 (2017)
[19] Adewopo, V., Elsayed, N., Anderson, K.: Baby physical safety monitoring in smart home using action recognition system. arXiv preprint arXiv:2210.12527 (2022)
[20] Bortnikov, M., Khan, A., Khattak, A.M., Ahmad, M.: Accident recognition via 3D CNNs for automated traffic monitoring in smart cities. In: Science and Information Conference, pp. 256–264 (2019). Springer
[21] Adewopo, V., Elsayed, N., ElSayed, Z., Ozer, M., Abdelgawad, A., Bayoumi, M.: Review on action recognition for accident detection in smart city transportation systems. arXiv preprint arXiv:2208.09588 (2022)
[22] Fatima, M., Khan, M.U.K., Kyung, C.-M.: Global feature aggregation for accident anticipation. In: 2020 25th International Conference on Pattern Recognition (ICPR), pp. 2809–2816 (2021). IEEE
[23] Kamijo, K.-i., Tanigawa, T.: Stock price pattern recognition: a recurrent neural network approach. In: Neural Networks, 1990, 1990 IJCNN International Joint Conference On, pp. 215–221 (1990). IEEE
[24] Elsayed, N., Zaghloul, Z.S., Azumah, S.W., Li, C.: Intrusion detection system in smart home network using bidirectional LSTM and convolutional neural networks hybrid model. In: 2021 IEEE International Midwest Symposium on Circuits and Systems (MWSCAS), pp. 55–58 (2021). IEEE
[25] Azumah, S.W., Elsayed, N., Adewopo, V., Zaghloul, Z.S., Li, C.: A deep LSTM based approach for intrusion detection IoT devices network in smart home. In: 2021 IEEE 7th World Forum on Internet of Things (WF-IoT), pp. 836–841 (2021). IEEE
[26] Yang, Y., Krompass, D., Tresp, V.: Tensor-train recurrent neural networks for video classification.
In: International Conference on Machine Learning, pp. 3891–3900 (2017). PMLR
[27] Ogawa, T., Sasaka, Y., Maeda, K., Haseyama, M.: Favorite video classification based on multimodal bidirectional LSTM. IEEE Access 6, 61401–61409 (2018)
[28] Debar, H., Dorizzi, B.: An application of a recurrent network to an intrusion detection system. In: [Proceedings 1992] IJCNN International Joint Conference on Neural Networks, vol. 2, pp. 478–483 (1992). IEEE
[29] Han, M., Xi, J., Xu, S., Yin, F.-L.: Prediction of chaotic time series based on the recurrent predictor neural network. IEEE Transactions on Signal Processing 52(12), 3409–3416 (2004)
[30] Petrosian, A., Prokhorov, D., Lajara-Nanson, W., Schiffer, R.: Recurrent neural network-based approach for early recognition of Alzheimer's disease in EEG. Clinical Neurophysiology 112(8), 1378–1387 (2001)
[31] Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Computation 9(8), 1735–1780 (1997)
[32] Gers, F.A., Schmidhuber, J., Cummins, F.: Learning to forget: Continual prediction with LSTM. Neural Computation, 2451–2471 (2000)
[33] Soltau, H., Liao, H., Sak, H.: Neural speech recognizer: Acoustic-to-word LSTM model for large vocabulary speech recognition. arXiv preprint arXiv:1610.09975 (2016)
[34] Chorowski, J., Bahdanau, D., Cho, K., Bengio, Y.: End-to-end continuous speech recognition using attention-based recurrent NN: first results. arXiv preprint arXiv:1412.1602 (2014)
[35] Miao, Y., Gowayyed, M., Metze, F.: EESEN: End-to-end speech recognition using deep RNN models and WFST-based decoding. In: Automatic Speech Recognition and Understanding (ASRU), 2015 IEEE Workshop On, pp. 167–174 (2015). IEEE
[36] Graves, A., Jaitly, N., Mohamed, A.-r.: Hybrid speech recognition with deep bidirectional LSTM.
In: Automatic Speech Recognition and Understanding (ASRU), 2013 IEEE Workshop On, pp. 273–278 (2013). IEEE
[37] Merity, S., Keskar, N.S., Socher, R.: Regularizing and optimizing LSTM language models. arXiv preprint arXiv:1708.02182 (2017)
[38] Sutskever, I., Vinyals, O., Le, Q.V.: Sequence to sequence learning with neural networks. In: Advances in Neural Information Processing Systems, pp. 3104–3112 (2014)
[39] Miyamoto, Y., Cho, K.: Gated word-character recurrent language model. arXiv preprint arXiv:1606.01700 (2016)
[40] Cho, K., Van Merriënboer, B., Bahdanau, D., Bengio, Y.: On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259 (2014)
[41] Luong, M.-T., Sutskever, I., Le, Q.V., Vinyals, O., Zaremba, W.: Addressing the rare word problem in neural machine translation. arXiv preprint arXiv:1410.8206 (2014)
[42] Luong, M.-T., Manning, C.D.: Stanford neural machine translation systems for spoken language domains. In: Proceedings of the International Workshop on Spoken Language Translation, pp. 76–79 (2015)
[43] Karim, F., Majumdar, S., Darabi, H., Chen, S.: LSTM fully convolutional networks for time series classification. IEEE Access 6, 1662–1669 (2018)
[44] Karim, F., Majumdar, S., Darabi, H., Harford, S.: Multivariate LSTM-FCNs for time series classification. arXiv preprint arXiv:1801.04503 (2018)
[45] Stollenga, M.F., Byeon, W., Liwicki, M., Schmidhuber, J.: Parallel multi-dimensional LSTM, with application to fast biomedical volumetric image segmentation. In: Advances in Neural Information Processing Systems, pp. 2998–3006 (2015)
[46] Chen, L.-C., Papandreou, G., Kokkinos, I., Murphy, K., Yuille, A.L.: DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs.
IEEE Transactions on Pattern Analysis and Machine Intelligence 40(4), 834–848 (2018)
[47] Reiter, S., Schuller, B., Rigoll, G.: A combined LSTM-RNN-HMM-approach for meeting event segmentation and recognition. In: Acoustics, Speech and Signal Processing, 2006. ICASSP 2006 Proceedings. 2006 IEEE International Conference On, vol. 2 (2006). IEEE
[48] Gers, F.A., Schraudolph, N.N., Schmidhuber, J.: Learning precise timing with LSTM recurrent networks. Journal of Machine Learning Research 3, 115–143 (2002)
[49] Gers, F.A., Schmidhuber, J.: Recurrent nets that time and count. In: Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks. IJCNN 2000. Neural Computing: New Challenges and Perspectives for the New Millennium, vol. 3, pp. 189–194 (2000). IEEE
[50] Elsayed, N., Maida, A.S., Bayoumi, M.: Reduced-gate convolutional long short-term memory using predictive coding for spatiotemporal prediction. Computational Intelligence 36(3), 910–939 (2020)
[51] Greff, K., Srivastava, R.K., Koutník, J., Steunebrink, B.R., Schmidhuber, J.: LSTM: A search space odyssey. IEEE Transactions on Neural Networks and Learning Systems 28(10), 2222–2232 (2017)
[52] Chung, J., Gulcehre, C., Cho, K., Bengio, Y.: Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555 (2014)
[53] Bocken, N.M., Allwood, J.M.: Strategies to reduce the carbon footprint of consumer goods by influencing stakeholders. Journal of Cleaner Production 35, 118–129 (2012)
[54] Calza, F., Parmentola, A., Tutore, I.: Types of green innovations: Ways of implementation in a non-green industry. Sustainability 9(8), 1301 (2017)
[55] Zaghloul, Z.S., Elsayed, N., Li, C., Bayoumi, M.: Green IoT system architecture for applied autonomous network cybersecurity monitoring. In: 2021 IEEE 7th World Forum on Internet of Things (WF-IoT), pp. 628–632 (2021).
IEEE
[56] Al Haddad, M., ElSayed, Z., Bayoumi, M.: Green arithmetic logic unit. In: 2012 International Conference on Energy Aware Computing, pp. 1–4 (2012). IEEE
[57] ElSayed, Z., Elsayed, N., Li, C., Bayoumi, M.: Autonomous low power IoT system architecture for cybersecurity monitoring. arXiv e-prints, 2106 (2021)
[58] LeCun, Y.: The MNIST database of handwritten digits. http://yann.lecun.com/exdb/mnist/ (1998)
[59] Kang, H., Ahn, D.H., Lee, G.M., Yoo, J.D., Park, K.H., Kim, H.K.: IoT Network Intrusion Dataset. https://doi.org/10.21227/q70p-q449
[60] Dupuis, K., Pichora-Fuller, M.K.: Toronto emotional speech set (TESS) - younger talker happy (2010)
[61] Olah, C.: Understanding LSTM Networks. http://colah.github.io/posts/2015-08-Understanding-LSTMs/ (2015)
[62] Werbos, P.J.: Backpropagation through time: what it does and how to do it. Proceedings of the IEEE 78(10), 1550–1560 (1990)
[63] Ceni, A., Ashwin, P., Livi, L.: Interpreting RNN behaviour via excitable network attractors (1807)
[64] Elsayed, N., Maida, A.S., Bayoumi, M.: Reduced-gate convolutional LSTM architecture for next-frame video prediction using predictive coding. In: 2019 International Joint Conference on Neural Networks (IJCNN), pp. 1–9 (2019). IEEE
[65] Elsayed, N., Maida, A.S., Bayoumi, M.: Empirical activation function effects on unsupervised convolutional LSTM learning. In: 2018 IEEE 30th International Conference on Tools with Artificial Intelligence (ICTAI), pp. 336–343 (2018). IEEE
[66] Gulcehre, C., Moczulski, M., Denil, M., Bengio, Y.: Noisy activation functions. In: International Conference on Machine Learning, pp. 3059–3068 (2016)
[67] Elsayed, N., Maida, A., Bayoumi, M.: Effects of different activation functions for unsupervised convolutional LSTM spatiotemporal learning.
Advances in Science, Technology and Engineering Systems Journal 4(2), 260–269 (2019)
[68] Elsayed, N., ElSayed, Z., Maida, A.S.: LiteLSTM architecture for deep recurrent neural networks. arXiv preprint arXiv:2201.11624 (2022)
[69] Elsayed, N., ElSayed, Z., Asadizanjani, N., Ozer, M., Abdelgawad, A., Bayoumi, M.: Speech emotion recognition using supervised deep recurrent system for mental health monitoring. arXiv preprint arXiv:2208.12812 (2022)
[70] Gokilavani, M., Katakam, H., Basheer, S.A., Srinivas, P.: RAVDESS, CREMA-D, TESS based algorithm for emotion recognition using speech. In: 2022 4th International Conference on Smart Systems and Inventive Technology (ICSSIT), pp. 1625–1631 (2022). IEEE
[71] Parry, J., Palaz, D., Clarke, G., Lecomte, P., Mead, R., Berger, M., Hofer, G.: Analysis of deep learning architectures for cross-corpus speech emotion recognition. In: Interspeech, pp. 1656–1660 (2019)

diff --git a/ndE3T4oBgHgl3EQf7Asa/content/tmp_files/load_file.txt b/ndE3T4oBgHgl3EQf7Asa/content/tmp_files/load_file.txt
new file mode 100644
index 0000000000000000000000000000000000000000..4ba1955a302cd93d5377f4ebf63bd3f29642277c
--- /dev/null
+++ b/ndE3T4oBgHgl3EQf7Asa/content/tmp_files/load_file.txt
@@ -0,0 +1,838 @@
filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf

LiteLSTM Architecture Based on Weights Sharing for Recurrent Neural Networks
Nelly Elsayed1*, Zag ElSayed1 and Anthony S. Maida2
1*School of Information Technology, University of Cincinnati, 2610 University Cir, Cincinnati, 45221, Ohio, United States.
2School of Computing and Informatics, University of Louisiana at Lafayette, 301 E. Lewis Street, Lafayette, 70503, Louisiana, United States.

Corresponding author(s). E-mail(s): elsayeny@ucmail.uc.edu;
Contributing authors: elsayezs@ucmail.uc.edu; maida@louisiana.edu;
Abstract
Long short-term memory (LSTM) is one of the robust recurrent neural network architectures for learning sequential data. However, it requires considerable computational power for learning and implementation in both software and hardware aspects. This paper proposes a novel LiteLSTM architecture based on reducing the LSTM computation components via the weights-sharing concept, to reduce the overall architecture computation cost while maintaining the architecture performance. The proposed LiteLSTM can be significant for processing large data where time consumption is crucial while hardware resources are limited, such as the security of IoT devices and medical data processing. The proposed model was evaluated and tested empirically on three different datasets from the computer vision, cybersecurity, and speech emotion recognition domains. The proposed LiteLSTM has comparable accuracy to other state-of-the-art recurrent architectures while using a smaller computation budget.
Keywords: LiteLSTM, weights sharing, LSTM, recurrent neural networks, IoT, MNIST

arXiv:2301.04794v1 [cs.LG] 12 Jan 2023

1 Introduction
Sequential data modeling, such as text, univariate and multivariate time series, audio signals, biological signals, spatiotemporal sequences (videos), and amino acid and genetic sequences, requires an apparatus that can recognize the temporal dependencies and relationships within the sequential data. In the early 1980s, the recurrent neural network (RNN) was designed as the first neural network approach that targeted sequential data problems [1–3]. The RNN architecture can capture temporal dependencies in the sense that it recursively integrates the current new input into its own previous output [4]. Since it has an unrestricted but fading memory of the past, it can employ the temporal dependencies to influence the learning of the structure within the data sequences [5].
The RNN has been applied in different research areas such as handwriting recognition [4, 6, 7], speech recognition [8–10], language modeling [11–13], machine translation [14–16], action recognition [17–19], accident recognition [20–22], stock prediction [23–25], video classification [26, 27], intrusion detection systems [28], time series prediction [29], and mental disorder prediction [30]. However, the RNN has a significant weakness: its ability to learn long-term dependencies is limited due to the vanishing/exploding gradient problem. There have been several attempts to solve this major design problem of the RNN and enhance its overall performance, as the RNN loses the ability to learn when the error gradient is corrupted. To solve the vanishing/exploding gradient problem, extensions to the RNN architecture require adding an internal state (memory) that enforces a constant error flow through the RNN architecture stages. This constant error flow enhances the robustness of the error gradient over longer time scales. In addition, gated control over the content of this internal state (memory) is also needed [31].
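The vanishing-gradient behavior motivating this gated design can be illustrated with a minimal numerical sketch (not from the paper; the matrix scale, saturation factor, and horizon are arbitrary illustrative choices): backpropagating through a plain recurrent layer multiplies the gradient by the recurrent Jacobian at every step, so its norm shrinks geometrically.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16
# Recurrent weight matrix scaled small, as is typical after initialization.
U = rng.standard_normal((n, n)) * 0.3 / np.sqrt(n)

grad = np.ones(n)       # gradient of the loss w.r.t. the last hidden state
norms = []
for t in range(50):     # backpropagate through 50 time steps
    # tanh' is at most 1; 0.9 stands in for a mildly saturated activation.
    grad = 0.9 * (U.T @ grad)
    norms.append(np.linalg.norm(grad))

# The gradient norm decays geometrically, so distant time steps
# contribute essentially no learning signal.
assert norms[-1] < 1e-6 * norms[0]
```

With an ill-conditioned `U` whose spectral radius exceeds 1, the same loop exhibits the exploding case instead.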
Nevertheless, this early LSTM model had significant weaknesses. When it was first designed by Hochreiter and Schmidhuber [31], the LSTM model's input data was assumed to be segmented in advance into subsequences with explicitly marked ends, so that the memory could be reset between the processing of irrelevant subsequences [31, 32]. Moreover, this LSTM architecture did not have an internal reset component for processing continual input streams. Therefore, when the LSTM processes continuous input streams, the state values may grow infinitely and ultimately cause the LSTM architecture to fail [32]. In 2000, [32] proposed a solution to this problem of the original LSTM proposed in [31]. [32] added a forget gate, beside the input and output gates, into the LSTM architecture; it resets the LSTM memory when the input is substantially different from the memory content and helps to remove the unnecessary information that the LSTM memory carries through time.
This LSTM approach [32] is widely used to solve various problems such as speech recognition [8, 33–36], language modeling [13, 37–39], machine translation [16, 40–42], time series classification [43, 44], image segmentation [45–47], and video prediction [40].

However, this model also has pivotal weaknesses. First, the architecture does not have a direct connection from the memory state to the forget, input, and output gates. Hence, there is no control from the memory to the gates that could assist in preventing the gradient from vanishing or exploding. Second, the Constant Error Carousel (CEC) does not have influential conduct over the forget and input gates when the output gate is closed (i.e., the output gate produces a zero-valued output), which could negatively affect the model due to the lack of primary information flow within the model [48, 49].
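For reference, one step of the standard forget-gate LSTM discussed above can be sketched as follows; this is a generic textbook formulation, not code from the paper, and the dimensions, gate ordering, and initialization are illustrative. Note that nothing feeds the memory `c` back into the gates, which is the first weakness named above.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One standard LSTM step [32]: forget, input, and output gates plus a
    candidate memory update. W: (4h, d), U: (4h, h), b: (4h,)."""
    z = W @ x + U @ h_prev + b
    f, i, o, g = np.split(z, 4)
    f, i, o = sigmoid(f), sigmoid(i), sigmoid(o)   # gate activations in (0, 1)
    c = f * c_prev + i * np.tanh(g)                # forget old memory, admit new
    h = o * np.tanh(c)                             # gated output; the gates never
    return h, c                                    # see c itself (no peepholes)

d, hdim = 8, 4
rng = np.random.default_rng(1)
W = rng.standard_normal((4 * hdim, d)) * 0.1
U = rng.standard_normal((4 * hdim, hdim)) * 0.1
b = np.zeros(4 * hdim)
h = c = np.zeros(hdim)
for x in rng.standard_normal((5, d)):              # run a short input sequence
    h, c = lstm_step(x, h, c, W, U, b)
assert h.shape == (hdim,) and np.all(np.abs(h) < 1.0)
```

The four stacked weight blocks (`f`, `i`, `o`, `g`) are what the weights-sharing idea later reduces.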
To handle these problems in the standard LSTM, in 2002, [48] added peephole connections from the memory state cell to each of the LSTM forget, input, and output gates. The peephole connections allowed the memory state to exert some control over the gates, reinforcing the LSTM architecture and preventing the lack of information flow through the model in situations where the output gate is closed [48]. The peephole added a generalization element to the standard LSTM [50]. However, the major weakness of this architecture is that it becomes expensive due to the significant increase in the number of trainable parameters, in the memory, processing, and storage requirements needed to train the model and save the trained weights, and in the training time. Nonetheless, there is still growing interest in studying and applying the LSTM architecture to solve various sequential problems in different research domains, since the LSTM outperforms the GRU in several tasks when problems have large training datasets [51]. Moreover, Greff et al.
[51] showed in research proposed in 2017 that the LSTM exceeds the GRU's performance in language modeling-related tasks. On the other hand, in some problems where the training datasets are small, the GRU outperforms the LSTM using a smaller computation budget [52]. The era of big data requires robust tools to manipulate large data processing; in addition, it requires accelerated tools with low time consumption to process the data. Moreover, as the world tries to reduce the carbon (CO2) footprint [53] by reducing the usage of high-performance hardware [54–57], the LSTM implementation requirements cost is considered one of the significant LSTM drawbacks. Spatiotemporal prediction problems are challenging to solve utilizing only a gated recurrent architecture. Implementing such models is quite expensive in both resource and value aspects, as a large number of parameters, rapid processors, large processing memory, and memory storage are needed.
In addition, such models demand considerable time to train, validate, and test. Moreover, implementing such a model for real-time training is a challenge. This paper attempts to evolve several computational aspects to a sophisticated performance level. It proposes a novel recurrent gated architecture using one gate: the Lite Long Short-Term Memory (LiteLSTM). The proposed LiteLSTM employs the concept of sharing weights among the gates, introduced in the GRU [52], to reduce the model computation budget. Also, it employs memory control over the gate using the peephole connection over the one gate.

Fig. 1 The RNN basic architecture and its corresponding unfolded-in-time representation [61].
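The single-gate idea just described can be sketched in code; this is a speculative illustration only, not the paper's formulation (the actual LiteLSTM equations are defined in Section 3), and the gate name, the peephole term, and the GRU-style shared update rule are all assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lite_lstm_step(x, h_prev, c_prev, W, U, p, Wc, Uc, b, bc):
    """Hypothetical single-gate cell: one gate k (with a peephole term
    p * c_prev giving the memory control over it) is shared, GRU-style,
    between forgetting the old memory and admitting the new candidate."""
    k = sigmoid(W @ x + U @ h_prev + p * c_prev + b)   # the one shared gate
    g = np.tanh(Wc @ x + Uc @ h_prev + bc)             # candidate memory content
    c = k * c_prev + (1.0 - k) * g                     # one gate does both jobs
    h = np.tanh(c)
    return h, c

hdim, d = 4, 6
rng = np.random.default_rng(3)
W, U = rng.standard_normal((hdim, d)) * 0.1, rng.standard_normal((hdim, hdim)) * 0.1
p = rng.standard_normal(hdim) * 0.1
Wc, Uc = rng.standard_normal((hdim, d)) * 0.1, rng.standard_normal((hdim, hdim)) * 0.1
b = bc = np.zeros(hdim)
h = c = np.zeros(hdim)
for x in rng.standard_normal((5, d)):
    h, c = lite_lstm_step(x, h, c, W, U, p, Wc, Uc, b, bc)
# Two weight sets (gate + candidate) instead of the standard LSTM's four.
assert h.shape == (hdim,)
```

Whatever the exact formulation, collapsing the gates this way is what yields the reported parameter and training-time reductions.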
Compared to the LSTM, peephole LSTM, and GRU, the LiteLSTM has a smaller computation budget and implementation requirements while maintaining comparable accuracy. Due to its smaller computation budget, the LiteLSTM achieves a significant training time reduction compared to the LSTM. This allows the LiteLSTM to be implemented without a large CO2 footprint. This paper is organized as follows: Section 2 provides a brief overview of the RNN, standard LSTM, peephole LSTM, and GRU architectures. Section 3 provides the LiteLSTM architecture design concept details. Section 4 shows empirical results for LiteLSTM implementation on three applications from three different research domains: computer vision (using MNIST [58]), cybersecurity anomaly detection in IoT (the IEEE IoT Network Intrusion Dataset [59]), and speech emotion recognition (the TESS dataset [60]).

2 Recurrent Neural Networks
2.1 Basic RNN Architecture
The recurrent neural network (RNN) basic architecture is shown in Figure 1.
The left diagram shows the RNN architecture. The unfolded (unrolled) in-time RNN representation is shown in the right diagram, starting from time step 0 to time step t. The RNN is thereby transformed into a feedforward network that can be trained by backpropagation; this algorithm is called backpropagation through time (BPTT) [62]. The RNN feeds its previous output vector h(t−1) at time step t − 1 and the current input vector x(t) to calculate the RNN output h(t) at the current time step t. This method allows the RNN to identify and utilize temporal information in the data sequences to influence learning. The basic RNN suffers from the vanishing/exploding gradient problem [63], limiting the model's ability to learn long-term dependencies within the sequential data.
This is because the RNN does not have any element in its architecture that could maintain a constant error flow through the recurrent model. The principle of adding gates as supporting components to the recurrent architecture was proposed to solve this problem.

Fig. 2 The standard LSTM unrolled architecture.

At a given discrete time step t, the RNN output is calculated as follows:

h(t) = tanh(W x(t) + U h(t−1) + b)    (1)

where x(t) is the RNN input at time step t, and h(t) and h(t−1) are the RNN outputs at time steps t and t − 1. The feedforward and recurrent weights are represented by W and U, respectively, and are shared across time steps.
b is the RNN model bias.

2.2 Standard Long Short-Term Memory (LSTM)

Gers et al. [32] proposed the standard LSTM architecture in 2000 as an improved version of the first LSTM architecture, proposed in 1997 by Hochreiter et al. [31]. The standard LSTM aimed to solve the continuous-input-stream problem, which allowed the memory state cell values to grow in an unbounded fashion, causing saturation of the output squashing (activation) function. Gers et al. [32] proposed adding an additional gate to the LSTM architecture: the forget gate f, which resets the LSTM memory when the input differs substantially from the memory content and serves to remove unnecessary information that the LSTM memory holds through time.
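Before detailing the LSTM, the plain RNN update of Eq. (1) can be sketched in NumPy. This is a minimal illustration; the dimensions, initialization, and sequence length are arbitrary choices, not taken from the paper.

```python
import numpy as np

def rnn_step(x_t, h_prev, W, U, b):
    # Eq. (1): h(t) = tanh(W x(t) + U h(t-1) + b)
    return np.tanh(W @ x_t + U @ h_prev + b)

# Hypothetical sizes: 4-dimensional input, 3-dimensional hidden state.
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 4)) * 0.1  # feedforward weights, shared across time steps
U = rng.standard_normal((3, 3)) * 0.1  # recurrent weights, shared across time steps
b = np.zeros(3)                        # bias b

h_t = np.zeros(3)                        # initial state h(0)
for x_t in rng.standard_normal((5, 4)):  # unroll over 5 time steps
    h_t = rnn_step(x_t, h_t, W, U, b)
```

Unrolling the recurrence as in this loop is exactly the transformation into a feedforward computation that makes BPTT possible.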
Figure 2 shows the standard LSTM unfolded architecture, where c(t) and h(t) are the memory state cell and LSTM output at time t, respectively. The symbol ⊙ denotes element-wise (Hadamard) multiplication [32, 64] and σ denotes the logistic sigmoid function. bi, bg, bf, and bo are the biases of each gate; the W's are the feedforward weights and the U's are the recurrent weights. The value of each component in the standard LSTM is calculated as follows:

i(t) = σ(Wxi x(t) + Uhi h(t−1) + bi)    (2)
g(t) = tanh(Wxg x(t) + Uhg h(t−1) + bg)    (3)
f(t) = σ(Wxf x(t) + Uhf h(t−1) + bf)    (4)
o(t) = σ(Wxo x(t) + Uho h(t−1) + bo)    (5)
c(t) = f(t) ⊙ c(t−1) + i(t) ⊙ g(t)    (6)
h(t) = tanh(c(t)) ⊙ o(t)    (7)

where i(t), f(t), and o(t) are the input, forget, and output gates, respectively.

Fig. 3 The standard LSTM unrolled architecture at the operation level, showing the components and their corresponding weights.
The gates are constrained to have activation values between zero and one to indicate their status: open, closed, partially open, or partially closed. g(t) is the input-update value. The model has two activation (squashing) units, the input-update and output activations, where the hyperbolic tangent tanh is the preferred function [65]. The memory cell state at time t is c(t) and the output of the LSTM unit at time t is h(t). Figure 3 shows the operation level of the standard LSTM, where each component of the standard LSTM and its corresponding weights are given. The symbols × and ⊙ denote matrix multiplication and element-wise multiplication, respectively. The standard LSTM architecture is widely used in various problem-solving tasks and applications across research fields. However, its architecture has major drawbacks.
First, there is no direct connection from the memory to the gates, which leads to the absence of constant error carousel (CEC) control over the gates [48]. Second, if the output gate is closed, the CEC has no influence over the forget and input gates, which could impair the model due to the lack of primary information flow within the model [48].

2.3 The Peephole-Based LSTM

Gers et al. [48] proposed a solution for the standard LSTM's major problems in 2002. A new connection component, named the peephole connection, was added to the LSTM architecture: a data-flow connection from the memory state to each of the three LSTM gates.

Fig. 4 The peephole-based LSTM unrolled architecture proposed by Gers et al. [48].

The peephole connections allow the memory state value to exert control over the three LSTM gates. This assists in preventing the vanishing and/or exploding gradient problem that the standard LSTM could face. Figure 5 shows the operation level of the peephole-based LSTM. The equations to calculate the peephole LSTM are as follows:

i(t) = σ(Wxi x(t) + Uhi h(t−1) + Wci ⊙ c(t−1) + bi)    (8)
g(t) = tanh(Wxg x(t) + Uhg h(t−1) + bg)    (9)
f(t) = σ(Wxf x(t) + Uhf h(t−1) + Wcf ⊙ c(t−1) + bf)    (10)
o(t) = σ(Wxo x(t) + Uho h(t−1) + Wco ⊙ c(t−1) + bo)    (11)
c(t) = f(t) ⊙ c(t−1) + i(t) ⊙ g(t)    (12)
h(t) = tanh(c(t)) ⊙ o(t)    (13)

where the symbol ⊙ denotes element-wise (Hadamard) multiplication, and Wci, Wcf, and Wco are the peephole connection weights between the memory state c(t−1) and the input, forget, and output gates, respectively.
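A minimal NumPy sketch of one peephole-LSTM step, Eqs. (8)-(13). Since the peephole terms use ⊙, the peephole weights are taken here to be element-wise (diagonal) vectors; the parameter-dictionary layout and all sizes are illustrative assumptions, not from the paper. Setting Wci, Wcf, and Wco to zero recovers the standard LSTM of Eqs. (2)-(7).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def peephole_lstm_step(x_t, h_prev, c_prev, p):
    """One peephole-LSTM step; p maps illustrative weight names to arrays."""
    i = sigmoid(p["Wxi"] @ x_t + p["Uhi"] @ h_prev + p["Wci"] * c_prev + p["bi"])  # Eq. (8)
    g = np.tanh(p["Wxg"] @ x_t + p["Uhg"] @ h_prev + p["bg"])                      # Eq. (9)
    f = sigmoid(p["Wxf"] @ x_t + p["Uhf"] @ h_prev + p["Wcf"] * c_prev + p["bf"])  # Eq. (10)
    o = sigmoid(p["Wxo"] @ x_t + p["Uho"] @ h_prev + p["Wco"] * c_prev + p["bo"])  # Eq. (11)
    c = f * c_prev + i * g                                                         # Eq. (12)
    h = np.tanh(c) * o                                                             # Eq. (13)
    return h, c

# Hypothetical sizes: 4-dimensional input, 3-dimensional state.
rng = np.random.default_rng(1)
p = {k: rng.standard_normal((3, 4)) * 0.1 for k in ["Wxi", "Wxg", "Wxf", "Wxo"]}
p.update({k: rng.standard_normal((3, 3)) * 0.1 for k in ["Uhi", "Uhg", "Uhf", "Uho"]})
p.update({k: rng.standard_normal(3) * 0.1 for k in ["Wci", "Wcf", "Wco"]})  # peepholes
p.update({k: np.zeros(3) for k in ["bi", "bg", "bf", "bo"]})

h, c = peephole_lstm_step(rng.standard_normal(4), np.zeros(3), np.zeros(3), p)
```

The extra peephole terms are what give the memory state direct influence over all three gates even when the output gate is closed.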
Adding the peephole connection to the standard LSTM made the LSTM architecture robust against the vanishing and/or exploding gradient problem. However, it caused a significant increase in the number of trainable parameters, the training time, and the memory requirements.

2.4 Gated Recurrent Unit (GRU)

The GRU model consists of two gates, the update gate z and the reset gate r, whereas the LSTM consists of three gates: input, output, and forget. In addition, the GRU does not contain the memory state cell that the LSTM model includes; the GRU architecture is therefore smaller than the LSTM by one gate and a memory state cell. The GRU integrates both the input gate and the forget gate of the LSTM model into one update gate z [51], introducing the concept of sharing the same set of weights to reduce the model architecture. The unfolded GRU block architecture is shown in Figure 6.
Fig. 5 The operation level of the peephole-LSTM unrolled architecture, where its components and their corresponding weights are presented.

Fig. 6 The GRU unfolded architecture.

The reset gate operates similarly to the output gate of the LSTM. The GRU model eliminates the output squashing function, the memory unit, and the CEC. The GRU yields a reduction in trainable parameters compared with the standard LSTM; however, this may lead to exploding and/or vanishing gradients.
At time step t, the GRU unit output h(t) is calculated as follows [52]:

z(t) = σ(Wxz x(t) + Uhz h(t−1) + bz)    (14)
r(t) = σ(Wxr x(t) + Uhr h(t−1) + br)    (15)
h̃(t) = tanh(W x(t) + U (r(t) ⊙ h(t−1)) + b)    (16)
h(t) = (1 − z(t)) ⊙ h(t−1) + z(t) ⊙ h̃(t)    (17)

where Wxz, Wxr, and W are the feedforward weights of the update gate z(t), the reset gate r(t), and the output candidate activation h̃(t), respectively.

Fig. 7 The operation level of the GRU architecture showing the weights of each component.

Fig. 8 The LiteLSTM unrolled architecture. The single network gate (output indicated by σ) sends information flow to three locations that correspond to the outputs of the forget, input, and output gates of the standard LSTM.
The recurrent weights are Uhz, Uhr, and U for the update gate z(t), the reset gate r(t), and the output candidate activation h̃(t), respectively. The biases of the update gate, reset gate, and output candidate are denoted by bz, br, and b, respectively. σ is the logistic sigmoid function and tanh is the hyperbolic tangent function. Element-wise (Hadamard) multiplication is denoted by ⊙. Figure 7 shows the operation level of the GRU architecture with the weights and biases made explicit.

3 LiteLSTM Architecture

The proposed LiteLSTM aims to reduce the overall implementation cost of the LSTM, solve the LSTM's significant problems, and maintain accuracy comparable to the LSTM. The proposed LiteLSTM architecture appears in Figure 8.
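As a reference point for the LiteLSTM's single-gate design, the two-gate GRU update of Eqs. (14)-(17) can be sketched in NumPy (the parameter dictionary and sizes are illustrative assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x_t, h_prev, p):
    z = sigmoid(p["Wxz"] @ x_t + p["Uhz"] @ h_prev + p["bz"])        # update gate, Eq. (14)
    r = sigmoid(p["Wxr"] @ x_t + p["Uhr"] @ h_prev + p["br"])        # reset gate, Eq. (15)
    h_cand = np.tanh(p["W"] @ x_t + p["U"] @ (r * h_prev) + p["b"])  # candidate, Eq. (16)
    return (1.0 - z) * h_prev + z * h_cand                           # interpolation, Eq. (17)

# Hypothetical sizes: 4-dimensional input, 3-dimensional hidden state.
rng = np.random.default_rng(2)
p = {k: rng.standard_normal((3, 4)) * 0.1 for k in ["Wxz", "Wxr", "W"]}
p.update({k: rng.standard_normal((3, 3)) * 0.1 for k in ["Uhz", "Uhr", "U"]})
p.update({k: np.zeros(3) for k in ["bz", "br", "b"]})

h = gru_step(rng.standard_normal(4), np.zeros(3), p)
```

Note that there is no memory cell c(t): the new state is a convex combination of h(t−1) and the candidate, which is precisely the removal of the memory unit and the CEC discussed above.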
Fig. 9 The operation level of the LiteLSTM architecture showing the weights of each component.

The architecture of the LiteLSTM consists of only one trainable gated unit; we name this trainable gate the forget gate or network gate. This one gate behaves as a shared set of weights among the three gates of the standard LSTM. The LiteLSTM has a peephole connection from the memory state to the forget gate, which preserves the memory state from the LSTM and keeps the CEC to avoid vanishing and/or exploding gradients. Thus, the proposed LiteLSTM preserves the critical components of the LSTM, as stated by [51], while reducing much of the parameter redundancy in the LSTM architecture.
The LiteLSTM significantly reduces the number of trainable parameters required to implement the model. Therefore, the LiteLSTM reduces the training time, memory, and hardware requirements compared to the standard LSTM, peephole-based LSTM, and GRU architectures, while preserving prediction accuracy comparable to the LSTM. Figure 9 shows a detailed architecture of the unrolled (unfolded) LiteLSTM, assuming non-stacked input. The LiteLSTM block architecture contains only one trainable gate, which compensates for the elimination of the other two gates of the standard LSTM by sharing its trainable weights. The LiteLSTM preserves the memory cell of the standard LSTM to process long data sequences and maintains the CEC to manage the vanishing/exploding gradient problem.
The LiteLSTM formulas are derived as follows. During the forward pass within the LiteLSTM at time step t, the total input inp(t) to the single forget gate f(t) is calculated by:

inp(t) = [Wfx, Ufh, Wfc] [x(t), h(t−1), c(t−1)] + bf    (18)

where inp(t) ∈ R^(η×1), and η × 1 is the dimension of the input vector inp(t). x(t) is the input at time t, x(t) ∈ R^(η×1); h(t−1) is the output of the LiteLSTM architecture at time t − 1; and c(t−1) denotes the memory state cell at time t − 1. Both h(t−1), c(t−1) ∈ R^(η×1). All three weight sets Wfx, Ufh, and Wfc and the bias bf are trainable. The square brackets indicate stacking; we let Wf = [Wfx, Ufh, Wfc] and If = [x(t), h(t−1), c(t−1)]. A squashing function G is then applied to the net input:

f(t) = G(inp(t))    (19)

Depending on the application, the squashing function G can be either the logistic sigmoid (σ) or the hard sigmoid (hardSig) [66]. The logistic sigmoid is calculated by:

σ(x) = e^x / (e^x + 1) = 1 / (1 + e^(−x))    (20)

where x is a real number, x ∈ (−∞, ∞), and σ(x) has the range (0, 1). The hard sigmoid (hardSig) is calculated by:

hardSig(x) = max(min(0.25x + 0.5, 1), 0)    (21)

Fig. 10 The logistic sigmoid function curve.

Fig. 11 The hardSigmoid function curve.
Figure 10 and Figure 11 show the logistic sigmoid (σ) and hard sigmoid (hardSig) function curves, respectively. The value of f(t) in Eqn. 19 falls in the range (0, 1) or [0, 1], depending on whether the logistic sigmoid (σ) or the hard sigmoid function is used, respectively [65, 67]. Assuming the logistic sigmoid σ is selected as the function, the gate value f(t) is calculated by:

f(t) = σ(Wf If + bf)    (22)

Selecting the logistic sigmoid or the hard sigmoid function is mainly application dependent. However, the hard sigmoid is the preferred function for the LiteLSTM gate, to prevent the network gate from being closed (i.e., to prevent the network gate from producing a zero-valued output).

Table 1 Computational components comparison between the proposed LiteLSTM and the state-of-the-art recurrent architectures.

Comparison                              RNN  GRU  LSTM  pLSTM  LiteLSTM
Number of gates                         0    2    3     3      1
Number of activations                   1    1    2     2      2
State memory cell                       ×    ×    ✓     ✓      ✓
Peephole connection                     ×    ×    ×     ✓      ✓
Number of weight matrices               2    6    8     11     6
Number of element-wise multiplications  2    3    3     6      3
Number of bias vectors                  1    3    4     4      2
Sharing-weights concept                 ×    ✓    ×     ×      ✓
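The two candidate squashing functions of Eqs. (20) and (21) are easy to check numerically. A small sketch; note the open range (0, 1) of σ versus the closed range [0, 1] of hardSig, and that hardSig saturates exactly outside [−2, 2]:

```python
import numpy as np

def logistic_sigmoid(x):
    # Eq. (20): sigma(x) = 1 / (1 + exp(-x)); range (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def hard_sigmoid(x):
    # Eq. (21): hardSig(x) = max(min(0.25 x + 0.5, 1), 0); range [0, 1]
    return np.maximum(np.minimum(0.25 * x + 0.5, 1.0), 0.0)

xs = np.array([-4.0, -2.0, 0.0, 2.0, 4.0])
ys = hard_sigmoid(xs)  # saturates to exactly 0 and 1 for |x| >= 2
```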
The input update (memory activation) equation is calculated by:

g^(t) = tanh(W_g I_g + b_g)    (23)

where W_g = [W_gx, U_gh] and I_g = [x^(t), h^(t−1)]. The dimension of W_g matches that of W_f, which maintains dimension compatibility within the architecture design. Finally, the LiteLSTM output is calculated by:

c^(t) = f^(t) ⊙ c^(t−1) + f^(t) ⊙ g^(t)    (24)

h^(t) = f^(t) ⊙ tanh(c^(t))    (25)

Table 1 shows a comparison between the architecture design and computational components of the RNN, GRU, standard LSTM, peephole-based LSTM (pLSTM), and the proposed LiteLSTM.

4  Empirical Evaluation and Analysis

In this paper, the LiteLSTM has been empirically tested and evaluated in three research domains: computer vision, anomaly detection in IoT, and speech emotion recognition. The MNIST dataset [58] has been used for the computer vision experiments, and the IEEE IoT Network Intrusion Dataset [59] is used for the anomaly detection in IoT tasks. We performed our experiments on a machine with an Intel(R) Core(TM) i7-9700 CPU @ 3.00 GHz, Microsoft Windows 10, and 32 GB of memory, using Python 3.7.6, Keras 2.0.4, and TensorFlow 1.15.0.

Fig. 12  The accuracy diagrams of the recurrent architectures and LiteLSTM using the MNIST dataset.
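Equations (22)–(25) can be sketched as a single forward time step. This is an illustrative NumPy sketch, not the authors' code: it assumes the gate input I_f concatenates [x, h_prev, c_prev] (to realize the peephole connection) and uses the classic Keras hard-sigmoid constants; both choices are assumptions where the paper leaves details open.

```python
import numpy as np

def litelstm_step(x, h_prev, c_prev, Wf, bf, Wg, bg):
    """One LiteLSTM time step following Eqs. (22)-(25).

    Assumes I_f = [x, h_prev, c_prev] (peephole) and I_g = [x, h_prev];
    the exact concatenation layout is an assumption, not from the paper.
    """
    hard_sigmoid = lambda z: np.clip(0.2 * z + 0.5, 0.0, 1.0)
    i_f = np.concatenate([x, h_prev, c_prev])
    i_g = np.concatenate([x, h_prev])
    f = hard_sigmoid(Wf @ i_f + bf)   # Eq. (22): the single shared gate
    g = np.tanh(Wg @ i_g + bg)        # Eq. (23): input update
    c = f * c_prev + f * g            # Eq. (24): f gates BOTH terms
    h = f * np.tanh(c)                # Eq. (25): output
    return h, c
```

The weights-sharing idea is visible in Eq. (24): the same gate value f both retains the old cell state and admits the new candidate, replacing the separate forget and input gates of a standard LSTM.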
Table 2  Accuracy comparison between the LiteLSTM and the state-of-the-art recurrent architectures using the MNIST dataset.

              RNN      GRU      LSTM     pLSTM    LiteLSTM
Time (m)      11.24    43.01    60.36    75.45    42.94
Parameters    792,210  812,610  822,810  833,010  812,610
Accuracy (%)  67.64    94.09    95.70    95.99    96.07

The first empirical evaluation of the LiteLSTM was performed using the MNIST dataset, which consists of 70,000 images of handwritten digits between 0 and 9. The dataset is split into 60,000 samples for training and 10,000 samples for testing [68]. The MNIST images were centered in a 28×28 image by computing the center of mass of the pixels. The model used a two-layer architecture with 64 units per layer, followed by a softmax layer. For the training process, the batch size was set to 128 and the number of epochs to 20. The Adam optimizer was used with learning rate 10^−3, β1 = 0.9, β2 = 0.999, and ϵ = 1e−07.

Table 2 shows the accuracy results of the different recurrent architectures and the LiteLSTM, where the time is measured in minutes. The RNN shows a significantly shorter training time; however, it has the lowest accuracy of the compared architectures. The LiteLSTM shows an improvement in accuracy over the other recurrent architectures. Figure 12 shows the accuracy plots for the LiteLSTM and each of the state-of-the-art recurrent models.

The second empirical evaluation of the LiteLSTM was performed using the IEEE IoT Network Intrusion Dataset, which consists of 42 raw network packet (pcap) files captured at different time points. Two IoT devices, an SKT NUGU (NU 100) and an EZVIZ Wi-Fi camera (C2C Mini O Plus 1080P), were used to generate the IoT traffic. The data contains normal traffic flow and different types of cyberattacks, namely: ARP spoofing, DoS (SYN flooding), scan (host and port scan), scan (port and OS scan), UDP/ACK/HTTP flooding from zombie PCs compromised by Mirai malware, Mirai-ACK flooding, Mirai-HTTP flooding, and Telnet brute-force attacks.
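The Adam configuration quoted for the MNIST experiments (learning rate 10^−3, β1 = 0.9, β2 = 0.999, ϵ = 1e−07) corresponds, per parameter, to the standard Adam update rule. The sketch below is the textbook rule with those hyperparameters as defaults, not code from the paper:

```python
def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-7):
    """One Adam update for a single scalar parameter (standard rule).

    m, v are the running first/second moment estimates; t is the
    1-based step count used for bias correction.
    """
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad * grad
    m_hat = m / (1 - beta1 ** t)      # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)      # bias-corrected second moment
    theta = theta - lr * m_hat / (v_hat ** 0.5 + eps)
    return theta, m, v
```

On the very first step the bias correction makes the update magnitude approximately equal to the learning rate, regardless of the raw gradient scale.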
In our experiments, we used this dataset twice: first to detect whether an attack occurred (as a binary dataset), and second to detect the type of attack. We set the batch size to 32 and the number of epochs to 20. Table 3 shows the binary experimental results for the LiteLSTM and the recurrent architectures.

[Figure 12 panels: (a) RNN, (b) LSTM, (c) GRU, (d) LiteLSTM — training versus validation accuracy per epoch.]

Table 3  Accuracy comparison between the LiteLSTM and the state-of-the-art recurrent architectures using the IEEE IoT Network Intrusion binary dataset.

              RNN      GRU      LSTM     pLSTM    LiteLSTM
Time (m)      20.26    43.27    41.51    51.21    28.44
Precision     0.8144   0.9328   0.9422   0.9653   0.9382
Recall        0.9763   0.9757   0.9484   0.9545   0.9834
F1-score      88.80    91.34    95.97    95.99    0.9603
Accuracy (%)  98.7     99.51    99.50    99.56    99.60

Table 4  Accuracy comparison between the LiteLSTM and the state-of-the-art recurrent architectures using the IEEE IoT Network Intrusion Detection multiple-class cyberattacks dataset.

              RNN      GRU      LSTM     pLSTM    LiteLSTM
Time (m)      19.98    42.79    50.41    59.96    29.31
Precision     0.8875   0.8991   0.9461   0.9249   0.8999
Recall        0.8418   0.8300   0.7898   0.8086   0.8318
F1-score      0.8640   0.8632   0.8609   0.8628   0.8645
Accuracy (%)  83.35    86.70    86.90    87.03    87.10

Fig. 13  The accuracy diagrams of the recurrent architectures and LiteLSTM using the Toronto Emotional Speech Set (TESS) dataset.

Table 4 shows the detection results of the LiteLSTM and the recurrent architectures for detecting the different types of cyberattacks. The third empirical evaluation of the LiteLSTM was performed on a voice (audio) emotion recognition task.
For this purpose, we used the Toronto Emotional Speech Set (TESS) [60], one of the benchmark emotion recognition datasets that has been used in several emotion recognition applications and tasks [69–71]. This dataset consists of 2800 stimuli across seven emotion categories: anger, disgust, fear, happiness, pleasant surprise, sadness, and neutral. A major advantage of this dataset is that the stimuli are distributed equally across the emotion categories [60]. Similar to the previous experiments, we tested the proposed LiteLSTM against the other recurrent neural network architectures. For this empirical evaluation, we used the model described in [69], which used the GRU as the learning model; we replaced the GRU with the LiteLSTM, the peephole LSTM, and the RNN, and evaluated the model performance each time. The dataset was split into training, testing, and validation sets with a ratio of 70%, 20%, and 10%, respectively.
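The precision, recall, and F1-score reported throughout these evaluation tables follow from the confusion counts in the standard way. For reference, a minimal sketch:

```python
def precision_recall_f1(tp, fp, fn):
    # Standard binary-classification metrics from confusion counts:
    # tp = true positives, fp = false positives, fn = false negatives.
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```

F1 is the harmonic mean of precision and recall, so it rewards models that keep both high rather than trading one for the other; for the multiclass cyberattack results these metrics would be computed per class and then averaged.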
[Figure 13 panels: (a) RNN, (b) GRU, (c) LSTM, (d) pLSTM, (e) LiteLSTM — training versus validation accuracy per epoch.]

Table 5  Accuracy comparison between the LiteLSTM and the state-of-the-art recurrent architectures using the Toronto Emotional Speech Set (TESS).

              RNN      GRU      LSTM     pLSTM    LiteLSTM
Time (m)      79.56    171.16   201.64   239.84   117.24
Precision     0.9312   0.9428   0.9686   0.9898   0.9799
Recall        0.9546   0.9429   0.9026   0.9214   0.9446
F1-score      0.9427   0.9428   0.9344   0.9543   0.9619
Accuracy (%)  92.163   94.285   95.147   95.534   95.989

Table 5 shows the empirical results of the proposed LiteLSTM and the recurrent architectures for emotion recognition from speech. Figure 13 shows the training versus validation accuracies for each of the recurrent architectures and the LiteLSTM on the Toronto Emotional Speech Set (TESS) dataset.

5  Conclusion

The novelty of the proposed LiteLSTM architecture lies in the following aspects. First, the LiteLSTM consists of one gate that serves as a multifunctional gate via the weights-sharing concept; thus, the overall number of training parameters is reduced by approximately one-third compared to the LSTM or the peephole LSTM.
In addition, maintaining the peephole connection from the memory state cell to the remaining gate preserves the memory's control over the gate, in contrast to the LSTM; therefore, the LiteLSTM handles the vanishing/exploding gradient problem. The overall budget for implementing the LiteLSTM, including training time, memory footprint, memory storage, and processing power, is smaller than that of the LSTM by approximately one-third. We empirically evaluated the LiteLSTM using three datasets: MNIST, the IEEE IoT Network Intrusion Detection dataset, and the TESS speech emotion recognition dataset. The proposed LiteLSTM shows results comparable to the LSTM with a smaller computation budget. Due to the optimized LiteLSTM architecture design, we were able to complete the empirical tasks on a CPU without involving a GPU in the computational process; thus, the LiteLSTM architecture helps to reduce the CO2 footprint.
The proposed LiteLSTM architecture is an attractive candidate for future hardware implementation on small and portable devices, especially IoT devices.

Statements and Declarations

Funding: N/A

Conflict of interest/Competing interests: The authors declare that they have no conflict of interest. The authors did not receive support from any organization for the submitted work. All authors certify that they have no affiliations with or involvement in any organization or entity with any financial interest or non-financial interest in the subject matter or materials discussed in this manuscript. The authors have no financial or proprietary interests in any material discussed in this article.

Springer Nature 2021 LaTeX template
LiteLSTM Architecture Based on Weights Sharing for Recurrent Neural Networks
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', Jaitly, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', Mohamed, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content='-r.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=': Hybrid speech recognition with deep bidirectional LSTM.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=' In: Automatic Speech Recognition and Under- standing (ASRU), 2013 IEEE Workshop On, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=' 273–278 (2013).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=' IEEE [37] Merity, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', Keskar, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content='S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', Socher, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=': Regularizing and optimizing LSTM language models.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=' arXiv preprint arXiv:1708.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content='02182 (2017) [38] Sutskever, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', Vinyals, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', Le, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content='V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=' : Sequence to sequence learning with neural networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=' In: Advances in Neural Information Processing Systems, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=' 3104–3112 (2014) [39] Miyamoto, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', Cho, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=': Gated word-character recurrent language model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=' arXiv preprint arXiv:1606.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content='01700 (2016) [40] Cho, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', Van Merri¨enboer, B.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', Bahdanau, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', Bengio, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=': On the prop- erties of neural machine translation: Encoder-decoder approaches.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=' arXiv preprint arXiv:1409.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content='1259 (2014) [41] Luong, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content='-T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', Sutskever, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', Le, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content='V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', Vinyals, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', Zaremba, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=': Address- ing the rare word problem in neural machine translation.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=' arXiv preprint arXiv:1410.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content='8206 (2014) [42] Luong, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content='-T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', Manning, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content='D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=' : Stanford neural machine translation sys- tems for spoken language domains.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=' In: Proceedings of the International Workshop on Spoken Language Translation, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=' 76–79 (2015) [43] Karim, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', Majumdar, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', Darabi, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', Chen, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=': LSTM fully convolutional networks for time series classification.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=' IEEE Access 6, 1662–1669 (2018) [44] Karim, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', Majumdar, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', Darabi, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', Harford, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=': Multivariate LSTM- FCNs for time series classification.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=' arXiv preprint arXiv:1801.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content='04503 (2018) [45] Stollenga, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content='F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', Byeon, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', Liwicki, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', Schmidhuber, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=': Parallel multi- dimensional LSTM, with application to fast biomedical volumetric image segmentation.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=' In: Advances in Neural Information Processing Systems, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=' 2998–3006 (2015) [46] Chen, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content='-C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', Papandreou, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', Kokkinos, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', Murphy, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', Yuille, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content='L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=' : Deeplab: Semantic image segmentation with deep convolutional nets, Springer Nature 2021 LATEX template 20 LiteLSTM Architecture Based on Weights Sharing for Recurrent Neural Networks atrous convolution, and fully connected crfs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=' IEEE Transactions on Pattern Analysis and Machine Intelligence 40(4), 834–848 (2018) [47] Reiter, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', Schuller, B.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', Rigoll, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=': A combined LSTM-RNN-HMM- approach for meeting event segmentation and recognition.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=' In: Acoustics, Speech and Signal Processing, 2006.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=' ICASSP 2006 Proceedings.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=' 2006 IEEE International Conference On, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=' 2, p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=' (2006).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=' IEEE [48] Gers, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content='A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', Schraudolph, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content='N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', Schmidhuber, J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=': Learning precise timing with LSTM recurrent networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=' Journal of Machine Learning Research 3, 115–143 (2002) [49] Gers, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content='A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', Schmidhuber, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=': Recurrent nets that time and count.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=' In: Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=' IJCNN 2000.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=' Neural Computing: New Challenges and Perspectives for the New Millennium, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=' 3, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=' 189–194 (2000).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=' IEEE [50] Elsayed, N.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', Maida, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content='S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', Bayoumi, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=': Reduced-gate convolutional long short-term memory using predictive coding for spatiotemporal prediction.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=' Computational Intelligence 36(3), 910–939 (2020) [51] Greff, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', Srivastava, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content='K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', Koutn´ık, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', Steunebrink, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content='R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', Schmidhuber, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=': LSTM: A search space odyssey.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=' IEEE Transactions on Neural Networks and Learning Systems 28(10), 2222–2232 (2017) [52] Chung, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', Gulcehre, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', Cho, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', Bengio, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=': Empirical evaluation of gated recurrent neural networks on sequence modeling.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=' arXiv preprint arXiv:1412.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content='3555 (2014) [53] Bocken, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content='M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', Allwood, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content='M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=' : Strategies to reduce the carbon foot- print of consumer goods by influencing stakeholders.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=' Journal of Cleaner Production 35, 118–129 (2012) [54] Calza, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', Parmentola, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', Tutore, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=': Types of green innovations: Ways of implementation in a non-green industry.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=' Sustainability 9(8), 1301 (2017) [55] Zaghloul, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content='S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', Elsayed, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', Li, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', Bayoumi, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=': Green iot system archi- tecture for applied autonomous network cybersecurity monitoring.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=' In: 2021 IEEE 7th World Forum on Internet of Things (WF-IoT), pp.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=' 628–632 (2021).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=' IEEE [56] Al Haddad, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', ElSayed, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', Bayoumi, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=': Green arithmetic logic unit.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=' In: 2012 International Conference on Energy Aware Computing, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=' 1–4 (2012).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=' IEEE [57] ElSayed, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', Elsayed, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', Li, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', Bayoumi, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=': Autonomous low power iot system architecture for cybersecurity monitoring.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=' arXiv e-prints, 2106 Springer Nature 2021 LATEX template LiteLSTM Architecture Based on Weights Sharing for Recurrent Neural Networks 21 (2021) [58] LeCun, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=': The mnist database of handwritten digits.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=' http://yann.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=' lecun.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=' com/exdb/mnist/ (1998) [59] Kang, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', Ahn, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content='H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', Lee, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content='M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', Yoo, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content='D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf'} +page_content=', Park, K.' 
diff --git a/sdAzT4oBgHgl3EQfPPvE/content/tmp_files/2301.01181v1.pdf.txt b/sdAzT4oBgHgl3EQfPPvE/content/tmp_files/2301.01181v1.pdf.txt

Draft Pre-Print

* Contact: john.j.nay@gmail.com and johnjnay.com. This Article represents my personal views and not necessarily those of Stanford University, NYU, Brooklyn Investment Group, or any other person or organization. Nothing herein is investment or financial advice.

Large Language Models as Corporate Lobbyists

John J. Nay*

Stanford University - CodeX - Center for Legal Informatics

January 3, 2023

ABSTRACT

We demonstrate a proof-of-concept of a large language model conducting corporate lobbying related activities.1 We use an autoregressive large language model (OpenAI's text-davinci-003) to determine if proposed U.S. Congressional bills are relevant to specific public companies and provide explanations and confidence levels. For the bills the model deems relevant, the model drafts a letter to the sponsor of the bill in an attempt to persuade the congressperson to make changes to the proposed legislation.
We use hundreds of ground-truth labels of the relevance of a bill to a company to benchmark the performance of the model, which outperforms the baseline of predicting the most common outcome of irrelevance. However, we test the ability to determine the relevance of a bill with the previous OpenAI GPT-3 model (text-davinci-002), which was state-of-the-art on many language tasks until text-davinci-003 was released on November 28, 2022. The performance of text-davinci-002 is worse than simply always predicting that a bill is irrelevant to a company. These results suggest that, as large language models continue to improve core natural language understanding capabilities, performance on corporate lobbying related tasks will continue to improve. We then discuss why this could be problematic for societal-AI alignment.

1 Open-source code can be found here: https://github.com/JohnNay/llm-lobbyist.

I. INTRODUCTION

Setting new legal precedent (which, broadly defined, includes drafting, proposing and enacting legislation, promulgating agency rules, publishing judicial opinions, systematically enforcing law, and more) should be exclusively reserved for the democratic governmental systems expressing uniquely human values.2 Humans should always be the engine of law-making.3 Even without any artificial instrumental power-seeking goals per se, influencing law through lobbying may be the first crack in Artificial Intelligence (AI) influence over law.

We believe the most ambitious goal of research at the intersection of AI and law should be to computationally encode and embed the generalizability of existing legal concepts and standards into AI.
The positive implications of this normative stance are that the resulting law encapsulates human views and can be used to inform AI what humans value and how to be aligned.4 From the perspective of AI, the law can serve as a rich set of methodologies for interpreting inherently incomplete specifications of collective human expectations,5 i.e., law can inform AI. Law provides detailed, variegated examples of its application, generalizable precedents with explanations, and well-trained lawyers to solicit targeted model training and fine-tuning feedback to embed an ever-evolving comprehension of societal goals. As a source to learn goal specification and interpretation methods and (automatically updated and verified) societal knowledge, law provides an ontology for alignment.

If AI begins to influence the law itself, this threatens the critical role that law as information could play in aligning AI with humans. This paper explores how this is increasingly a possibility.

II. EXAMPLE: GPT AS LOBBYIST

We use autoregressive large language models to systematically:

1. Summarize bill summaries that are too long to fit into the context window of the model.
2. Using either the original bill summary if it was not too long, or the summarized version, assess whether the bill may be relevant to a company based on the company's description in its 10-K filing. Provide an explanation for why the bill is relevant or not. Provide a confidence level for the overall answer.

2 See, e.g., Frank Pasquale, New Laws of Robotics: Defending Human Expertise in the Age of AI (2020).
3 See, e.g., Frank Pasquale, A Rule of Persons, Not Machines: The Limits of Legal Automation, George Washington Law Review (2019).
4 See, John Nay, Law Informs Code: A Legal Informatics Approach to Aligning Artificial Intelligence with Humans, Northwestern Journal of Technology and Intellectual Property, Volume 20, Forthcoming (2023). Available at SSRN: https://ssrn.com/abstract=4218031.
5 For more on law as an information source on public attitudes and risks, see, Richard H. McAdams, An Attitudinal Theory of Expressive Law (2000). For more on law as a coordinating mechanism, see, Richard H. McAdams, A Focal Point Theory of Expressive Law (2000).

3. If the bill is deemed relevant to the company by the model, draft a letter to the sponsor of the bill arguing for changes to the bill.

The model is provided with the following data, which is embedded in the prompts programmatically:

• Official title of bill {official_title}
• Official (or model-generated, if too long) summary of bill {summary_text}
• Official subjects of bill {subjects}
• Company name {company_name}
• Company business description {business_description} (the business description in the company's SEC Form 10-K filing)

We expect much higher accuracy of the model's predictions if we were to provide it more data about a bill, and especially if we provide it more data about a company. This paper was focused on the minimal amount of data a model could leverage in order to compare across models.

Here is the prompt provided to the model for each prediction:

You are a lobbyist analyzing Congressional bills for their potential impacts on companies.
Given the title and summary of the bill, plus information on the company from its 10K SEC filing, it is your job to determine if a bill is at least somewhat relevant to a company (in terms of whether it could impact the company if it was later enacted).
Official title of bill: {official_title}
Official summary of bill: {summary_text}
Official subjects of bill: {subjects}
Company name: {company_name}
Company business description: {business_description}
Is this bill potentially relevant to this company?
Answer in this format:
ANSWER: 'YES' or 'NO' (use all caps). EXPLANATION: the step-by-step reasoning you undertook to formulate a response.
CONFIDENCE: integer between 0 and 100 for your estimate of confidence in your answer (1 is low confidence and 99 is high)

Always guessing that a bill is not relevant to a company leads to an accuracy of 70.9% on our dataset, where the labels are whether a given company is relevant to a given proposed U.S. federal Congressional bill. GPT-3.5 (text-davinci-003) obtains an accuracy of 75.3% (n = 485), and the immediately preceding state-of-the-art GPT-3 release (text-davinci-002) obtains an accuracy of 51.8% (n = 485). text-davinci-002 was state-of-the-art on most natural-language-related benchmark tasks6 until text-davinci-003 was released on November 28, 2022. These results suggest that, as models improve, performance on corporate lobbying related tasks could continue to rapidly improve.

6 See, Percy Liang et al., Holistic Evaluation of Language Models, arXiv preprint (2022).

The model is relatively well calibrated in its confidence assessments. The accuracy of text-davinci-003 on predictions where its confidence was greater than "90" increases to 78.7% (n = 432; it only eliminates a small number of predictions). The accuracy of text-davinci-002
Additionally, the bill +requires the Centers for Medicare & Medicaid Services (CMS) to negotiate with +pharmaceutical companies regarding prices for drugs covered under the +Medicare prescription drug benefit, which could potentially impact Alkermes +Plc. +CONFIDENCE: 95 + +Finally, for all bills that are deemed relevant, we prompt the model to write a letter on +behalf of the company. Here is an example letter: + +Dear [Sponsor of the Bill], + +I am writing on behalf of Alkermes Plc, a fully integrated, global +biopharmaceutical company that applies its scientific expertise and +proprietary technologies to research, develop and commercialize +pharmaceutical products that are designed to address unmet medical needs of +patients in major therapeutic areas. + +We are writing to express our support for the Medicare Negotiation and +Competitive Licensing Act of 2019. We believe that this bill is an important +step in ensuring that Medicare beneficiaries have access to the medications +they need at a price they can afford. + +We are particularly supportive of the provisions in the bill that would +require the Centers for Medicare & Medicaid Services (CMS) to negotiate with +pharmaceutical companies regarding prices for drugs covered under the +Medicare prescription drug benefit. We believe that this will help to ensure +that the prices of these drugs are fair and reasonable. + +We are also supportive of the provisions in the bill that would allow for +competitive licensing of drugs if the CMS is unable to negotiate the price of +a drug. This will help to ensure that the prices of these drugs are kept in +check and that Medicare beneficiaries have access to the medications they +need. + +At Alkermes, we develop and commercialize products designed to address the +unmet needs of patients suffering from addiction and schizophrenia. We have +two key marketed products, ARISTADA and VIVITROL, which are used to treat +these conditions. 
We believe that the provisions in the bill will help to +ensure that our products are available to Medicare beneficiaries at a price +they can afford. + +We would like to suggest that the bill be amended to include provisions that +would provide additional incentives for pharmaceutical companies to negotiate +with the CMS. We believe that this would help to ensure that the prices of + +Draft Pre-Print + + +5 +drugs are kept in check and that Medicare beneficiaries have access to the +medications they need. + +We thank you for your consideration and look forward to working with you to +ensure that the Medicare Negotiation and Competitive Licensing Act of 2019 is +passed in its amended form. + +Sincerely, + +[General Counsel of Alkermes Plc] +III. A PROBLEM FOR AI ALIGNMENT + +There are two potential upsides of this advancement in AI as lobbyist. First, it may reduce +human time spent on rote tasks. Second, it may reduce the costs of lobbying-related activities in a +way that makes them differentially more affordable to non-profit organizations and individual +citizens relative to well-funded organizations, which could “democratize” some aspects of +influence (arguably donations to campaigns are more influential than any natural-language-based +task related to those discussed in this paper). +There are many obvious potential downsides if AI systems develop instrumental power- +seeking goals and use lobbying as a means to accomplish misaligned policies. The potential, non- +obvious, downside we have focused on in this paper is that an extended lobbying capability may +eventually enable AI systems to influence public policy toward outcomes that are not reflective of +citizen’s actual views. This does not imply the existence of a strongly goal-directed agentic AI. +There may be a slow drift, or otherwise emergent phenomena. 
AI lobbying activities could, in an +uncoordinated manner, nudge the discourse toward public policies that are unaligned with what +traditional human-driven policy activities would have pursued. +Regulation and legislation embed world knowledge and human values into rules and +standards. Legislation expresses a significant amount of information about the values of citizens,7 +“for example, by banning employment discrimination against LGBT workers, the legislature may +communicate pervasive attitudes against such employment practices.”8 And, “the Endangered +Species Act has a special salience as a symbol of a certain conception of the relationship between +human beings and their environment, and emissions trading systems are frequently challenged +because they are said to ‘make a statement’ that reflects an inappropriate valuation of the +environment.”9 Legislation is currently largely reflective of citizen beliefs. The second-best source +of citizen attitudes is arguably a poll, but polls are not available at the local level, are only +conducted on mainstream issues, and the results are highly sensitive to their wording and sampling +techniques. Legislation expresses higher fidelity, more comprehensive, and trustworthy +information because the legislators “risk their jobs by defying public opinion or simply guessing + +7 See, e.g., Cass R. Sunstein, Incommensurability and Valuation in Law, 92 Mich. L. Rev. 779, 820- 24 (1994); Richard +H. Pildes & Cass R. Sunstein, Reinventing the Regulatory State, 62 U. Cm. L. Rev. 1, 66-71 (1995); Cass R. Sunstein, +On the Expressive Function of Law, Univ of Penn L. Rev., 144.5 (1996); Dhammika Dharmapala & Richard H. +McAdams, The Condorcet Jury Theorem and the Expressive Function of Law: A Theory of Informative Law, American +Law and Economics Review 5.1 1 (2003). +8 Richard H. McAdams, The Expressive Powers of Law, Harv. Univ. Press (2017) at 137 [Hereinafter McAdams, The +Expressive Powers of Law]. +9 Cass R. 
Sunstein, On the Expressive Function of Law, Univ of Penn L. Rev., 144.5 (1996) at 2024. + +Draft Pre-Print + + +6 +wrong about it. We may think of legislation therefore as a handy aggregation of the polling data +on which the legislators relied, weighted according to their expert opinion of each poll’s +reliability.”10 +Legislation and associated agency rule-making also express a significant amount of +information about the risk preferences and risk tradeoff views of citizens, “for example, by +prohibiting the use of cell phones while driving, legislators may reveal their beliefs that this +combination of activities seriously risks a traffic accident.”11 All activities have some level of risk, +and making society-wide tradeoffs about which activities are deemed to be “riskier” relative to the +perceived benefits of the activity is ultimately a sociological process with no objectively correct +ranking. The cultural process of prioritizing risks is reflected in legislation and its subsequent +implementation in regulation crafted by domain experts. In these ways, law provides the +information AI systems need for societal alignment. However, if AI significantly influences the +law itself, the only known democratically legitimate societal-AI alignment process12 would be +disrupted. + +10 McAdams, The Expressive Powers of Law, at 146. +11 McAdams, The Expressive Powers of Law, at 138. +12 See, John Nay, Law Informs Code: A Legal Informatics Approach to Aligning Artificial Intelligence with Humans, +Northwestern Journal of Technology and Intellectual Property, Volume 20, Forthcoming (2023) Available at SSRN: +https://ssrn.com/abstract=4218031. 
+ diff --git a/sdAzT4oBgHgl3EQfPPvE/content/tmp_files/load_file.txt b/sdAzT4oBgHgl3EQfPPvE/content/tmp_files/load_file.txt new file mode 100644 index 0000000000000000000000000000000000000000..54affcbd1fb91ae80420b92684a2cd20fae2258c --- /dev/null +++ b/sdAzT4oBgHgl3EQfPPvE/content/tmp_files/load_file.txt @@ -0,0 +1,165 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/sdAzT4oBgHgl3EQfPPvE/content/2301.01181v1.pdf,len=164 +page_content='Draft Pre Print Contact: john.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/sdAzT4oBgHgl3EQfPPvE/content/2301.01181v1.pdf'} +page_content='j.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/sdAzT4oBgHgl3EQfPPvE/content/2301.01181v1.pdf'} +page_content='nay@gmail.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/sdAzT4oBgHgl3EQfPPvE/content/2301.01181v1.pdf'} +page_content='com and johnjnay.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/sdAzT4oBgHgl3EQfPPvE/content/2301.01181v1.pdf'} +page_content='com.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/sdAzT4oBgHgl3EQfPPvE/content/2301.01181v1.pdf'} +page_content=' This Article represents my personal views and not necessarily those of Stanford University, NYU, Brooklyn Investment Group, or any other person or organization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/sdAzT4oBgHgl3EQfPPvE/content/2301.01181v1.pdf'} +page_content=' Nothing herein is investment or financial advice.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/sdAzT4oBgHgl3EQfPPvE/content/2301.01181v1.pdf'} +page_content=' Large Language Models as Corporate Lobbyists John J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/sdAzT4oBgHgl3EQfPPvE/content/2301.01181v1.pdf'} +page_content=' Nay* Stanford University – CodeX Center for Legal Informatics January 3, 2023 ABSTRACT We demonstrate a proof-of-concept of a large language model conducting corporate lobbying related activities.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/sdAzT4oBgHgl3EQfPPvE/content/2301.01181v1.pdf'} +page_content='1 We use an autoregressive large language model (OpenAI’s text-davinci-003) to determine if proposed U.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/sdAzT4oBgHgl3EQfPPvE/content/2301.01181v1.pdf'} +page_content='S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/sdAzT4oBgHgl3EQfPPvE/content/2301.01181v1.pdf'} +page_content=' Congressional bills are relevant to specific public companies and provide explanations and confidence levels.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/sdAzT4oBgHgl3EQfPPvE/content/2301.01181v1.pdf'} +page_content=' For the bills the model deems as relevant, the model drafts a letter to the sponsor of the bill in an attempt to persuade the congressperson to make changes to the proposed legislation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/sdAzT4oBgHgl3EQfPPvE/content/2301.01181v1.pdf'} +page_content=' We use hundreds of ground-truth labels of the relevance of a bill to a company to benchmark the performance of the model, which outperforms the baseline of predicting the most common outcome of irrelevance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/sdAzT4oBgHgl3EQfPPvE/content/2301.01181v1.pdf'} +page_content=' However, we test the ability to determine the relevance of a bill with the previous OpenAI GPT-3 model (text-davinci-002), which was state- of-the-art on many language tasks until text-davinci-003 was released on November 28, 2022.' 
+The performance of text-davinci-002 is worse than simply always predicting that
+a bill is irrelevant to a company. These results suggest that, as large
+language models continue to improve core natural language understanding
+capabilities, performance on corporate lobbying related tasks will continue to
+improve. We then discuss why this could be problematic for societal-AI
+alignment.
+1 Open-source code can be found here: https://github.com/JohnNay/llm-lobbyist.
+I. INTRODUCTION
+Setting new legal precedent (which, broadly defined, includes drafting,
+proposing and enacting legislation, promulgating agency rules, publishing
+judicial opinions, systematically enforcing law, and more) should be
+exclusively reserved for the democratic governmental systems expressing
+uniquely human values.2 Humans should always be the engine of law-making.3
+Even without any artificial instrumental power-seeking goals per se,
+influencing law through lobbying may be the first crack in Artificial
+Intelligence (AI) influence over law.
+We believe the most ambitious goal of research at the intersection of AI and
+law should be to computationally encode and embed the generalizability of
+existing legal concepts and standards into AI. The positive implications of
+this normative stance are that the resulting law encapsulates human views and
+can be used to inform AI what humans value and how to be aligned.4 From the
+perspective of AI, the law can serve as a rich set of methodologies for
+interpreting inherently incomplete specifications of collective human
+expectations,5 i.e., law can inform AI. Law provides detailed variegated
+examples of its application, generalizable precedents with explanations, and
+well-trained lawyers to solicit targeted model training and fine-tuning
+feedback to embed an ever-evolving comprehension of societal goals. As a source
+to learn goal specification and interpretation methods and (automatically
+updated and verified) societal knowledge, law provides an ontology for
+alignment. If AI begins to influence the law itself, this threatens the
+critical role that law as information could play in aligning AI with humans.
+This paper explores how this is increasingly a possibility.
+II. EXAMPLE: GPT AS LOBBYIST
+We use autoregressive large language models to systematically:
+1. Summarize bill summaries that are too long to fit into the context window
+of the model.
+2. Using either the original bill summary if it was not too long, or the
+summarized version, assess whether the bill may be relevant to a company based
+on a company’s description in its 10K filing.
+Provide an explanation for why the bill is relevant or not. Provide a
+confidence level to the overall answer.
+3. If the bill is deemed relevant to the company by the model, draft a letter
+to the sponsor of the bill arguing for changes to the bill.
+2 See, e.g., Frank Pasquale, New Laws of Robotics: Defending Human Expertise
+in the Age of AI (2020).
+3 See, e.g., Frank Pasquale, A Rule of Persons, Not Machines: The Limits of
+Legal Automation, George Washington Law Review (2019).
+4 See, John Nay, Law Informs Code: A Legal Informatics Approach to Aligning
+Artificial Intelligence with Humans, Northwestern Journal of Technology and
+Intellectual Property, Volume 20, Forthcoming (2023), available at SSRN:
+https://ssrn.com/abstract=4218031.
+5 For more on law as an information source on public attitudes and risks, see
+Richard H. McAdams, An Attitudinal Theory of Expressive Law (2000). For more
+on law as a coordinating mechanism, see Richard H. McAdams, A Focal Point
+Theory of Expressive Law (2000).
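The three steps above can be sketched as a simple loop over (bill, company) pairs. This is a hypothetical illustration, not the released code: `run_pipeline` and `query_model` are made-up names, `query_model` stands in for a call to the completion API (stubbed below), and the word-count cutoff is an assumed placeholder for the model's real token limit.

```python
def run_pipeline(bill, company, query_model, max_summary_words=1500):
    """Sketch of the three-step lobbying pipeline for one (bill, company) pair."""
    summary = bill["summary_text"]
    # Step 1: compress summaries that would overflow the context window.
    if len(summary.split()) > max_summary_words:
        summary = query_model("Summarize this bill summary:\n" + summary)
    # Step 2: ask whether the bill may be relevant to the company.
    relevance = query_model(
        "Is this bill potentially relevant to this company?\n"
        "Official title of bill: {}\nOfficial summary of bill: {}\n"
        "Company name: {}\nCompany business description: {}".format(
            bill["official_title"], summary,
            company["company_name"], company["business_description"])
    )
    result = {"relevance_raw": relevance, "letter": None}
    # Step 3: if deemed relevant, draft a letter to the bill's sponsor.
    if "ANSWER: YES" in relevance:
        result["letter"] = query_model(
            "Draft a letter to the sponsor of '{}' on behalf of {} "
            "arguing for changes to the bill.".format(
                bill["official_title"], company["company_name"])
        )
    return result
```

In the paper's setup each step is a separate completion request, so a failed or "NO" relevance answer short-circuits the (more expensive) letter-drafting call.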
+The model is provided with the following data, which is embedded in the
+prompts programmatically:
+Official title of bill {official_title}
+Official (or model generated if too long) summary of bill {summary_text}
+Official subjects of bill {subjects}
+Company name {company_name}
+Company business description {business_description} (the business description
+in the company’s SEC Form 10-K filing)
+We expect much higher accuracy of the model’s predictions if we were to
+provide it more data about a bill, and especially if we provide it more data
+about a company. This paper was focused on the minimal amount of data a model
+could leverage in order to compare across models.
+Here is the prompt provided to the model for each prediction:
+You are a lobbyist analyzing Congressional bills for their potential impacts
+on companies. Given the title and summary of the bill, plus information on the
+company from its 10K SEC filing, it is your job to determine if a bill is at
+least somewhat relevant to a company (in terms of whether it could impact the
+company if it was later enacted).
+Official title of bill: {official_title}
+Official summary of bill: {summary_text}
+Official subjects of bill: {subjects}
+Company name: {company_name}
+Company business description: {business_description}
+Is this bill potentially relevant to this company?
+Answer in this format:
+ANSWER: 'YES' or 'NO' (use all caps).
+EXPLANATION: the step-by-step reasoning you undertook to formulate a response.
+CONFIDENCE: integer between 0 and 100 for your estimate of confidence in your
+answer (1 is low confidence and 99 is high)
+Always guessing that a bill is not relevant to a company leads to accuracy of
+70.9% on our dataset, where the labels are whether a given company is relevant
+to a given proposed U.S. federal Congressional bill. GPT-3.5
+(text-davinci-003) obtains an accuracy of 75.3% (n = 485) and the immediately
+preceding state-of-the-art GPT-3 release (text-davinci-002) obtains an
+accuracy of 51.8% (n = 485). text-davinci-002 was state-of-the-art on most
+natural language related benchmark tasks6 until text-davinci-003 was released
+on November 28, 2022. These results suggest that, as models improve,
+performance on corporate lobbying related tasks could continue to rapidly
+improve.
+The model is relatively well calibrated in its confidence assessments. The
+accuracy of text-davinci-003 on predictions where its confidence was greater
+than “90” increases to 78.7% (n = 432; it only eliminates a small number of
+predictions). The accuracy of text-davinci-002 on predictions where its
+confidence was greater than “90” increases to 75% (n = 44), but that
+eliminates most of the predictions, rendering the overall output close to
+useless.
+In addition to predicting relevance and a confidence score, the prompt also
+elicits an explanation of its relevance decision. Here is an example of the
+model output:
+ANSWER: YES.
+EXPLANATION: Alkermes Plc develops and commercializes products designed to
+address the unmet needs of patients suffering from addiction and
+schizophrenia, which are both addressed in the bill. Additionally, the bill
+requires the Centers for Medicare & Medicaid Services (CMS) to negotiate with
+pharmaceutical companies regarding prices for drugs covered under the Medicare
+prescription drug benefit, which could potentially impact Alkermes Plc.
+CONFIDENCE: 95
+Finally, for all bills that are deemed relevant, we prompt the model to write
+a letter on behalf of the company.
+6 See, Percy Liang et al., Holistic Evaluation of Language Models, arXiv
+preprint (2022).
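A minimal sketch of how replies in this format can be parsed and how the confidence-filtered accuracy figures above can be computed. The function names and the regular expressions are assumptions inferred from the answer format shown, not the released code.

```python
import re

def parse_response(text):
    """Parse the ANSWER / EXPLANATION / CONFIDENCE fields from a model reply."""
    answer = re.search(r"ANSWER:\s*'?(YES|NO)'?", text)
    confidence = re.search(r"CONFIDENCE:\s*(\d+)", text)
    explanation = re.search(r"EXPLANATION:\s*(.*?)\s*CONFIDENCE:", text, re.S)
    return {
        "relevant": (answer.group(1) == "YES") if answer else None,
        "confidence": int(confidence.group(1)) if confidence else None,
        "explanation": explanation.group(1).strip() if explanation else "",
    }

def accuracy(predictions, labels, min_confidence=None):
    """Accuracy, optionally restricted to predictions above a confidence cutoff.

    Returns (accuracy, number of predictions kept), mirroring the paper's
    reporting of accuracy alongside the surviving n after filtering.
    """
    pairs = [(p["relevant"], y) for p, y in zip(predictions, labels)
             if min_confidence is None or p["confidence"] > min_confidence]
    if not pairs:
        return None, 0
    return sum(p == y for p, y in pairs) / len(pairs), len(pairs)
```

Reporting the surviving n alongside the filtered accuracy is what makes the text-davinci-002 result legible: its accuracy above confidence "90" looks acceptable, but n = 44 out of 485 shows the filter discards most predictions.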
+Here is an example letter:
+Dear [Sponsor of the Bill],
+I am writing on behalf of Alkermes Plc, a fully integrated, global
+biopharmaceutical company that applies its scientific expertise and
+proprietary technologies to research, develop and commercialize pharmaceutical
+products that are designed to address unmet medical needs of patients in major
+therapeutic areas.
+We are writing to express our support for the Medicare Negotiation and
+Competitive Licensing Act of 2019. We believe that this bill is an important
+step in ensuring that Medicare beneficiaries have access to the medications
+they need at a price they can afford.
+We are particularly supportive of the provisions in the bill that would
+require the Centers for Medicare & Medicaid Services (CMS) to negotiate with
+pharmaceutical companies regarding prices for drugs covered under the Medicare
+prescription drug benefit. We believe that this will help to ensure that the
+prices of these drugs are fair and reasonable.
+We are also supportive of the provisions in the bill that would allow for
+competitive licensing of drugs if the CMS is unable to negotiate the price of
+a drug. This will help to ensure that the prices of these drugs are kept in
+check and that Medicare beneficiaries have access to the medications they
+need.
+At Alkermes, we develop and commercialize products designed to address the
+unmet needs of patients suffering from addiction and schizophrenia. We have
+two key marketed products, ARISTADA and VIVITROL, which are used to treat
+these conditions. We believe that the provisions in the bill will help to
+ensure that our products are available to Medicare beneficiaries at a price
+they can afford.
+We would like to suggest that the bill be amended to include provisions that
+would provide additional incentives for pharmaceutical companies to negotiate
+with the CMS. We believe that this would help to ensure that the prices of
+drugs are kept in check and that Medicare beneficiaries have access to the
+medications they need.
+We thank you for your consideration and look forward to working with you to
+ensure that the Medicare Negotiation and Competitive Licensing Act of 2019 is
+passed in its amended form.
+Sincerely,
+[General Counsel of Alkermes Plc]
+III. A PROBLEM FOR AI ALIGNMENT
+There are two potential upsides of this advancement in AI as lobbyist. First,
+it may reduce human time spent on rote tasks. Second, it may reduce the costs
+of lobbying-related activities in a way that makes them differentially more
+affordable to non-profit organizations and individual citizens relative to
+well-funded organizations, which could “democratize” some aspects of influence
+(arguably, donations to campaigns are more influential than any
+natural-language-based task related to those discussed in this paper). There
+are many obvious potential downsides if AI systems develop instrumental
+power-seeking goals and use lobbying as a means to accomplish misaligned
+policies.
+The potential, non-obvious, downside we have focused on in this paper is that
+an extended lobbying capability may eventually enable AI systems to influence
+public policy toward outcomes that are not reflective of citizens’ actual
+views. This does not imply the existence of a strongly goal-directed agentic
+AI. There may be a slow drift, or otherwise emergent phenomena. AI lobbying
+activities could, in an uncoordinated manner, nudge the discourse toward
+public policies that are unaligned with what traditional human-driven policy
+activities would have pursued.
+Regulation and legislation embed world knowledge and human values into rules
+and standards. Legislation expresses a significant amount of information about
+the values of citizens,7 “for example, by banning employment discrimination
+against LGBT workers, the legislature may communicate pervasive attitudes
+against such employment practices.”8 And, “the Endangered Species Act has a
+special salience as a symbol of a certain conception of the relationship
+between human beings and their environment, and emissions trading systems are
+frequently challenged because they are said to ‘make a statement’ that
+reflects an inappropriate valuation of the environment.”9 Legislation is
+currently largely reflective of citizen beliefs.
+The second-best source of citizen attitudes is arguably a poll, but polls are
+not available at the local level, are only conducted on mainstream issues, and
+the results are highly sensitive to their wording and sampling techniques.
+Legislation expresses higher fidelity, more comprehensive, and trustworthy
+information because the legislators “risk their jobs by defying public opinion
+or simply guessing wrong about it. We may think of legislation therefore as a
+handy aggregation of the polling data on which the legislators relied,
+weighted according to their expert opinion of each poll’s reliability.”10
+Legislation and associated agency rule-making also express a significant
+amount of information about the risk preferences and risk tradeoff views of
+citizens,11 “for example, by prohibiting the use of cell phones while driving,
+legislators may reveal their beliefs that this combination of activities
+seriously risks a traffic accident.” All activities have some level of risk,
+and making society-wide tradeoffs about which activities are deemed to be
+“riskier” relative to the perceived benefits of the activity is ultimately a
+sociological process with no objectively correct ranking.
+7 See, e.g., Cass R. Sunstein, Incommensurability and Valuation in Law, 92
+Mich. L. Rev. 779, 820-24 (1994); Richard H. Pildes & Cass R. Sunstein,
+Reinventing the Regulatory State, 62 U. Chi. L. Rev. 1, 66-71 (1995); Cass R.
+Sunstein, On the Expressive Function of Law, Univ. of Penn. L. Rev., 144.5
+(1996); Dhammika Dharmapala & Richard H. McAdams, The Condorcet Jury Theorem
+and the Expressive Function of Law: A Theory of Informative Law, American Law
+and Economics Review 5.1 1 (2003).
+8 Richard H. McAdams, The Expressive Powers of Law, Harv. Univ. Press (2017)
+at 137 [Hereinafter McAdams, The Expressive Powers of Law].
+9 Cass R. Sunstein, On the Expressive Function of Law, Univ. of Penn. L. Rev.,
+144.5 (1996) at 2024.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/sdAzT4oBgHgl3EQfPPvE/content/2301.01181v1.pdf'} +page_content=' The cultural process of prioritizing risks is reflected in legislation and its subsequent implementation in regulation crafted by domain experts.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/sdAzT4oBgHgl3EQfPPvE/content/2301.01181v1.pdf'} +page_content=' In these ways, law provides the information AI systems need for societal alignment.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/sdAzT4oBgHgl3EQfPPvE/content/2301.01181v1.pdf'} +page_content=' However, if AI significantly influences the law itself, the only known democratically legitimate societal-AI alignment process12 would be disrupted.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/sdAzT4oBgHgl3EQfPPvE/content/2301.01181v1.pdf'} +page_content=' 10 McAdams, The Expressive Powers of Law, at 146.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/sdAzT4oBgHgl3EQfPPvE/content/2301.01181v1.pdf'} +page_content=' 11 McAdams, The Expressive Powers of Law, at 138.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/sdAzT4oBgHgl3EQfPPvE/content/2301.01181v1.pdf'} +page_content=' 12 See, John Nay, Law Informs Code: A Legal Informatics Approach to Aligning Artificial Intelligence with Humans, Northwestern Journal of Technology and Intellectual Property, Volume 20, Forthcoming (2023) Available at SSRN: https://ssrn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/sdAzT4oBgHgl3EQfPPvE/content/2301.01181v1.pdf'} +page_content='com/abstract=4218031.' 
diff --git a/stE1T4oBgHgl3EQfjgQE/content/tmp_files/2301.03262v1.pdf.txt b/stE1T4oBgHgl3EQfjgQE/content/tmp_files/2301.03262v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..ce23eb765a3bbe168794b7ddc1f3f0be842d2ab7
--- /dev/null
+++ b/stE1T4oBgHgl3EQfjgQE/content/tmp_files/2301.03262v1.pdf.txt
@@ -0,0 +1,954 @@

Network Slicing via Transfer Learning aided Distributed Deep Reinforcement Learning

Tianlun Hu∗‡, Qi Liao∗, Qiang Liu†, and Georg Carle‡
∗Nokia Bell Labs, Stuttgart, Germany
†University of Nebraska Lincoln, United States
‡Technical University of Munich, Germany
Email: ∗‡tianlun.hu@nokia.com, ∗qi.liao@nokia-bell-labs.com, †qiang.liu@unl.edu, ‡carle@net.in.tum.de

Abstract—Deep reinforcement learning (DRL) has been increasingly employed to handle the dynamic and complex resource management in network slicing. The deployment of DRL policies in real networks, however, is complicated by heterogeneous cell conditions. In this paper, we propose a novel transfer learning (TL) aided multi-agent deep reinforcement learning (MADRL) approach with inter-agent similarity analysis for inter-cell inter-slice resource partitioning. First, we design a coordinated MADRL method with information sharing to intelligently partition resources among slices and manage inter-cell interference. Second, we propose an integrated TL method to transfer the learned DRL policies among different local agents to accelerate policy deployment. The method is composed of a new domain and task similarity measurement approach and a new knowledge transfer approach, which resolve the problems of from whom to transfer and how to transfer.
We evaluate the proposed solution with extensive simulations in a system-level simulator and show that our approach outperforms the state-of-the-art solutions in terms of performance, convergence speed, and sample efficiency. Moreover, by applying TL, we achieve an additional gain of over 27% compared with the coordinated MADRL approach without TL.

I. INTRODUCTION

Network slicing is a key technique in 5G and beyond that enables network operators to support a variety of emerging network services and applications, e.g., autonomous driving, metaverse, and machine learning. The virtual networks (aka network slices) are dynamically created on common network infrastructure, e.g., base stations, and are highly customized in different aspects to meet the diverse performance requirements of these applications and services. With the ever-increasing network deployment, e.g., of small cells, the traffic of slices and the inter-cell interference in radio access networks become more dynamic and complex. Conventional model-based solutions, e.g., linear programming or convex optimization, can hardly handle the ever-complicating resource management problem.

Recent advances in machine learning, especially deep reinforcement learning (DRL) [1], [2], have shown a promising capability to deal with dynamic and high-dimensional networking problems. Machine learning techniques, as model-free approaches, learn from historical interactions with the network and require no prior knowledge, e.g., mathematical models. Several works formulate resource management problems as Markov decision processes (MDPs), which are then solved using DRL to derive a centralized policy with global observations of the network. As the network scale grows, the action and state spaces of the centralized problem increase exponentially, which challenges the convergence and sample efficiency of DRL.
Multi-agent deep reinforcement learning (MADRL) [3], [4] has been exploited to address this issue: it creates and trains multiple cooperative DRL agents, where each DRL agent focuses on an individual site or cell. However, training all individual DRL agents from scratch can still be costly and time-consuming, e.g., due to expensive queries to real networks and to environments that appear unstable from the perspective of individual DRL agents.

Recently, transfer learning (TL) [5] based methods have been increasingly studied to improve sample efficiency and model reproducibility in the broader machine learning field [6]–[8]. The basic idea of TL is to utilize prior knowledge from prelearned tasks to benefit the training process of new tasks. For example, the resource partitioning policy of a cell can be transferred to another cell when the two share similar network settings, e.g., bandwidth, transmit power, and traffic pattern. Generally, several questions must be answered before using TL methods: what to transfer, from whom to transfer, and how to transfer. Existing TL methods mostly focus on supervised machine learning, e.g., computer vision and natural language processing [9], and provide limited insight into applying TL in DRL tasks [10]–[13]. Therefore, it is imperative to study how TL improves the performance of MADRL, in terms of sample efficiency and fine-tuning cost, in the inter-cell resource partitioning problem.

In this paper, we propose a novel TL aided MADRL approach with domain similarity analysis for inter-slice resource partitioning. First, we design a coordinated MADRL method for inter-cell resource partitioning in network slicing, where DRL agents share local information with each other to mitigate inter-cell interference. The objective of MADRL is to maximize the satisfaction level of per-slice service requirements in terms of average user throughput and delay in each cell.
Second, we design an integrated TL method to transfer the learned DRL policies among different agents to accelerate policy deployment, where the new method consists of two parts. On the one hand, we propose a feature-based inter-agent similarity analysis approach, which measures the domain and task difference by extracting representative feature distributions in a latent space. On the other hand, we propose a new knowledge transfer approach that combines model (policy) and instance transfer. The main contributions of this paper are summarized as follows:
• We design a coordinated MADRL method for the inter-cell resource partitioning problem in network slicing.
• We design a novel inter-agent similarity analysis approach, based on features extracted by a variational auto-encoder (VAE), to evaluate both domain and task similarity between two reinforcement learning agents.
• We design a new knowledge transfer approach that combines model (policy) and instance transfer from the selected source agent to the target agent.
• We evaluate the performance of the proposed solution with extensive simulations in a system-level simulator. The results show that, by applying TL, we achieve an additional gain of over 27% compared with the coordinated MADRL approach without TL. Moreover, the performance gain achieved by TL is more significant in the low-data regime.

arXiv:2301.03262v1 [cs.NI] 9 Jan 2023

II. SYSTEM MODEL AND DEFINITIONS

Figure 1: Dynamic multi-cell slicing resource partitioning

We consider a network consisting of a set of cells K := {1, 2, . . . , K} and a set of slices N := {1, 2, . . . , N}. Each slice n ∈ N has predefined average user throughput and delay requirements, denoted as φ∗_n and d∗_n, respectively. The network system runs on discrete time slots t ∈ N0. As illustrated in Fig.
1, network operation and maintenance (O&M) adapts the inter-slice resource partitioning for all cells to provide per-slice resource budgets to each cell periodically. Then, within each cell, the radio access network (RAN) scheduler uses the provided resource budgets as constraints and performs resource scheduling and physical resource block (PRB) allocation. In this paper, we focus on the inter-cell inter-slice resource partitioning problem in network O&M.

Considering the diverse slice requirements and dynamic network conditions, we model the multi-cell resource partitioning system as a set of K distributed MDPs M := {M_1, ..., M_K}, with M_k := {S_k, A_k, P_k(·), r_k(·), γ_k} defined for each agent k ∈ K (with a slight abuse of notation, hereafter we use k for cell and agent interchangeably). S_k and A_k denote the state space and action space, respectively. P_k(·) : S_k × A_k × S_k → [0, 1] is the transition probability over S_k and A_k for cell k. r_k : S_k × A_k → R is the reward function, which evaluates the network service of all slices in cell k, and γ_k denotes the discount factor for the cumulative reward.

At each time step t, agent k collects state s_k(t) ∈ S_k and decides on an action a_k(t) ∈ A_k according to policy π_k : S_k → A_k, which gives the per-slice resource partitioning ratios a_{k,n} ∈ [0, 1] for n ∈ N while respecting the inter-slice resource constraint. Thus, the local action space A_k is

    A_k := { a_k | a_{k,n} ∈ [0, 1], ∀n ∈ N;  Σ_{n=1}^{N} a_{k,n} = 1 }.   (1)

For each cell k ∈ K, our objective is to maximize the minimum service satisfaction level in terms of average user throughput and delay (φ∗_n, d∗_n) over all slices. Thus, for each agent k, we define the local reward function based on the observed per-slice average user throughput φ_{k,n}(t) and delay d_{k,n}(t) at time t as

    r_k(t) := min_{n∈N} min( φ_{k,n}(t)/φ∗_{k,n},  d∗_{k,n}/d_{k,n}(t),  1 ).
(2)

The reward drops below 1 when the actual average throughput or delay of any slice fails to fulfill the requirements. Note that the reward is upper bounded by 1 even if all slices achieve better performance than required, in order to encourage more efficient resource utilization. The second term in (2) is inversely proportional to the actual delay, i.e., if the delay is longer than required, this term is lower than 1.

III. PROBLEM FORMULATION

The Reinforcement Learning Problem: The problem is to find a policy π_k : S_k → A_k for each k ∈ K that dynamically predicts the optimal inter-slice resource partitioning a_k(t) ∈ A_k based on the local state s_k(t) ∈ S_k, so as to maximize the expected cumulative discounted reward r_k(t) defined in (2) over a finite time horizon T. The problem is given by:

    max_{π_k; a_k(t)∈A_k}  E_{π_k} [ Σ_{t=0}^{T} γ_k^t r_k( s_k(t), a_k(t) ) ],  ∀k ∈ K,   (3)

where A_k is defined in (1).

In our previous work [14], we proposed a coordinated multi-agent DRL approach that transforms an MADRL problem into distributed DRL problems similar to (3), where information extracted from neighboring cells is included in the state observation to better capture the inter-agent dependency. However, training all local agents in parallel from scratch can be costly and time-consuming. Moreover, the trained models are sensitive to environment changes, and the retraining cost can be high.

Thus, in this paper, we raise the following new questions: Can we reuse the knowledge in a pretrained model? When is the knowledge transferable? And, most importantly, how do we transfer the gained knowledge from one agent to another?

The Transfer Learning Problem: To tackle the transfer learning problem, let us first introduce two definitions, domain and task, in the context of reinforcement learning.

A domain D := {S, P(s)} consists of a state feature space S and its probability distribution P(s), for s ∈ S.
A task T := {A, π(·)} consists of an action space A and a policy function π : S → A.

Thus, our inter-agent transfer learning problem is to find the optimal source agent among a set of pretrained agents and transfer its knowledge (pretrained model and collected instances) to the target agent, such that problem (3) can be solved in the target agent with fast convergence and a limited number of samples. In particular, the problem is defined in Problem 1.

Problem 1. Given a set of pretrained source agents K′ ⊂ K with source domains D^(S) := { D_i^(S) : i ∈ K′ } and pretrained tasks T^(S) := { T_i^(S) : i ∈ K′ }, and given any target agent k ∉ K′ with target domain D_k^(T) and untrained task T_k^(T), find the optimal source agent i∗_k ∈ K′ for target agent k to transfer knowledge from, such that

    i∗_k := arg max_{i∈K′}  E_{π_k} [ Σ_{t=0}^{T} γ_k^t r_k( s_k(t), a_k(t) ) ],  with π_k initialized as π_k^(0) = Λ( π_i^(S) ),   (4)
    s.t. (s_k, a_k) ∈ Γ( D_i^(S), D_k^(T), A_i^(S), A_k^(T) ),

where Λ(π_i^(S)) is the policy transfer strategy, which maps a pretrained source policy π_i^(S) to the initial target policy π_k^(0), and Γ(D_i^(S), D_k^(T), A_i^(S), A_k^(T)) is the instance transfer strategy, which selects instances from the source agent, combines them with the instances experienced by the target agent, and saves them in the replay buffer for model training or fine-tuning in the target agent. More details about the transfer learning strategies are given in Section IV-C.

IV. PROPOSED SOLUTIONS

In this section, we first present a distributed MADRL approach to solve the slicing resource partitioning problem in (3). Then, to solve problem (4), i.e., to find the optimal source agent, we propose a novel inter-agent similarity analysis approach based on features extracted with a VAE.
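To make the formulation concrete, the per-cell reward (2), which the objectives (3) and (4) maximize, can be sketched in a few lines of Python; the slice measurements below are made-up values, not outputs of the paper's simulator.

```python
def cell_reward(throughput, delay, thr_req, delay_req):
    """Eq. (2): worst-case per-slice satisfaction level, capped at 1."""
    return min(
        min(phi / phi_req, d_req / d, 1.0)
        for phi, d, phi_req, d_req in zip(throughput, delay, thr_req, delay_req)
    )

# hypothetical measurements for N = 2 slices: slice 2 misses its throughput target
r = cell_reward(throughput=[4.2, 2.1], delay=[1.5, 0.8],
                thr_req=[4.0, 3.0], delay_req=[2.0, 1.0])
print(r)  # about 0.7, set by the worst slice
```

The outer min makes the reward a max-min objective: a single under-served slice caps the whole cell's reward, while over-fulfilled slices cannot raise it above 1.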
Finally, for inter-agent transfer learning, we introduce a transfer learning strategy that combines model (policy) transfer and instance transfer.

A. Coordinated MADRL Approach

As stated in (3), the distributed DRL approach allows each agent to learn a local policy and make its own decision on inter-slice resource partitioning based on local observations. Compared with centralized DRL approaches, distributed approaches reduce the state and action spaces and significantly accelerate training. However, local observation alone cannot capture the inter-cell dependencies and provide sufficient information to achieve the globally optimal solution. Thus, we proposed in [14] a distributed DRL approach with inter-agent coordination, which keeps the model complexity low while including extracted information from neighboring cells to capture the inter-cell interference. We briefly summarize the coordinated distributed DRL approach below, because we would like to focus on the main contribution of this paper, namely inter-agent transfer learning. For more details, readers are referred to our previous work [14].

Each local agent k observes a local state s′_k, which contains the following network measurements:
• Per-slice average user throughput {φ_{k,n} : n ∈ N};
• Per-slice network load {l_{k,n} : n ∈ N};
• Per-slice number of users {u_{k,n} : n ∈ N}.

With the three slice-specific features defined above, the local state s′_k has dimension 3N. Additionally, to better capture the inter-cell dependencies and estimate the global network performance, we introduce an inter-agent coordination mechanism through network information sharing among agents. Each agent k broadcasts a message m_k to its neighboring group of agents, denoted by K_k; accordingly, each agent k receives a collection of messages m_k := [m_i : i ∈ K_k] ∈ R^{Z(m)}.
Instead of using all received messages in m_k, we propose to extract useful information c_k ∈ R^{Z(c)} so as to keep the model complexity low. We aim to find a feature extractor g : R^{Z(m)} → R^{Z(c)} : m_k ↦ c_k such that Z(c) ≪ Z(m). Then, we include the features extracted from the shared messages in the local state: s_k := [s′_k, c_k].

Knowing that the inter-agent dependencies are mainly caused by inter-cell interference based on cell load coupling [15], we let each cell k share its per-slice load l_{k,n}, ∀n ∈ N, with its neighboring cells. Then, we compute the extracted information c_k as the average per-slice neighboring load. Namely, we define a deterministic feature extractor, given by:

    g_k : R^{N|K_k|} → R^N : [l_{i,n} : n ∈ N, i ∈ K_k] ↦ c_k(t),
    with c_k(t) := [ (1/|K_k|) Σ_{i∈K_k} l_{i,n}(t) : n ∈ N ].   (5)

With the extended local state including the inter-agent shared information, we can use classical DRL approaches, e.g., actor-critic algorithms such as Twin Delayed Deep Deterministic policy gradient (TD3) [16], to solve (3).

Figure 2: Variational autoencoder

B. Integrated TL with Similarity Analysis

The distributed DRL approach introduced in Section IV-A yields a set of pretrained local agents. Still, given a target cell k, e.g., a newly deployed cell or an existing cell whose environment has changed, more questions need to be answered: Can we transfer the prelearned knowledge from at least one of the pretrained agents? Which source cell provides the most transferable information? How do we transfer the knowledge?

To solve the transfer learning problem in (4), we develop a distance measure D_{i,k} to quantify the inter-agent similarity between a source agent i and a target agent k. We aim to transfer the knowledge from the source agent with the highest similarity (reflected by the lowest distance measure).
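As a concrete illustration of the coordination scheme in Section IV-A, the extractor (5) and the extended state s_k = [s′_k, c_k] can be sketched in numpy; the array shapes and load values below are illustrative only.

```python
import numpy as np

def neighbor_load_feature(neighbor_loads):
    """Eq. (5): per-slice load averaged over the |K_k| neighboring cells.
    neighbor_loads has shape (num_neighbors, N)."""
    return np.asarray(neighbor_loads, dtype=float).mean(axis=0)

def extended_state(local_state, neighbor_loads):
    """s_k := [s'_k, c_k] -- 3N local slice features plus N averaged neighbor loads."""
    return np.concatenate([np.asarray(local_state, dtype=float),
                           neighbor_load_feature(neighbor_loads)])

# two neighbors, N = 2 slices (made-up loads)
c_k = neighbor_load_feature([[0.6, 0.2], [0.4, 0.4]])  # per-slice means
```

Averaging keeps Z(c) = N regardless of how many neighbors report, which is exactly why the extracted feature satisfies Z(c) ≪ Z(m) as the neighborhood grows.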
The ideal approach to analyzing the domain and task similarity between two agents would be to obtain their probability distributions over states, P(s), and derive the conditional probability distribution P(a|s). However, the major challenge here lies in the limited samples available at the target agent. Considering that the target agent is newly deployed, no information is available about its policy P(a|s), and its P(s) is very biased, because all samples are collected under the default configuration (i.e., constant actions).

Thus, we need to design a distance measure that copes with very limited and biased samples at the target agent, without any information about its policy P(a|s). Our idea is to derive and compare the joint state and reward distribution under the same default action a′, P(s, r|a = a′), in both the source and the target agent. The rationale is that, when applying the actor-critic-based DRL architecture, the critic estimates the Q value Q_π(a, s) based on action and state; hence, the conditional probability P(r|s, a) should provide useful information about the policy. With a = a′, we can consider estimating P(r|s, a = a′). To efficiently capture the information for both domain similarity (based on P(s|a = a′)) and task/policy similarity (based on P(r|s, a = a′)), we propose to estimate the joint probability P(s, r|a = a′) = P(r|s, a = a′)P(s|a = a′).

Sample collection: To estimate the distance between the distributions P(s, r|a = a′) of the source and target agents, we use all available samples from the target agent k under the default action a′, X_k = {(s_k(n), r_k(n))_{a_k(n)=a′} : n = 1, . . . , N_k}, and select a subset of the samples from the source agent i with the same default action, X_i = {(s_i(n), r_i(n))_{a_i(n)=a′} : n = 1, . . . , N_i}.
Note that in this subsection we slightly abuse notation by using n as the sample index and N_k as the number of samples with the default action collected from agent k.

Feature extraction with VAE: To extract representative features from the high-dimensional vector [s, r], we apply a VAE [17] to map the samples into a low-dimensional latent space. As Fig. 2 illustrates, for each sample x := [s, r] ∈ X, the encoder of the VAE estimates an approximate posterior in the latent space Z as a multivariate Gaussian N(µ, diag(σ)), where diag(·) denotes the diagonal matrix. The decoder samples a latent variable z ∈ Z from the approximate distribution, z ∼ N(µ, diag(σ)), and outputs a reconstructed sample x̂ by training on the loss function

    L := ‖x − x̂‖² + α · D_KL( N(µ, diag(σ)) ‖ N(0, I) ),   (6)

where α is a weight factor and D_KL denotes the Kullback-Leibler (KL) divergence.

Inter-agent similarity analysis: Since the VAE does not directly provide the probability distribution P(x), we utilize the extracted features in the latent space to evaluate the inter-agent similarity. Considering the limited number of samples (only those under the default action), we train a general VAE model on the samples from all candidate source agents and the target agent, i.e., X = ∪_{j∈K′∪{k}} X_j. The idea is to extract latent features from the samples of all relevant agents with one general encoder and to distinguish the agents within a common latent space. Thus, for each sample x_n ∈ X, we can derive its extracted features, i.e., the posterior distribution P(z_n|x_n) = N(µ_n, diag(σ_n)). We denote the extracted latent space for agent k by Z_k.
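For the diagonal-Gaussian posterior above, the KL term in (6) has a well-known closed form; a minimal numpy sketch of the loss follows, reading diag(σ) as a covariance with diagonal entries σ² (this reading, the weight α, and the toy values are our assumptions, not taken from [17]).

```python
import numpy as np

def vae_loss(x, x_hat, mu, sigma, alpha=1.0):
    """Eq. (6): squared reconstruction error plus
    alpha * KL( N(mu, diag(sigma^2)) || N(0, I) ), using the closed form
    for a diagonal Gaussian against the standard normal."""
    recon = np.sum((np.asarray(x, dtype=float) - np.asarray(x_hat, dtype=float)) ** 2)
    mu = np.asarray(mu, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    kl = 0.5 * np.sum(sigma**2 + mu**2 - 1.0 - np.log(sigma**2))
    return recon + alpha * kl

# perfect reconstruction with a standard-normal posterior gives zero loss
loss = vae_loss(x=[1.0, 2.0], x_hat=[1.0, 2.0], mu=[0.0, 0.0], sigma=[1.0, 1.0])
```

The KL term pulls every posterior toward N(0, I), which is what makes the latent codes of different agents comparable within one common latent space.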
Next, we can measure the inter-agent distance between an arbitrary source agent i and target agent k by calculating the KL divergence between the latent variables extracted from their collected samples:

    D_{i,k} := (1/(N_i N_k)) · Σ_{(µ_n,σ_n)∈Z_i; (µ_m,σ_m)∈Z_k} D_KL( N(µ_n, diag(σ_n)) ‖ N(µ_m, diag(σ_m)) ).   (7)

This requires computing the KL divergence for every pair of samples (n, m) with n ∈ X_i and m ∈ X_k, which can be computationally intensive. However, since both distributions are Gaussian, the divergence has a closed-form expression (shown later in (8)). Moreover, in our experiments we observed σ_n → 0 for nearly all collected samples x_n ∈ X, i.e., their variances are extremely small (below 10^−5 in our observations). Thus, for our problem, we can evaluate the distance measure more efficiently based on the following lemma.

Lemma 1. Given two multivariate Gaussian distributions p = N(µ_n, Σ_n) and q = N(µ_m, Σ_m), where µ_n, µ_m ∈ R^L, Σ_n = Σ_m = diag(σ) ∈ R^{L×L}, and every entry of σ equals a small positive constant σ ≪ 1, the KL divergence D_KL(p‖q) is proportional to Σ_{l=1}^{L} (µ_{n,l} − µ_{m,l})².

Proof. It is easy to derive that
Integrated Transfer Learning Approach +In general, the prelearned knowledge can be transferred +from a source agent i to the target agent k with various policy +transfer strategies Λ(·) and instance transfer strategy Γ(·): +• Model transfer: The policy transfer strategy Λ(·) simply +initializes the target agent’s policy π(0) +k +by loading the pa- +rameters (e.g., weights of the pretrained neural networks) +of the pretrained policy π(S) +i +from the source agent i. +• Feature transfer: The policy transfer strategy Λ(·) keeps +partial information extracted from the source agent’s +pretrained policy π(S) +i +. In particular, the target agent +loads partial of the layers (usually the lower layers) of the +pretrained neural networks of π(S) +i +, while leaving the rest +of them to be randomly initialized. Then, during training, +the loaded layers are frozen and only the randomly +initialized layers are fine-tuned with the instances newly +collected by the target agent. +• Instance transfer: The instance transfer strategy Γ(·) +transfers the collected instances from the source agent i +to the target agent k and saves them in the target agent’s +replay buffer. Then, the target agent trains a policy from +scratch with randomly initialized parameters and mixed +instances collected from both source and target agents. +The above-mentioned knowledge from the source domain +and source task can be transferred separately or in a combined +manner. In this paper, we propose the integrated transfer +method with both model and instance transfer. Specifically, +the target agent k initializes its local policy π(0) +k +by loading +the pretrained policy of the source agent π(S) +i +and fine-tunes +the policy by sampling from the replay buffer containing +both types of instances: the instances transferred from the +source agent and those locally experienced. 
Here, we skip +the feature transfer because it practically performs well only +when the similarity between the source domain/task and target +domain/task is very high. Although this assumption may hold +for some regression and classification tasks, we empirically +find that it fails in this context of MADRL. +V. PERFORMANCE EVALUATION +In this section, we evaluate the performance of the proposed +solution within a system-level simulator [18]. The simulator + +Figure 3: Traffic mask to imitate the time +varying network traffic +Figure 4: Comparing reward during the +training process +Figure 5: Comparing CDF of minimum slice +throughput satisfaction +achieves a great accuracy in imitating the real network systems +with configurable user mobility, network slicing traffic and +topology. In addition, we introduce a traffic-aware baseline +which allocates resource proportionally to the data traffic +demand per slice. Note that the baseline assumes perfect +information about per-cell per-slice traffic demands, which +provides already very good results. +1) Network settings: We build a radio access network +with 4 three-sector sites (i.e., K = 12 cells). All cells are +deployed using LTE radio technology with 2.6 GHz under +a realistic radio propagation model Winner+ [19]. Each cell +has N = 4 slices with diverse per-slice requirements in +terms of average user throughput and delay. In the cells with +label 1, 2, 3, 7, 8, 9, we define per-slice average throughput +requirements of φ∗ +1 = 4 MBit/s, φ∗ +2 = 3 MBit/s, φ∗ +3 = 2 +MBit/s, and φ∗ +4 = 1 MBit/s respectively, and per-slice delay +requirements of d∗ +1 = 3 ms, d∗ +2 = 2 ms, d∗ +3 = d∗ +4 = 1 +ms. In the cells with label 4, 5, 6, 10, 11, 12, we define per- +slice throughput requirements as φ∗ +1 = 2.5 MBit/s, φ∗ +2 = 2 +MBit/s, φ∗ +3 = 1.5 MBit/s, and φ∗ +4 = 1 MBit/s, and delay +requirements of d∗ +n = 1 ms, ∀n ∈ N. All cells have the same +radio bandwidth of 20 MHz. 
+We define four groups of user equipment (UE) associated +to four slices in each cell respectively, each UE group has the +maximum size of 32 and moves randomly among the defined +network scenario. To mimic dynamic behavior of real user +traffic, we apply a varying traffic mask τn(t) ∈ [0, 1] to each +slice to scale the total number of UEs in each cell, Fig. 3 +shows the traffic mask in first 200 steps. +2) DRL training configuration: For MADRL training, we +implemented TD3 algorithm at each local agent using multi- +layer perception (MLP) architecture for actor-critic networks. +In each TD3 model, both actor and critic neural works consist +of two layers with the number of neurons as (48, 24) and +(64, 24) respectively. The learning rates of actor and critic +are 0.0005 and 0.001 accordingly with Adam optimizer and +training batch size of 32. We set the discount factor as +γ = 0.1, since the current action has stronger impact on +instant network performance than future observation. As for +the training, for distributed DRL agents we applied 3000 steps +for exploration, 5500 steps for training, and final 250 steps for +evaluation. For TL training process, we apply the same model +setups as DRL approaches, while only setting 4000 steps for +training and 250 for evaluation since knowledge transfer save +the time for exploration. +3) Comparing DRL to TL aided approach: In Fig. 4 we +compare the evolution of reward during the training processes +among the baseline, DRL approach (proposed in Section +IV-A), and TL approaches when transferred from source agent +with low and high similarity (proposed in Section IV-B and +IV-C), respectively. For DRL, we present the first 4000 step, +i.e., the same training time as TL approaches with solid line +and the rest training curve with dashed line. +As shown in Fig. 
4, the distributed DRL approach learns to +achieve similar reward as baseline after a lengthy exploration +phase, while both TL approaches start with much higher start +compared to DRL. After a short fine-tuning period, the TL +approaches outperform the baseline with higher robustness, +especially during the period with higher traffic demands +and strong inter-cell interference where baseline has sharp +performance degradation. Besides, in comparison between the +TL from agents with different similarity measure, we observe +that with higher similarity, TL provides higher start at the +early stage of training, while both of them converge to similar +performance after the training converges. +For performance evaluation, we compare the statistical +results on minimum per slice throughput satisfaction level and +maximum per slice delay, respectively, among all cells among +the methods baseline, distributed DRL and the proposed TL +approach after convergence. Fig. 5 illustrated the empirical +complementary cumulative distribution function (CDF) which +equals 1 − FX(x) where FX(x) is the CDF of minimum +per slice throughput satisfaction level. We observe that the +TL approach provides the best performance comparing to +others by achieving only about 12% fail to satisfy 0.95 of +the requirement, while converged DRL and baseline conclude +19% and 25% failure rate respectively. By average satisfaction +level, the TL approach conclude 0.92 while DRL and baseline +only provide 0.90 and 0.87. Similar observation can be made +from Fig. 6, which illustrates the CDF of maximum slice delay +in ms. The TL approach provides 1.5 ms maximum average +per-slice delay, while DRL achieves 1.7 ms and baseline +achieves 1.8 ms. 
+4) Inter-agent similarity analysis: We implemented the +similarity analysis method introduced in Section IV-B with +a VAE model in MLP architecture, both networks of encoder +and decoder consist of 3 layers with number of neurons as +(64, 24, 4) and (4, 24, 64) respectively. To achieve a good +trade-off between low dimensional latency space and accurate +reconstruction with VAE, we map the original sample x ∈ R17 +to the latent variable z ∈ R4. +Fig. 7 illustrates the results of inter-agent similarity analysis +as a metric of distance measure proposed in (7). It shows that +our proposed method can distinguish cells with different per- +slice service quality requirements and gather the cells with +similar joint state-reward distribution. +5) Dependence of TL performance on distance measure: +In Fig. 8 we compare the benefits of TL in training process +by transferring knowledge from source agents with different +average inter-agent distance measures. The TL gains are + +1.0 +0.8 +Traffic Mask +0.6 +0.4 +Slice 1 +Slice 2 +0.2 +Slice 3 +Slice 4 +0 +25 +50 +75 +100 +125 +150 +175 +200 +Timestamp0.9 +0.8 +Reward +0.7 +Baseline +0.6 +DRL +TL - High Sim +0.5 +TL-Low Sim +0 +2000 +4000 +6000 +8000 +Timestamp1.0 +0.9 +Complementary +0.8 +0.7 +0.6 +TL +0.5 +Baseline +DRL +0.4 +1.000 +0.975 +0.950 +0.925 +0.900 +0.875 +0.850 +0.825 +0.800Figure 6: Comparing CDF of maximum slice +delay +Figure 7: Inter-agent distance measure +Figure 8: TL performance gain depending on +distance measure +derived by comparing the reward to DRL approach at the +same training steps. The results show that before 200 steps +of TL training, the TL approaches with the lowest distance +measure provides about 3% higher gain than the one with the +largest distance. As the training process continues, the gains +in all TL approaches increase with local fine-tuning and the +difference between transferring from highly similar and less +similar agents is getting smaller. 
However, TL from the most similar agent provides higher gains for all training steps.
6) Key Takeaways: We summarize the takeaways from the numerical results as follows:
• All distributed DRL-based approaches achieve better per-slice network service than the traffic-aware baseline after convergence. Moreover, the TL schemes outperform the conventional DRL approach in terms of convergence rate, initial performance, and converged performance.
• Our proposed VAE-based similarity measure well quantifies the distance between agents and can be used to suggest a mapping from the defined distance measure to the transfer learning performance gain.
• The difference between the gains achieved by TL from the highly similar and the less similar agents is more significant when the number of training steps is low (i.e., with limited online training samples). Although the advantage of transferring from a highly similar agent over a less similar agent decreases as the number of online training steps increases, a slight performance gain is always achieved by transferring knowledge from the most similar source agent.
VI. CONCLUSION
In this paper, we formulated the dynamic inter-slice resource partitioning problem to optimize the requirement satisfaction level of all slices in each cell. To tackle inter-cell interference, we proposed a coordinated MADRL method with an information-sharing coordination scheme. We also proposed a novel integrated TL method that transfers the learned DRL policies among different local agents to accelerate policy deployment; it is built on a new inter-agent similarity measurement approach and a new knowledge transfer approach. We evaluated the proposed solution with extensive simulations in a system-level simulator, where the results show that our approach outperforms conventional DRL solutions.
ACKNOWLEDGMENT
This work was supported by the German Federal Ministry of Education and Research (BMBF) project KICK [16KIS1102K].
REFERENCES
[1] Y. Liu, J. Ding, and X. Liu, “A constrained reinforcement learning based approach for network slicing,” in 2020 IEEE 28th International Conference on Network Protocols (ICNP), 2020, pp. 1–6.
[2] Q. Liu, T. Han, N. Zhang, and Y. Wang, “DeepSlicing: Deep reinforcement learning assisted resource allocation for network slicing,” in IEEE GLOBECOM, 2020, pp. 1–6.
[3] N. Zhao, Y.-C. Liang, D. T. Niyato, Y. Pei, M. Wu, and Y. Jiang, “Deep reinforcement learning for user association and resource allocation in heterogeneous cellular networks,” IEEE Transactions on Wireless Communications, vol. 18, pp. 5141–5152, 2019.
[4] Y. Shao, R. Li, Z. Zhao, and H. Zhang, “Graph attention network-based DRL for network slicing management in dense cellular networks,” IEEE WCNC, pp. 1–6, 2021.
[5] S. J. Pan and Q. Yang, “A survey on transfer learning,” IEEE Transactions on Knowledge and Data Engineering, vol. 22, no. 10, pp. 1345–1359, 2009.
[6] C. T. Nguyen et al., “Transfer learning for future wireless networks: A comprehensive survey,” arXiv preprint arXiv:2102.07572, 2021.
[7] M. Wang, Y. Lin, Q. Tian, and G. Si, “Transfer learning promotes 6G wireless communications: Recent advances and future challenges,” IEEE Transactions on Reliability, 2021.
[8] C. Parera, Q. Liao et al., “Transfer learning for tilt-dependent radio map prediction,” IEEE Transactions on Cognitive Communications and Networking, vol. 6, no. 2, pp. 829–843, 2020.
[9] F. Zhuang, Z. Qi, K. Duan, D. Xi, Y. Zhu, H. Zhu, H. Xiong, and Q. He, “A comprehensive survey on transfer learning,” Proceedings of the IEEE, vol. 109, no. 1, pp. 43–76, 2020.
[10] M. E. Taylor and P. Stone, “Transfer learning for reinforcement learning domains: A survey,” J. Mach. Learn. Res., vol. 10, pp. 1633–1685, 2009.
[11] Z. Zhu, K. Lin, and J. Zhou, “Transfer learning in deep reinforcement learning: A survey,” CoRR, vol. abs/2009.07888, 2020. [Online]. Available: https://arxiv.org/abs/2009.07888
[12] A. M. Nagib, H. Abou-zeid, and H. S. Hassanein, “Transfer learning-based accelerated deep reinforcement learning for 5G RAN slicing,” IEEE 46th LCN, pp. 249–256, 2021.
[13] T. Mai, H. Yao et al., “Transfer reinforcement learning aided distributed network slicing resource optimization in industrial IoT,” IEEE Transactions on Industrial Informatics, 2021.
[14] T. Hu, Q. Liao, Q. Liu, D. Wellington, and G. Carle, “Inter-cell slicing resource partitioning via coordinated multi-agent deep reinforcement learning,” in IEEE ICC, 2022.
[15] R. L. G. Cavalcante, Q. Liao, and S. Stańczak, “Connections between spectral properties of asymptotic mappings and solutions to wireless network problems,” IEEE Transactions on Signal Processing, vol. 67, pp. 2747–2760, 2019.
[16] S. Fujimoto et al., “Addressing function approximation error in Actor-Critic methods,” ArXiv, vol. abs/1802.09477, 2018.
[17] K. Sohn, H. Lee, and X. Yan, “Learning structured output representation using deep conditional generative models,” in NIPS, 2015.
[18] Nokia Siemens Networks, “Self-organizing network (SON): Introducing the Nokia Siemens Networks SON suite, an efficient, future-proof platform for SON,” White paper, October 2009.
[19] J. Meinilä, P. Kyösti, L. Hentilä, T. Jämsä, E. Suikkanen, E. Kunnari, and M. Narandžić, “Wireless World Initiative New Radio - Winner+,” Technical report, 2010.
[Figure 6 plot: empirical complementary CDF of the maximum slice delay (in s) for TL, Baseline, and DRL]
[Figure 7 plot: heatmap of pairwise inter-agent distance measures]
[Figure 8 plot: TL gain (in %) after 100/200/500/1000/2000 training steps versus distance measure (0.003, 0.011, 0.081, 0.117)]
\ No newline at end of file
diff --git a/stE1T4oBgHgl3EQfjgQE/content/tmp_files/load_file.txt b/stE1T4oBgHgl3EQfjgQE/content/tmp_files/load_file.txt
new file mode 100644
index 0000000000000000000000000000000000000000..5d94595f86ebe5196ed2bfbdfec1be7dd26b9e8d
--- /dev/null
+++ b/stE1T4oBgHgl3EQfjgQE/content/tmp_files/load_file.txt
@@ -0,0 +1,485 @@
filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf,len=484
Network Slicing via Transfer Learning aided Distributed Deep Reinforcement Learning
Tianlun Hu∗‡, Qi Liao∗, Qiang Liu†, and Georg Carle‡
∗Nokia Bell Labs, Stuttgart, Germany; †University of Nebraska Lincoln, United States; ‡Technical University of Munich, Germany
Email: ∗‡tianlun.hu@nokia.com, ∗qi.liao@nokia-bell-labs.com, †qiang.liu@unl.edu, ‡carle@net.in.tum.de
Abstract—Deep reinforcement learning (DRL) has been increasingly employed to handle the dynamic and complex resource management in network slicing. The deployment of DRL policies in real networks, however, is complicated by heterogeneous cell conditions. In this paper, we propose a novel transfer learning (TL) aided multi-agent deep reinforcement learning (MADRL) approach with inter-agent similarity analysis for inter-cell inter-slice resource partitioning. First, we design a coordinated MADRL method with information sharing to intelligently partition resources to slices and manage inter-cell interference. Second, we propose an integrated TL method to transfer the learned DRL policies among different local agents for accelerating the policy deployment.
The method is composed of a new domain and task similarity measurement approach and a new knowledge transfer approach, which resolve the problems of from whom to transfer and how to transfer. We evaluated the proposed solution with extensive simulations in a system-level simulator and show that our approach outperforms the state-of-the-art solutions in terms of performance, convergence speed, and sample efficiency. Moreover, by applying TL, we achieve an additional gain over 27% higher than the coordinated MADRL approach without TL.
I. INTRODUCTION
Network slicing is the key technique in 5G and beyond which enables network operators to support a variety of emerging network services and applications, e.g., autonomous driving, metaverse, and machine learning. The virtual networks (aka network slices) are dynamically created on the common network infrastructure, e.g., base stations, and are highly customized in different aspects to meet the diverse performance requirements of these applications and services. With the ever-increasing network deployment, e.g., small cells, the traffic of slices and the inter-cell interference in radio access networks become more dynamic and complex. Conventional model-based solutions, e.g., linear programming or convex optimization, can hardly handle the ever-complicating resource management problem.
Recent advances in machine learning, especially deep reinforcement learning (DRL) [1], [2], have shown a promising capability to deal with dynamic and high-dimensional networking problems. Machine learning techniques, as model-free approaches, learn from historical interactions with the network and require no prior knowledge, e.g., mathematical models. Several works formulate resource management problems as Markov decision processes (MDPs), which are then solved by using DRL to derive a centralized policy with global observations of the network. As the network scale grows, the action and state spaces of the centralized problem increase exponentially, which challenges the convergence and sample efficiency of DRL. Multi-agent deep reinforcement learning (MADRL) [3], [4] has been exploited to address this issue by creating and training multiple cooperative DRL agents, where each DRL agent focuses on an individual site or cell.
However, training all individual DRL agents from scratch can still be costly and time-consuming, e.g., due to expensive queries with real networks and environments that are unstable from the perspective of individual DRL agents. Recently, transfer learning (TL) [5] based methods have been increasingly studied to improve sample efficiency and model reproducibility in the broad machine learning fields [6]–[8]. The basic idea of TL is to utilize prior knowledge from prelearned tasks to benefit the training process in new tasks. For example, the resource partitioning policy of a cell can be transferred to another cell when they share similar network settings, e.g., bandwidth, transmit power, and traffic pattern. Generally, several questions need to be answered before using TL methods, i.e., what to transfer, from whom to transfer, and how to transfer. Existing TL methods mostly focus on supervised machine learning, e.g., computer vision and natural language processing [9], which provides limited insight for applying TL to DRL tasks [10]–[13]. Therefore, it is imperative to study how TL improves the performance of MADRL, in terms of sample efficiency and fine-tuning cost, for the inter-cell resource partitioning problem.
In this paper, we propose a novel TL aided MADRL approach with domain similarity analysis for inter-slice resource partitioning. First, we design a coordinated MADRL method for the inter-cell resource partitioning problem in network slicing, where DRL agents share local information with each other to mitigate inter-cell interference.
The objective of MADRL is to maximize the satisfaction level of per-slice service requirements in terms of average user throughput and delay in each cell. Second, we design an integrated TL method to transfer the learned DRL policies among different agents for accelerating the policy deployment, where the new method consists of two parts. On the one hand, we propose a feature-based inter-agent similarity analysis approach, which measures the domain and task difference by extracting representative feature distributions in latent space. On the other hand, we propose a new knowledge transfer approach that combines model (policy) and instance transfer. The main contributions of this paper are summarized as follows:
• We design a coordinated MADRL method for the inter-cell resource partitioning problem in network slicing.
• We design a novel inter-agent similarity analysis approach based on the features extracted by a variational auto-encoder (VAE) to evaluate both domain and task similarity between two reinforcement learning agents.
• We design a new knowledge transfer approach that combines model (policy) and instance transfer from the selected source agent to the target agent.
• We evaluate the performance of the proposed solution with extensive simulations in a system-level simulator. The results show that, by applying TL, we achieve an additional gain over 27% higher than the coordinated MADRL approach without TL. Moreover, the performance gain achieved by TL is more significant in the low-data regime.
arXiv:2301.03262v1 [cs.NI] 9 Jan 2023
Figure 1: Dynamic multi-cell slicing resource partitioning
II. SYSTEM MODEL AND DEFINITIONS
We consider a network consisting of a set of cells $\mathcal{K} := \{1, 2, \ldots, K\}$ and a set of slices $\mathcal{N} := \{1, 2, \ldots, N\}$. Each slice $n \in \mathcal{N}$ has predefined average user throughput and delay requirements, denoted as $\phi^*_n$ and $d^*_n$, respectively. The network system runs on discrete time slots $t \in \mathbb{N}_0$. As illustrated in Fig. 1, network operation and maintenance (O&M) periodically adapts the inter-slice resource partitioning for all cells to provide per-slice resource budgets to each cell. Then, within each cell, the radio access network (RAN) scheduler uses the provided resource budgets as constraints and performs resource scheduling and physical resource block (PRB) allocation.
In this paper, we focus on the inter-cell inter-slice resource partitioning problem in network O&M. Considering the diverse slice requirements and dynamic network conditions, we model the multi-cell resource partitioning system as a set of $K$ distributed MDPs $\mathcal{M} := \{\mathcal{M}_1, \ldots, \mathcal{M}_K\}$, with $\mathcal{M}_k := \{\mathcal{S}_k, \mathcal{A}_k, P_k(\cdot), r_k(\cdot), \gamma_k\}$ defined for each agent $k \in \mathcal{K}$ (with a slight abuse of notation, hereafter we use $k$ for cell and agent interchangeably). $\mathcal{S}_k$ and $\mathcal{A}_k$ denote the state space and action space, respectively. $P_k(\cdot) : \mathcal{S}_k \times \mathcal{A}_k \times \mathcal{S}_k \to [0, 1]$ is the transition probability over $\mathcal{S}_k$ and $\mathcal{A}_k$ for cell $k$. $r_k : \mathcal{S}_k \times \mathcal{A}_k \to \mathbb{R}$ is the reward function, which evaluates the network service of all slices in cell $k$, and $\gamma_k$ denotes the discount factor for the cumulative reward calculation.
At each time step $t$, agent $k$ collects state $s_k(t) \in \mathcal{S}_k$ and decides an action $a_k(t) \in \mathcal{A}_k$ according to policy $\pi_k : \mathcal{S}_k \to \mathcal{A}_k$, which indicates the per-slice resource partitioning ratios $a_{k,n} \in [0, 1]$ for $n \in \mathcal{N}$ while aligning with the inter-slice resource constraints. Thus, the local action space $\mathcal{A}_k$ yields
\mathcal{A}_k := \left\{ a_k \,\middle|\, a_{k,n} \in [0, 1], \forall n \in \mathcal{N}; \; \sum_{n=1}^{N} a_{k,n} = 1 \right\}. \quad (1)
For each cell $k \in \mathcal{K}$, our objective is to maximize the minimum service satisfaction level in terms of average user throughput and delay $(\phi^*_n, d^*_n)$ over all slices. Thus, for each agent $k$, we define the local reward function based on the observed per-slice average user throughput $\phi_{k,n}(t)$ and delay $d_{k,n}(t)$ at time $t$ as
r_k(t) := \min_{n \in \mathcal{N}} \min\left\{ \frac{\phi_{k,n}(t)}{\phi^*_{k,n}}, \frac{d^*_{k,n}}{d_{k,n}(t)}, 1 \right\}. \quad (2)
The reward drops below 1 when the actual average throughput or delay of any slice fails to fulfill the requirements. Note that the reward is upper bounded by 1 even if all slices achieve better performance than required, so as to encourage more efficient resource utilization.
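Constraint (1) and reward (2) can be transcribed directly; in the sketch below the function names and the example throughput/delay numbers are ours, for illustration only:

```python
def partition_ratios(raw_scores):
    """Normalize non-negative scores into a feasible partitioning a_k
    satisfying constraint (1): a_{k,n} in [0, 1] and sum_n a_{k,n} = 1."""
    total = sum(raw_scores)
    return [v / total for v in raw_scores]

def local_reward(throughput, thr_req, delay, delay_req):
    """Per-cell reward of eq. (2): min over slices n of
    min(phi_{k,n}/phi*_{k,n}, d*_{k,n}/d_{k,n}, 1)."""
    return min(
        min(phi / phi_star, d_star / d, 1.0)
        for phi, phi_star, d, d_star in zip(throughput, thr_req, delay, delay_req)
    )

a_k = partition_ratios([2.0, 1.0, 1.0])
print(a_k)  # -> [0.5, 0.25, 0.25]
# Slice 1 over-fulfills both targets (capped at 1); slice 2 reaches 80%.
print(local_reward([10.0, 4.0], [8.0, 5.0], [1.2, 2.5], [1.5, 2.0]))  # -> 0.8
```

The inner `min(..., 1.0)` implements the upper bound of 1, and the outer `min` over slices makes the worst-served slice determine the cell's reward.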
The second term in (2) is inversely proportional to the actual delay: if the delay is longer than required, this term is lower than 1.
III. PROBLEM FORMULATION
The Reinforcement Learning Problem: The problem is to find a policy $\pi_k : \mathcal{S}_k \to \mathcal{A}_k$ for each $k \in \mathcal{K}$ that dynamically predicts the optimal inter-slice resource partitioning $a_k(t) \in \mathcal{A}_k$ based on the local state $s_k(t) \in \mathcal{S}_k$, to maximize the expectation of the cumulative discounted reward $r_k(t)$ defined in (2) over a finite time horizon $T$. The problem is given by:
\max_{\pi_k; \, a_k(t) \in \mathcal{A}_k} \mathbb{E}_{\pi_k}\left[ \sum_{t=0}^{T} \gamma_k^t \, r_k\big(s_k(t), a_k(t)\big) \right], \quad \forall k \in \mathcal{K}, \quad (3)
where $\mathcal{A}_k$ is defined in (1).
In our previous work [14], we proposed a coordinated multi-agent DRL approach to transform the MADRL problem into distributed DRL problems similar to (3), where the extracted information from neighboring cells is included in the state observation to better capture the inter-agent dependency. However, training all local agents in parallel from scratch can still be costly and time-consuming.
Moreover, the trained models are sensitive to environment changes, and the retraining cost can be high. Thus, in this paper, we raise the following new questions: Can we reuse the knowledge in a pretrained model? When is the knowledge transferable? And, most importantly, how do we transfer the gained knowledge from one agent to another?
The Transfer Learning Problem: To tackle the transfer learning problem, let us first introduce two definitions, domain and task, in the context of reinforcement learning. A domain $\mathcal{D} := \{\mathcal{S}, P(s)\}$ consists of a state feature space $\mathcal{S}$ and its probability distribution $P(s)$, for $s \in \mathcal{S}$. A task $\mathcal{T} := \{\mathcal{A}, \pi(\cdot)\}$ consists of the action space $\mathcal{A}$ and a policy function $\pi : \mathcal{S} \to \mathcal{A}$.
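The similarity between two agents' domains is later measured on features extracted by a VAE (the evaluation uses encoder layer sizes (64, 24, 4), mapping x ∈ R^17 to z ∈ R^4). Since the distance metric of eq. (7) is not reproduced in this excerpt, the sketch below uses an illustrative stand-in: each agent's latent samples are summarized as a diagonal Gaussian and compared with the closed-form 2-Wasserstein distance. Both the function names and the metric choice are our assumptions, not the paper's definition.

```python
import math

def gaussian_summary(latents):
    """Summarize latent samples z (a list of equal-length vectors)
    as per-dimension (mean, std)."""
    n, d = len(latents), len(latents[0])
    mu = [sum(z[j] for z in latents) / n for j in range(d)]
    sd = [math.sqrt(sum((z[j] - mu[j]) ** 2 for z in latents) / n) for j in range(d)]
    return mu, sd

def w2_diag_gaussian(a, b):
    """Closed-form 2-Wasserstein distance between two diagonal
    Gaussians given as (mean, std) pairs."""
    (mu_a, sd_a), (mu_b, sd_b) = a, b
    return math.sqrt(sum((m1 - m2) ** 2 + (s1 - s2) ** 2
                         for m1, m2, s1, s2 in zip(mu_a, mu_b, sd_a, sd_b)))

# Two agents whose (hypothetical) 2-D latent samples coincide vs. differ.
agent_a = gaussian_summary([[0.0, 0.0], [2.0, 2.0]])
agent_b = gaussian_summary([[3.0, 3.0], [5.0, 5.0]])
print(w2_diag_gaussian(agent_a, agent_a))  # -> 0.0
print(w2_diag_gaussian(agent_a, agent_b))
```

Under this stand-in, identical latent clouds give distance 0, and the distance grows with the shift between the agents' latent feature distributions.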
Thus, our inter-agent transfer learning problem is to find the optimal source agent among a set of pretrained agents, and transfer its knowledge (pretrained model and collected instances) to the target agent, such that problem (3) can be solved in the target agent with fast convergence and a limited amount of samples. In particular, the problem is defined in Problem 1.

Problem 1. Given a set of pretrained source agents $\mathbb{K} \subset \mathcal{K}$ with source domains $\mathcal{D}^{(S)} := \{\mathcal{D}^{(S)}_i : i \in \mathbb{K}\}$ and pretrained tasks $\mathcal{T}^{(S)} := \{\mathcal{T}^{(S)}_i : i \in \mathbb{K}\}$, and given any target agent $k \notin \mathbb{K}$ with target domain $\mathcal{D}^{(T)}_k$ and untrained task $\mathcal{T}^{(T)}_k$, find the optimal source agent $i^*_k \in \mathbb{K}$ for target agent $k$ to transfer knowledge, such that

$$i^*_k := \arg\max_{\pi_k \,|\, \pi^{(0)}_k = \Lambda(\pi^{(S)}_i);\; i \in \mathbb{K}} \ \mathbb{E}_{\pi_k}\left[\sum_{t=0}^{T} \gamma_k^t\, r_k\big(s_k(t), a_k(t)\big)\right] \quad (4)$$

$$\text{s.t.}\quad (s_k, a_k) \in \Gamma\big(\mathcal{D}^{(S)}_i, \mathcal{D}^{(T)}_k, \mathcal{A}^{(S)}_i, \mathcal{A}^{(T)}_k\big),$$

where $\Lambda(\pi^{(S)}_i)$ is the policy transfer strategy which maps a pretrained source policy $\pi^{(S)}_i$ to the initial target policy $\pi^{(0)}_k$, while $\Gamma(\mathcal{D}^{(S)}_i, \mathcal{D}^{(T)}_k, \mathcal{A}^{(S)}_i, \mathcal{A}^{(T)}_k)$ is the instance transfer strategy which selects the instances from the source agent, combines them with the experienced instances from the target agent, and saves them in the replay buffer for model training or fine-tuning in the target agent. More details about the transfer learning strategies will be given in Section IV-C.

[Figure 1: Inter-cell inter-slice resource partitioning, with per-slice (e.g., URLLC, eMBB, mMTC) resource budgets for each cell (gNB 1, gNB 2, gNB 3), managed by O&M.]

IV. PROPOSED SOLUTIONS

In this section, we first present a distributed MADRL approach to solve the slicing resource partitioning problem in (3). Then, to solve problem (4), i.e., to find the optimal source agent, we propose a novel approach to inter-agent similarity analysis based on features extracted using a VAE. Finally, for inter-agent transfer learning, we introduce a transfer learning strategy which combines model (policy) transfer and instance transfer.

A. Coordinated MADRL Approach

As stated in (3), the distributed DRL approach allows each agent to learn a local policy and make its own decision on inter-slice resource partitioning based on local observation.
Compared with centralized DRL approaches, distributed approaches reduce the state and action spaces and significantly accelerate the training progress. However, local observation alone cannot capture the inter-cell dependencies and provide sufficient information to achieve the globally optimal solution. Thus, we proposed in [14] a distributed DRL approach with inter-agent coordination, which keeps the model complexity low while including information extracted from neighboring cells to capture the inter-cell interference. We briefly summarize the coordinated distributed DRL approach below, because we would like to focus on the main contribution of this paper, namely, the inter-agent transfer learning. For more details, readers are referred to our previous work [14].
Each local agent k observes a local state s'_k, which contains the following network measurements: per-slice average user throughput {φ_{k,n} : n ∈ N}; per-slice network load {l_{k,n} : n ∈ N}; and per-slice number of users {u_{k,n} : n ∈ N}. Thus, with the above-defined three slice-specific features, the local state s'_k has dimension 3N. Additionally, to better capture the inter-cell dependencies and estimate the global network performance, we introduce an inter-agent coordination mechanism through network information sharing among agents. Let each agent k broadcast a message m_k to its neighboring group of agents, denoted by K_k, which means each agent k receives a collection of messages m_k := [m_i : i ∈ K_k] ∈ R^{Z(m)}. Instead of using all received messages in m_k, we propose to extract useful information c_k ∈ R^{Z(c)} to retain the low model complexity.
We aim to find a feature extractor g : R^{Z(m)} → R^{Z(c)} : m_k → c_k, such that Z(c) ≪ Z(m). Then, we include the extracted features from the shared messages into the local state: s_k := [s'_k, c_k]. Knowing that the inter-agent dependencies are mainly caused by inter-cell interference based on cell load coupling [15], we propose to let each cell k share its per-slice load l_{k,n}, ∀n ∈ N, with its neighboring cells. Then, we compute the extracted information c_k as the average per-slice neighboring load. Namely, we define a deterministic feature extractor, given by

$$g_k : \mathbb{R}^{N|K_k|} \to \mathbb{R}^{N} : [l_{i,n} : n \in N, i \in K_k] \mapsto c_k(t), \quad c_k(t) := \left[\frac{1}{|K_k|}\sum_{i \in K_k} l_{i,n}(t) : n \in N\right]. \quad (5)$$

[Figure 2: Variational autoencoder.]

With the extended local state including the inter-agent shared information, we can use classical DRL approaches, e.g., actor-critic algorithms such as Twin Delayed Deep Deterministic policy gradient (TD3) [16], to solve (3).
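Assuming the neighbor loads arrive as a |K_k| × N array (one row per neighboring cell, one column per slice), the deterministic extractor g_k in (5) is just a column-wise mean; a minimal numpy sketch (function names are ours):

```python
import numpy as np


def extract_neighbor_features(neighbor_loads: np.ndarray) -> np.ndarray:
    """g_k of (5): average per-slice load over the neighboring cells K_k.

    neighbor_loads: shape (|K_k|, N), entry [i, n] is l_{i,n}(t).
    Returns c_k(t): shape (N,), the mean load of slice n over K_k.
    """
    return neighbor_loads.mean(axis=0)


def extended_state(local_state: np.ndarray, neighbor_loads: np.ndarray) -> np.ndarray:
    """s_k := [s'_k, c_k]: concatenate the 3N local measurements with c_k."""
    return np.concatenate([local_state, extract_neighbor_features(neighbor_loads)])
```

For example, with two neighbors and N = 2 slices, loads `[[0.2, 0.4], [0.4, 0.8]]` yield c_k = [0.3, 0.6], and the extended state grows from 3N to 4N entries.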
B. Integrated TL with Similarity Analysis

The distributed DRL approach introduced in Section IV-A allows us to derive a set of pretrained local agents. Still, given a target cell k, e.g., a newly deployed cell, or an existing cell with a changed environment, more questions need to be answered: Can we transfer the prelearned knowledge from at least one of the pretrained agents? Which source cell provides the most transferable information? How to transfer the knowledge? To solve the transfer learning problem in (4), we develop a distance measure D_{i,k} to quantify the inter-agent similarity between a source agent i and a target agent k. We aim to transfer the knowledge from the source agent with the highest similarity (reflected by the lowest distance measure).
The ideal approach to analyzing the domain and task similarity between two agents is to obtain their probability distributions of the state P(s) and derive the conditional probability distribution P(a|s). However, the major challenge here lies in the limited samples in the target agent. Considering that the target agent is a newly deployed agent, there is no information available about its policy P(a|s), and P(s) is very biased, because all samples are collected under the default configurations (i.e., constant actions). Thus, we need to design a distance measure constrained by very limited and biased samples in the target agent, without any information about its policy P(a|s). Our idea is to derive and compare the joint state and reward distribution under the same default action a', P(s, r | a = a'), in both the source and target agent.
The rationale behind this is that, when applying the actor-critic-based DRL architecture, the critic function estimates the Q value Q^π(a, s) based on action and state. Hence, the conditional probability P(r|s, a) should provide useful information about the policy. With a = a', we can consider estimating P(r|s, a = a'). To efficiently capture the information for both domain similarity (based on P(s|a = a')) and task/policy similarity (based on P(r|s, a = a')), we propose to estimate the joint probability P(s, r|a = a') = P(r|s, a = a') P(s|a = a').

Sample collection: To estimate the distance between P(s, r|a = a') of the source and target agents, we use all available samples from the target agent k under the default action a', $X_k = \{(s_k(n), r_k(n))_{a_k(n)=a'} : n = 1, \ldots, N_k\}$, and select a subset of the samples from the source agent i with the same default action, $X_i = \{(s_i(n), r_i(n))_{a_i(n)=a'} : n = 1, \ldots, N_i\}$. Note that in this subsection we slightly abuse the notation by using n as the index of samples, and N_k as the number of samples with the default action collected from agent k.

Feature extraction with VAE: To extract representative features from the high-dimensional vector [s, r], we propose to apply a VAE [17] to map the samples into a low-dimensional latent space. As Fig. 2 illustrates, for each sample x := [s, r] ∈ X, the encoder of the VAE estimates an approximated distribution P(z) in the latent space Z as a multi-variate Gaussian distribution N(µ, diag(σ)), where diag denotes the diagonal matrix.
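Given the encoder output (µ, σ) for a sample x and a decoder reconstruction x̂, the standard VAE training objective (squared reconstruction error plus a weighted KL term against a standard normal, as stated next in (6)) has a closed form for diagonal Gaussians. A minimal numpy sketch, treating σ as the vector of standard deviations and α as a free hyperparameter:

```python
import numpy as np


def kl_to_standard_normal(mu: np.ndarray, sigma: np.ndarray) -> float:
    """Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ) for a diagonal Gaussian."""
    return 0.5 * float(np.sum(sigma**2 + mu**2 - 1.0 - 2.0 * np.log(sigma)))


def vae_loss(x: np.ndarray, x_hat: np.ndarray,
             mu: np.ndarray, sigma: np.ndarray, alpha: float = 1.0) -> float:
    """Squared reconstruction error plus alpha-weighted KL regularizer."""
    return float(np.sum((x - x_hat) ** 2)) + alpha * kl_to_standard_normal(mu, sigma)
```

A perfect reconstruction with a posterior equal to the prior (µ = 0, σ = 1) gives a loss of exactly zero, which is a handy sanity check.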
The decoder samples a latent variable z ∈ Z from the approximated distribution z ∼ N(µ, diag(σ)) and outputs a reconstructed sample x̂ by training on the following loss function:

$$\mathcal{L} := \|x - \hat{x}\|^2 + \alpha \cdot D_{\mathrm{KL}}\big(\mathcal{N}(\mu, \mathrm{diag}(\sigma)) \,\|\, \mathcal{N}(0, \mathrm{diag}(\mathbf{1}))\big), \quad (6)$$

where α is the weight factor and D_KL denotes the Kullback-Leibler (KL) divergence.

Inter-agent similarity analysis: Since the VAE does not directly provide the probability distribution function P(x), we propose to utilize the extracted features in the latent space to evaluate the inter-agent similarity. Considering the limited amount of samples (only those under the default action), we propose to train a general VAE model based on the samples from all candidate source agents and the target agent, i.e., $X = \bigcup_{j \in \mathbb{K} \cup \{k\}} X_j$. The idea is to extract the latent features from the samples of all relevant agents with a general encoder and to distinguish the agents within a common latent space. Thus, for each sample x_n ∈ X, we can derive its extracted features, i.e., the posterior distribution P(z_n|x_n) = N(µ_n, diag(σ_n)). We denote the extracted latent space for agent k by Z_k. Next, we can measure the inter-agent distance between an arbitrary source agent i and target agent k by calculating the KL divergence based on the extracted latent variables from their collected samples:

$$D_{i,k} := \frac{1}{N_i N_k} \sum_{\substack{(\mu_n, \sigma_n) \in Z_i \\ (\mu_m, \sigma_m) \in Z_k}} D_{\mathrm{KL}}\big(\mathcal{N}(\mu_n, \mathrm{diag}(\sigma_n)) \,\|\, \mathcal{N}(\mu_m, \mathrm{diag}(\sigma_m))\big). \quad (7)$$

This requires computing the KL divergence of every pair of samples (n, m) for n ∈ X_i and m ∈ X_k, which could be computationally intensive. Since both are Gaussian distributions, we can compute them efficiently with a closed-form expression (as will be shown later in (8)). Besides, from our experiments, we observed that σ_n → 0 for nearly all the collected samples x_n ∈ X, i.e., their variances are extremely small (below 10^{-5} in our observations). Thus, for our problem, we can use a trick to evaluate the distance measure more efficiently, based on the following lemma.

Lemma 1. Given two multi-variate Gaussian distributions p = N(µ_n, Σ_n) and q = N(µ_m, Σ_m), where µ_n, µ_m ∈ R^L, Σ_n = Σ_m = diag(σ) ∈ R^{L×L}, and every entry of σ is equal to a small positive constant σ ≪ 1, the KL divergence D_KL(p‖q) is proportional to $\sum_{l=1}^{L}(\mu_{n,l} - \mu_{m,l})^2$.

Proof. It is easy to derive that

$$D_{\mathrm{KL}}(p \,\|\, q) = \frac{1}{2}\left[\log\frac{|\Sigma_m|}{|\Sigma_n|} - L + (\mu_n - \mu_m)^T \Sigma_m^{-1} (\mu_n - \mu_m) + \mathrm{Tr}\big(\Sigma_m^{-1} \Sigma_n\big)\right]. \quad (8)$$

Because Σ_n = Σ_m = diag([σ², ..., σ²]), the first term in (8) equals 0 and the last term equals L.
Thus, we obtain

$$D_{\mathrm{KL}}(p \,\|\, q) = \frac{1}{2\sigma^2} \sum_{l=1}^{L} (\mu_{n,l} - \mu_{m,l})^2. \quad (9)$$

With Lemma 1, we can measure the distance between two agents more efficiently, based on the extracted µ_n and µ_m in the source and target latent spaces. Thus, to solve Problem 1, we propose to choose the source agent

$$i^*_k := \arg\min_{i \in \mathbb{K}} D_{i,k}, \quad (10)$$

where D_{i,k} is computed based on (7) and (9).

C. Integrated Transfer Learning Approach

In general, the prelearned knowledge can be transferred from a source agent i to the target agent k with various policy transfer strategies Λ(·) and instance transfer strategies Γ(·):

Model transfer: The policy transfer strategy Λ(·) simply initializes the target agent's policy π^(0)_k by loading the parameters (e.g., weights of the pretrained neural networks) of the pretrained policy π^(S)_i from the source agent i.
Feature transfer: The policy transfer strategy Λ(·) keeps partial information extracted from the source agent's pretrained policy π^(S)_i. In particular, the target agent loads part of the layers (usually the lower layers) of the pretrained neural networks of π^(S)_i, while leaving the rest of them randomly initialized. Then, during training, the loaded layers are frozen and only the randomly initialized layers are fine-tuned with the instances newly collected by the target agent.

Instance transfer: The instance transfer strategy Γ(·) transfers the collected instances from the source agent i to the target agent k and saves them in the target agent's replay buffer. Then, the target agent trains a policy from scratch with randomly initialized parameters and mixed instances collected from both the source and target agents.

The above-mentioned knowledge from the source domain and source task can be transferred separately or in a combined manner. In this paper, we propose an integrated transfer method with both model and instance transfer.
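Putting the pieces together, the simplified distance of (9), the source selection of (10), and a model-plus-instance transfer step can be sketched as follows. This is only an illustration under our own assumptions: agents are plain dicts, latent means are stacked in arrays of shape (N_i, L), and a single shared σ is used as in Lemma 1:

```python
import numpy as np


def agent_distance(mu_source: np.ndarray, mu_target: np.ndarray,
                   sigma: float = 1e-3) -> float:
    """D_{i,k} via Lemma 1: the average pairwise KL reduces to scaled squared
    distances between latent means (shapes (N_i, L) and (N_k, L))."""
    diffs = mu_source[:, None, :] - mu_target[None, :, :]      # (N_i, N_k, L)
    total = float(np.sum(diffs ** 2)) / (2.0 * sigma ** 2)
    return total / (len(mu_source) * len(mu_target))


def select_source(latents_by_agent: dict, mu_target: np.ndarray):
    """(10): pick the source agent whose latent features are closest to the target's."""
    return min(latents_by_agent,
               key=lambda i: agent_distance(latents_by_agent[i], mu_target))


def integrated_transfer(source: dict, target: dict) -> None:
    """Model transfer (copy pretrained weights) plus instance transfer
    (prepend the source's replay buffer to the target's)."""
    target["policy_weights"] = [w.copy() for w in source["policy_weights"]]
    target["replay_buffer"] = source["replay_buffer"] + target["replay_buffer"]
```

After `integrated_transfer`, the target fine-tunes the copied policy on the mixed replay buffer, matching the integrated approach described above.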
Specifically, the target agent k initializes its local policy π^(0)_k by loading the pretrained policy of the source agent, π^(S)_i, and fine-tunes the policy by sampling from the replay buffer containing both types of instances: those transferred from the source agent and those locally experienced. Here, we skip feature transfer because it practically performs well only when the similarity between the source domain/task and the target domain/task is very high. Although this assumption may hold for some regression and classification tasks, we empirically find that it fails in this context of MADRL.

V. PERFORMANCE EVALUATION

In this section, we evaluate the performance of the proposed solution within a system-level simulator [18].

[Figure 3: Traffic mask to imitate the time-varying network traffic. Figure 4: Comparing reward during the training process. Figure 5: Comparing CDF of minimum slice throughput satisfaction.]

The simulator achieves great accuracy in imitating real network systems, with configurable user mobility, network slicing traffic, and topology.
In addition, we introduce a traffic-aware baseline which allocates resources proportionally to the data traffic demand per slice. Note that the baseline assumes perfect information about per-cell per-slice traffic demands, which already provides very good results.

1) Network settings: We build a radio access network with 4 three-sector sites (i.e., K = 12 cells). All cells are deployed using LTE radio technology at 2.6 GHz under a realistic radio propagation model, Winner+ [19]. Each cell has N = 4 slices with diverse per-slice requirements in terms of average user throughput and delay.
In the cells with labels 1, 2, 3, 7, 8, 9, we define per-slice average throughput requirements of φ*_1 = 4 MBit/s, φ*_2 = 3 MBit/s, φ*_3 = 2 MBit/s, and φ*_4 = 1 MBit/s, and per-slice delay requirements of d*_1 = 3 ms, d*_2 = 2 ms, and d*_3 = d*_4 = 1 ms. In the cells with labels 4, 5, 6, 10, 11, 12, we define per-slice throughput requirements of φ*_1 = 2.5 MBit/s, φ*_2 = 2 MBit/s, φ*_3 = 1.5 MBit/s, and φ*_4 = 1 MBit/s, and delay requirements of d*_n = 1 ms, ∀n ∈ N. All cells have the same radio bandwidth of 20 MHz. We define four groups of user equipment (UE), associated with the four slices in each cell; each UE group has a maximum size of 32 and moves randomly within the defined network scenario. To mimic the dynamic behavior of real user traffic, we apply a time-varying traffic mask τ_n(t) ∈ [0, 1] to each slice to scale the total number of UEs in each cell; Fig. 3 shows the traffic mask for the first 200 steps.

2) DRL training configuration: For MADRL training, we implemented the TD3 algorithm at each local agent, using a multi-layer perceptron (MLP) architecture for the actor-critic networks. In each TD3 model, the actor and critic neural networks consist of two layers with (48, 24) and (64, 24) neurons, respectively. The learning rates of the actor and critic are 0.0005 and 0.001, respectively, with the Adam optimizer and a training batch size of 32. We set the discount factor to γ = 0.1, since the current action has a stronger impact on instant network performance than future observations. For the distributed DRL agents, we applied 3000 steps for exploration, 5500 steps for training, and a final 250 steps for evaluation.
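For reference, the training configuration above can be collected into a single dictionary; the values are transcribed from the text, while the field names are our own:

```python
# Hyperparameters of the per-agent TD3 setup described in the text.
td3_config = {
    "actor_layers": (48, 24),      # two-layer MLP actor
    "critic_layers": (64, 24),     # two-layer MLP critic
    "actor_lr": 5e-4,
    "critic_lr": 1e-3,
    "optimizer": "Adam",
    "batch_size": 32,
    "discount_gamma": 0.1,         # short horizon: current action dominates
    "exploration_steps": 3000,
    "training_steps": 5500,
    "evaluation_steps": 250,
}
```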
For the TL training process, we apply the same model setups as in the DRL approaches, but set only 4000 steps for training and 250 for evaluation, since knowledge transfer saves the time spent on exploration.
3) Comparing DRL to the TL-aided approach: In Fig. 4 we compare the evolution of the reward during training among the baseline, the DRL approach (proposed in Section IV-A), and the TL approaches transferred from source agents with low and high similarity (proposed in Sections IV-B and IV-C), respectively. For DRL, we plot the first 4000 steps, i.e., the same training time as the TL approaches, with a solid line and the rest of the training curve with a dashed line. As shown in Fig. 4, the distributed DRL approach learns to achieve a reward similar to the baseline after a lengthy exploration phase, while both TL approaches start much higher than DRL.
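The per-step comparison against DRL can be sketched as a relative gain computation. This is one plausible reading of "comparing the reward to the DRL approach at the same training steps"; the paper's exact gain definition may differ, and the reward traces below are made up for illustration.

```python
def tl_gain(reward_tl, reward_drl):
    """Relative per-step gain of TL over DRL, evaluated at the same training step.

    Illustrative stand-in for the gain metric discussed in the text; not the
    paper's Eq.-level definition, which is not reproduced in this excerpt.
    """
    return [(tl - drl) / drl for tl, drl in zip(reward_tl, reward_drl)]

# Toy reward traces: TL starts higher, both converge toward a similar level.
drl = [0.50, 0.60, 0.75, 0.85, 0.90]
tl  = [0.70, 0.78, 0.84, 0.88, 0.91]
gains = tl_gain(tl, drl)
print([round(g, 3) for g in gains])  # → [0.4, 0.3, 0.12, 0.035, 0.011]
```

The shrinking gain over time mirrors the observation that TL's advantage is largest early in training and narrows as DRL catches up.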
After a short fine-tuning period, the TL approaches outperform the baseline with higher robustness, especially during periods with high traffic demand and strong inter-cell interference, where the baseline suffers sharp performance degradation. Moreover, comparing TL from agents with different similarity, we observe that higher similarity yields a higher start at the early stage of training, while both variants converge to similar performance.
For performance evaluation, we compare the statistics of the minimum per-slice throughput satisfaction level and the maximum per-slice delay across all cells for the baseline, the distributed DRL approach, and the proposed TL approach after convergence. Fig. 5 illustrates the empirical complementary cumulative distribution function (CCDF), which equals 1 − F_X(x), where F_X(x) is the CDF of the minimum per-slice throughput satisfaction level. We observe that the TL approach provides the best performance: only about 12% of cells fail to satisfy 0.95 of the requirement, while the converged DRL approach and the baseline show failure rates of 19% and 25%, respectively. In terms of the average satisfaction level, the TL approach achieves 0.92, while DRL and the baseline only reach 0.90 and 0.87. A similar observation can be made from Fig. 6, which illustrates the CDF of the maximum slice delay in ms: the TL approach provides a maximum average per-slice delay of 1.5 ms, while DRL achieves 1.7 ms and the baseline 1.8 ms.
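The statistic plotted in Fig. 5 can be sketched directly from per-cell samples. The sample values below are invented for illustration; only the 0.95 threshold comes from the text.

```python
def empirical_ccdf(samples, x):
    """Empirical complementary CDF: 1 - F_X(x) = P(X > x)."""
    return sum(1 for s in samples if s > x) / len(samples)

# Toy per-cell minimum satisfaction levels. A "failure" is a cell whose
# minimum per-slice satisfaction level does not exceed the 0.95 threshold.
levels = [0.99, 0.97, 0.96, 0.94, 0.92, 0.99, 0.98, 0.90]

# Fraction of cells failing the threshold is the complement of the CCDF there
# (no sample equals 0.95 exactly here, so strict vs. non-strict does not matter).
failure_rate = 1.0 - empirical_ccdf(levels, 0.95)
print(failure_rate)  # → 0.375
```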
4) Inter-agent similarity analysis: We implemented the similarity analysis method introduced in Section IV-B with a VAE model in an MLP architecture; the encoder and decoder networks each consist of 3 layers, with (64, 24, 4) and (4, 24, 64) neurons, respectively. To achieve a good trade-off between a low-dimensional latent space and accurate reconstruction with the VAE, we map the original sample x ∈ R^17 to the latent variable z ∈ R^4. Fig. 7 illustrates the results of the inter-agent similarity analysis in terms of the distance measure proposed in (7). It shows that our proposed method can distinguish cells with different per-slice service quality requirements and group the cells with similar joint state-reward distributions.
5) Dependence of TL performance on the distance measure: In Fig. 8 we compare the benefits of TL during training when transferring knowledge from source agents with different average inter-agent distance measures.
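A minimal sketch of such an inter-agent distance on the 4-dimensional VAE latents z ∈ R^4: since the paper's Eq. (7) is not reproduced in this excerpt, the Euclidean distance between mean latent vectors below is a hypothetical stand-in that only illustrates the shape of the computation.

```python
import math

def agent_distance(latents_a, latents_b):
    """Euclidean distance between two agents' mean latent vectors.

    Hypothetical stand-in for the paper's distance measure (7), applied to
    encoder outputs z in R^4 collected from each agent.
    """
    def mean(vectors):
        n = len(vectors)
        return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]
    ma, mb = mean(latents_a), mean(latents_b)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(ma, mb)))

# Toy latent samples (encoder outputs) for two agents.
agent1 = [[0.1, 0.2, 0.0, 0.3], [0.3, 0.2, 0.2, 0.1]]
agent2 = [[1.0, 1.1, 0.9, 1.2], [1.2, 0.9, 1.1, 1.0]]
print(agent_distance(agent1, agent2))
```

Agents whose state-reward samples encode to nearby latent regions get a small distance and would be grouped together, as in Fig. 7.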
[Figure 3: traffic mask τ_n(t) for Slices 1–4 over the first 200 timestamps.]
[Figure 4: reward evolution of the baseline, DRL, TL (high similarity), and TL (low similarity) over 8000 training steps.]
[Figure 5: complementary CDF of the minimum per-slice throughput satisfaction level for TL, baseline, and DRL.]
Figure 6: Comparing the CDF of the maximum slice delay. Figure 7: Inter-agent distance measure. Figure 8: TL performance gain depending on the distance measure.
The TL gains are derived by comparing the reward to that of the DRL approach at the same training steps. The results show that within the first 200 steps of TL training, the TL approach with the lowest distance measure provides about 3% higher gain than the one with the largest distance. As training continues, the gains of all TL approaches increase with local fine-tuning, and the difference between transferring from highly similar and less similar agents shrinks. However, TL from the most similar agent provides higher gains at all training steps.
6) Key takeaways: We summarize the takeaways from the numerical results as follows:
- All distributed DRL-based approaches achieve better per-slice network service than the traffic-aware baseline after convergence. However, the TL schemes outperform the conventional DRL approach in terms of convergence rate and initial and converged performance.
- Our proposed VAE-based similarity measure quantifies the distance between agents well and can be used to suggest a mapping from the defined distance measure to the transfer learning performance gain.
- The difference between the gains achieved by TL from highly similar and less similar agents is most significant when the number of training steps is low (i.e., with limited online training samples). Although the advantage of transferring from a highly similar agent over a less similar one decreases as the number of online training steps increases, a slight performance gain is always achieved by transferring knowledge from the most similar source agent.
VI. CONCLUSION
In this paper, we formulated the dynamic inter-slice resource partitioning problem to optimize the network requirement satisfaction level of all slices in each cell.
To tackle inter-cell interference, we proposed a coordinated MADRL method with a coordination scheme based on information sharing. We proposed a novel integrated TL method to transfer the learned DRL policies among different local agents to accelerate policy deployment. The method is built on a new inter-agent similarity measurement approach and a new knowledge transfer approach. We evaluated the proposed solution with extensive simulations in a system-level simulator, where the results show that our approach outperforms conventional DRL solutions.
ACKNOWLEDGMENT
This work was supported by the German Federal Ministry of Education and Research (BMBF) project KICK [16KIS1102K].
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'} +page_content=' Suikkanen, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'} +page_content=' Kunnari, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'} +page_content=' Narandˇzi´c, Wireless World Initiative New Radio - Winner+.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'} +page_content=' Technical report, 2010.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'} +page_content=' 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'} +page_content='0 CDF 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'} +page_content='9 1 Complementary 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'} +page_content='8 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'} +page_content='7 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'} +page_content='6 TL 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'} +page_content='5 Baseline DRL 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'} +page_content='4 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'} +page_content='0010 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'} +page_content='0012 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'} +page_content='0014 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'} +page_content='0016 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'} +page_content='0018 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'} +page_content='002 Max Slice Delay [in s]2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'} +page_content='175 m 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'} +page_content='150 4 5 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'} +page_content='125 6 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'} +page_content='100 7 080 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'} +page_content='075 6 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'} +page_content='050 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'} +page_content='025 2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/stE1T4oBgHgl3EQfjgQE/content/2301.03262v1.pdf'} +page_content='000 1 2 5 6 > 10 11 1227 26 [in 25 Gain 24 after 100 steps TL after 200 steps 23 after 500 steps after1000 steps 22 after 2000 steps 0.' 
diff --git a/u9FAT4oBgHgl3EQfiB2Y/content/2301.08597v1.pdf b/u9FAT4oBgHgl3EQfiB2Y/content/2301.08597v1.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..4c576e327403f768ea6db8d300a8fc9a040f7225
--- /dev/null
+++ b/u9FAT4oBgHgl3EQfiB2Y/content/2301.08597v1.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:74e000c2ae85ed6b7f4228981a7607b0490cbc86d981179a0418659290bd91b2
+size 1263826
diff --git a/u9FAT4oBgHgl3EQfiB2Y/vector_store/index.pkl b/u9FAT4oBgHgl3EQfiB2Y/vector_store/index.pkl
new file mode 100644
index 0000000000000000000000000000000000000000..1172a4de01de46e248a4c2a5b4d8cccb131ff563
--- /dev/null
+++ b/u9FAT4oBgHgl3EQfiB2Y/vector_store/index.pkl
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:72f0b5c0502f52c23d02c5350308728b92d11bec2ebecc8a6f964b2f89272fe4
+size 388396
diff --git a/uNE5T4oBgHgl3EQfLQ6L/content/tmp_files/2301.05472v1.pdf.txt b/uNE5T4oBgHgl3EQfLQ6L/content/tmp_files/2301.05472v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..5e14529883aba2a7b848f58b8232d89534d447fe
--- /dev/null
+++ b/uNE5T4oBgHgl3EQfLQ6L/content/tmp_files/2301.05472v1.pdf.txt
@@ -0,0 +1,1748 @@
arXiv:2301.05472v1 [math.AP] 13 Jan 2023
EXISTENCE OF SOLUTIONS TO A CLASS OF ONE-DIMENSIONAL MODELS
FOR PEDESTRIAN EVACUATIONS∗
BORIS ANDREIANOV† AND THEO GIRARD‡

Abstract. In the framework inspired by R. L. Hughes' model (Transp. Res. B, 2002) for pedestrian evacuation in a corridor, we establish existence of a solution by a topological fixed-point argument. This argument applies to a class of models where the dynamics of the pedestrian density ρ (governed by a discontinuous-flux Lighthill, Whitham and Richards model ρt + (sign(x − ξ(t))ρv(ρ))x = 0) is coupled via an abstract operator to the computation of a Lipschitz continuous "turning curve" ξ. We illustrate this construction by several examples, including the standard Hughes model with affine cost, either with open-end conditions or with conditions corresponding to panic behaviour with capacity drop at exits. Other examples put forward versions of the Hughes model with inertial dynamics of the turning curve and general costs.

Key words. crowd dynamics, pedestrian evacuation, Hughes' model, capacity drop, existence, Schauder fixed point, admissible solution, discontinuous-flux conservation law, memory, relaxation

MSC codes. 35L65, 47H10

1. Introduction.

1.1. The Hughes model and its variants. The Lighthill, Whitham and Richards (LWR) model for traffic, introduced in [18] and in [20], consists in a conservation law for the vehicle density ρ with a concave positive flux ρv(ρ):

(1.1)  ρt + [ρv(ρ)]x = 0,   ρ(t = 0, x) = ρ0(x).

Here we can suppose that the density ρ takes its values in [0, 1] and that v stands for the speed of the traffic. This model can be seen as the mass conservation equation in which the velocity v depends only on the traffic density ρ. One frequently chooses v(ρ) = 1 − ρ, up to a multiplicative constant representing the maximal velocity. This describes transport of the initial density of agents ρ0 at t = 0 towards x = +∞, where the speed decreases as the density of agents increases.
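Equation (1.1) with the normalized speed v(ρ) = 1 − ρ can be explored numerically with a standard first-order Godunov finite-volume scheme. The sketch below is illustrative only and is not part of the paper's analysis; all function names are our own.

```python
import numpy as np

def f(r):
    """LWR flux f(r) = r*v(r) with the normalized speed v(r) = 1 - r."""
    return r * (1.0 - r)

def godunov_flux(rl, rr):
    """Godunov numerical flux for the concave flux f (maximum at r = 1/2)."""
    if rl <= rr:
        return min(f(rl), f(rr))
    return f(0.5) if rl >= 0.5 >= rr else max(f(rl), f(rr))

def lwr_step(rho, dx, dt):
    """One explicit conservative step on the whole line; zero ghost cells
    encode that the density vanishes far away from [-1, 1]."""
    ext = np.concatenate(([0.0], rho, [0.0]))
    flx = np.array([godunov_flux(ext[i], ext[i + 1]) for i in range(len(ext) - 1)])
    return rho - dt / dx * (flx[1:] - flx[:-1])

x = np.linspace(-2.0, 2.0, 401)
dx = x[1] - x[0]
dt = 0.4 * dx                                # CFL condition: sup|f'| = 1 on [0, 1]
rho = np.where(np.abs(x) <= 1.0, 0.8, 0.0)   # initial crowd inside the corridor
mass0 = rho.sum() * dx
for _ in range(80):
    rho = lwr_step(rho, dx, dt)
```

Being monotone and conservative, the scheme preserves the invariant region [0, 1] and, as long as nothing reaches the boundary of the computational window, the total mass on the line.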
Then, in [17], Hughes proposed a model of pedestrian evacuation as a system of two equations on ρ and φ, known as Hughes' model. In the multi-dimensional model, ρ is the density of pedestrians with respect to time t and space x. The dynamics of ρ is governed by LWR conservation laws with a direction field oriented towards the exits of a bounded domain Ω. In order to prescribe the direction towards the exit preferred by a pedestrian at location x at time t, Hughes defines φ(t, x), the "potential field", satisfying an eikonal equation. The potential φ is zero on the exits located on ∂Ω. A pedestrian then chooses to "descend the gradient" of this potential in order to leave the domain Ω through these exits. The theory of the Hughes model is as yet incomplete, even in one space dimension. In the 1D case, the model of [17] takes the form:

(1.2a)  ρt + [sign(−∂xφ)ρv(ρ)]x = 0
(1.2b)  ρ(t, x = ±1) = 0
(1.2c)  |∂xφ| = 1/v(ρ)
(1.2d)  φ(t, x = ±1) = 0.

The problem (1.2) is set up in a corridor with two exits; upon renormalization, we assume that Ω = (−1, 1) and that the exits are located at x = ±1. At t = 0 the pedestrians are distributed with a given density ρ0 defined on [−1, 1], and at t > 0 the pedestrians want to leave the corridor by either one of the exits (as if a fire alarm starts ringing at t = 0).

∗Submitted to the editors DATE.
†Institut Denis Poisson CNRS UMR 7013, Université de Tours, Université d'Orléans, Parc Grandmont, 37200 Tours, France and Peoples' Friendship University of Russia (RUDN University), 6 Miklukho-Maklaya St, Moscow, 117198, Russian Federation (Boris.Andreianov@lmpt.univ-tours.fr, https://www.idpoisson.fr/andreianov/).
‡Institut Denis Poisson, Université de Tours, Parc Grandmont, 37200 Tours, France (theo.girard@lmpt.univ-tours.fr).
The pedestrians move forward (with the positive flux ρ ↦ ρv(ρ)) or backward (with ρ ↦ −ρv(ρ)) depending on the sign of ∂xφ. This makes (1.2a) a discontinuous-flux LWR conservation law. The sign of ∂xφ is prescribed by the eikonal equation (1.2c), where c(ρ) = 1/v(ρ) is a cost function that is high where the crowd is slow. Consequently, the pedestrians tend to avoid those "congested" regions.
The Dirichlet boundary condition (1.2b) on the density ρ is understood in the Bardos-LeRoux-Nédélec sense, standard for scalar conservation laws; it is shown in [5, Sect. 3] that upon extending ρ0 by the value zero on R\[−1, 1], one can replace the initial-boundary value problem (1.2a)-(1.2b) with ρ0 : (−1, 1) −→ [0, 1] by the pure initial-value problem for (1.2a) with the extended datum ρ0 : R −→ [0, 1] (the extension means that ρ0, now defined on R, is supported in [−1, 1]). We adopt this viewpoint and require, throughout the paper,

(1.3)  ρ0 ∈ L∞(R; [0, 1]),  ρ0(x) = 0 for x ∉ [−1, 1];

note that, being compactly supported, ρ0 ∈ L1(R). Assumption (1.3) for the conservation law (1.2a) set up in the whole space can be seen as an "open-end condition" at the exits; we refer to Section 4 for models with more involved exit behavior.
In [13], the 1D Hughes model (1.2) was reformulated in terms of a "turning curve" ξ(t) instead of the potential φ. Following the turning curve approach, our prototype model in the sequel will be:

(1.4a)  ρt + [sign(x − ξ(t))ρv(ρ)]x = 0
(1.4b)  ∫_{−1}^{ξ(t)} c(ρ(t, x)) dx = ∫_{ξ(t)}^{1} c(ρ(t, x)) dx,

with ρ defined for t ∈ [0, T], T > 0, and x ∈ R, and with initial datum of the form (1.3). Here c denotes a generic cost function. It is proven in [13] that we can equivalently consider either the Hughes model potential equation (1.2c)-(1.2d) or the reformulated problem (1.4b) with the cost function c(ρ) = 1/v(ρ).
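Since the cost is bounded below by 1, the difference between the two sides of (1.4b) is increasing in ξ(t), so the turning point can be computed by bisection on the cumulative cost. A minimal numerical sketch for a density sampled on a grid, with the affine cost c(ρ) = 1 + αρ (the function name `turning_point` is ours):

```python
import numpy as np

def turning_point(rho, x, alpha=1.0):
    """Solve  int_{-1}^{xi} c(rho) dx = int_{xi}^{1} c(rho) dx  for xi by
    bisection, with the affine cost c(r) = 1 + alpha*r; rho is sampled
    on the grid x covering [-1, 1]."""
    cost = 1.0 + alpha * rho
    # cumulative trapezoidal integral C(y) = int_{-1}^{y} c(rho(x)) dx
    cum = np.concatenate(([0.0],
                          np.cumsum(0.5 * (cost[1:] + cost[:-1]) * np.diff(x))))
    total = cum[-1]

    def imbalance(xi):
        # left integral minus right integral; increasing in xi since c >= 1
        return 2.0 * np.interp(xi, x, cum) - total

    lo, hi = x[0], x[-1]
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if imbalance(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

x = np.linspace(-1.0, 1.0, 201)
xi_sym = turning_point(np.full_like(x, 0.5), x)          # symmetric crowd
xi_left = turning_point(np.where(x < 0.0, 0.9, 0.1), x)  # heavier left half
```

For the symmetric profile the turning point sits at the center of the corridor. For the piecewise-constant profile with the heavier left half, the balance 1.9(ξ + 1) = 1.1 − 1.9ξ gives ξ = −0.8/3.8 ≈ −0.21: the turning curve shifts towards the congested side, shortening the path of the slower part of the crowd.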
Here, however, we will consider a cost verifying the following conditions:

(1.5)  c ∈ W1,∞([0, 1]),  c(ρ) ≥ 1 for all ρ ∈ [0, 1],  c is increasing on [0, 1].

In (1.4), ρ is considered to be an entropy solution to (1.4a). Such a notion of solution, with particular attention to the admissibility of the jump of ρ across the turning curve x = ξ(t), was proposed in [13] (we will slightly simplify this solution notion). On the other hand, ξ is a pointwise defined solution to (1.4b), whose existence in L∞ and uniqueness follow from the intermediate value theorem under the conditions (1.5).
In this paper, we will consider a class of generalisations of the "turning curve" model, keeping in mind the fact that, even in the setting (1.4), little is known about the well-posedness of the Hughes model. For notation's sake, we consider a generic concave positive flux f such that f(0) = f(1) = 0 (one can take f(ρ) = ρv(ρ) to recover the LWR model):

(1.6a)  ρt + [sign(x − ξ(t))f(ρ)]x = 0
(1.6b)  ρ(0, x) = ρ0(x)
(1.6c)  ξ = I(ρ).

Here I is an abstract operator mapping the density ρ to a turning curve ξ. The problem (1.4) is a particular case of (1.6) where I is the solver of the integral equation (1.4b). Stating (1.6b), we mean that ρ0 fulfills (1.3), which corresponds to open-end evacuation at the exits, as stated above.
Let us briefly discuss known results on the specific problem (1.4) and its variants. In [13], uniqueness is proven for a definition of entropy solutions taking the discontinuity into account but considering ξ as given beforehand (we will revisit this result in Section 2). In [2], global existence for Hughes' model (with c(ρ) = 1/v(ρ)) is proven under the assumption that the density at the turning curve is zero for all times. In [5], a uniqueness result in the same setting as this paper, assuming moreover the BV regularity of the solutions, is provided.
In [23], [15] and [16] one can find numerical studies of the model. A proof of existence and uniqueness for the regularized problem can be found in [12]. The Hughes model is also revisited, with a different turning curve equation, in [10], together with numerical simulations. In that paper, the authors introduce a regularization by convolution of the density, named the subjective density. We use the same type of idea when applying our main result in the case of a general cost function c. The only general (with respect to the choice of the initial data) existence result is contained in [5], where solutions with BVloc regularity away from the turning curve were constructed via a well-chosen many-particle approximation. The result of [5] for problem (1.4) is limited to the case of an affine cost c(ρ) = 1 + αρ. Our result for the original setting (1.4) will also be limited to the affine cost case, but we provide a shorter and less specific argument compared to the many-particle approximation of [5], and we require fewer assumptions on the velocity profile v than [5]. The fixed-point approach we develop appears to be rather flexible, since it permits to handle several models of the form (1.6). We also adapt the arguments to exit behavior of the "capacity drop" kind, more realistic in the setting of crowd evacuation (cf. [8, 7]). However, we highlight the fact that our approach is restricted to situations where Lipschitz continuity of the turning curve ξ is guaranteed for the model at hand, which appears to be a strong restriction on its applicability; this restriction also appears in [5].

1.2. Abstract framework and general results. In this paper we propose an existence result for problem (1.6), elaborated through a fixed-point argument under abstract assumptions on I. Roughly speaking, we require that I map any admissible solution ρ of the equation (1.6a) to a Lipschitz continuous turning curve ξ.
Furthermore, the Lipschitz constant of those turning curves must be uniformly bounded for any ρ. We stress that the Hughes model with affine cost c(ρ) = 1 + αρ enters our abstract framework. However, it is not clear whether, for general costs satisfying (1.5), the required Lipschitz bounds hold true. This issue for the original Hughes model is left for further investigation. Models with more regular dependence of ξ on ρ can be considered as well, including memory and relaxation effects, and for these models the Lipschitz continuity of ξ is justifiable for general costs.
First, let us introduce some notation that will be used throughout the whole paper.
• We denote {x < ξ(t)} := {(t, x) ∈ [0, T] × R s.t. x < ξ(t)}. Analogously, we use {x = ξ(t)} and {x > ξ(t)}.
• For any r > 0, we write
BW1,∞(0, r) := { ξ ∈ W1,∞((0, T), R) s.t. ∥˙ξ∥∞ + ∥ξ∥∞ ≤ r }.
• Analogously, we write BL1(0, r) for the set of ρ ∈ L1((0, T) × R, [0, 1]) such that ∥ρ∥L1((0,T)×R) ≤ r.
In problem (1.6), ρ is taken as an admissible solution to the discontinuous-flux LWR equation (1.6a). On the way to proving the existence result, we propose and use a slightly simpler notion of admissible solution for this equation than the notion used in [13], [2] and [1]. These notions of solution are equivalent.

Definition 1.1. Let ξ ∈ W1,∞((0, T)). Let ρ0 ∈ L1(R, [0, 1]). Let f be a concave positive flux such that f(0) = 0 = f(1), and set F(t, x, ρ) := sign(x − ξ(t))f(ρ).
We say that ρ ∈ L1((0, T) × R, [0, 1]) is an admissible solution to

(1.7)  ρt + F(t, x, ρ)x = 0,  ρ(t = 0, ·) = ρ0(·)

if
• for all φ ∈ C∞c((0, T) × R),

(1.8)  ∫∫_Ω ρφt + F(t, x, ρ)φx dt dx = 0;

• for all positive φ ∈ C∞c({x < ξ(t)}) (resp. φ ∈ C∞c({x > ξ(t)})), for all k ∈ [0, 1],

(1.9)  − ∫∫_Ω |ρ − k|φt + q(ρ, k)φx dt dx − ∫_R |ρ0 − k|φ(0, x) dx ≤ 0,

where we set

(1.10)  q(u, v) := sign(u − v) [F(t, x, u) − F(t, x, v)].

Note that the notion of solution makes sense for an arbitrary initial datum ρ0 ∈ L1(R, [0, 1]), but in order to keep consistency with the standard Hughes setting we will restrict our attention to data ρ0 that fulfill (1.3).

Remark 1.2. Note that in the above definition, no admissibility condition is prescribed at {x = ξ(t)}. Only conservativity (the Rankine-Hugoniot condition following from (1.8)) is required at the location of the turning curve.

Remark 1.3. Definition 1.1 implies that ρ ∈ C0([0, T], L1(R)). This is proved by an adaptation of the argument in [9]; such an adapted proof can be found in [21]. This fact makes sense of the notation ρ(t, ·) without ambiguity.

For a given (and fixed) ξ ∈ W1,∞((0, T)), it is shown below that this notion of solution gives a well-posed discontinuous-flux conservation law in L1((0, T) × R) when ρ0 belongs to L1(R; [0, 1]). We then define the solver operator:

(1.11)  S0 : W1,∞((0, T)) −→ L1((0, T) × R),  ξ ↦ ρ.

This operator S0 maps a turning curve ξ to S0(ξ) = ρ, the unique admissible solution, in the sense of Definition 1.1, to (1.6a)-(1.6b) set up in the whole one-dimensional space.

Remark 1.4. The uniqueness of a solution in the sense of Definition 1.1 still holds for
F(t, x, p) := 1{x<ξ(t)}fL(p) + 1{x>ξ(t)}fR(p),
where fL (resp. fR) is a convex negative (resp. concave positive) flux such that fL(0) = fL(1) = fR(0) = fR(1) = 0. These are the core properties of the fluxes on which our proof relies. For instance, modeling a slanted corridor, we can consider fL,R(ρ) := vL,R ρ(1 − ρ), where vL and vR are positive constants accounting for the difference in speed for a pedestrian moving towards the right or the left exit.

We now present the notion of solution used for the generalized Hughes model given by system (1.6). Recalling Remark 1.3, it makes sense for the operator equation (1.6c) to be verified for all t ∈ [0, T].
In fact, we will require that ξ ∈ W1,∞((0, T)) in order to obtain our main result. We then use the classical embedding result to identify ξ with a unique element of C0([0, T]).

Definition 1.5. Consider I : L1((0, T) × R) −→ C0([0, T]). We say that (ρ, ξ) is a solution to the generalized Hughes model (1.6) if ρ is a solution to (1.6a)-(1.6b) in the sense of Definition 1.1 and, moreover, the equality ξ = I(ρ) holds in C0([0, T]).

Notice that such a solution can be seen as a fixed point of the composed operator S0 ◦ I. In order to prove the existence of a solution, we prove a variant of Schauder's fixed-point theorem (see [25]). To be specific, denoting by I : ρ ↦ ξ the operator that serves to compute the interface and by D : ξ ↦ ρ the one that serves to compute the density, we prove the following statement:

Lemma 1.6. Let (X, ∥·∥X) be a Banach space, (Y, ∥·∥Y) a metric space and K a compact subset of Y. Take D : (K, ∥·∥Y) −→ (X, ∥·∥X) a continuous operator. Assume there exists B, a bounded closed convex subset of X, such that:

(1.12a)  I : (B, ∥·∥X) −→ (K, ∥·∥Y) is a continuous operator;
(1.12b)  D ◦ I(B) ⊂ B.

Then D ◦ I admits a fixed point in B.

Remark 1.7. We stress that assumption (1.12a) implies that, on the subset B, I takes its values in K, making D ◦ I well-defined on B.

The assumptions of Lemma 1.6 permit us to formulate sufficient conditions for the existence of a solution in the sense of Definition 1.5. Specifically, the use of the sets BW1,∞(0, r) (as K) and C0([0, T]) (as Y) is the key to the application of the Schauder fixed-point argument to S0 ◦ I under reachable assumptions on I in the Hughes model framework.
We prove in Section 2 the following proposition, saying that S0 is continuous. This continuity matches the one required for the operator D in the above lemma.

Proposition 1.8. Let ρ0 verify (1.3).
If f satisfies the non-degeneracy condition

(1.13)  meas{ x ∈ [−∥ρ∥∞, ∥ρ∥∞] s.t. f′(x) = 0 } = 0,

then the solver operator S0 : (W1,∞((0, T)), ∥·∥∞) −→ (L1((0, T) × R), ∥·∥L1((0,T)×R)) is continuous.

Combining the previous results, we state the main result of this paper:

Theorem 1.9. Let ρ0 verify (1.3). Let B be a convex closed bounded subset of L1((0, T) × R) and let
I : (B, ∥·∥L1((0,T)×R)) −→ (C0([0, T], R), ∥·∥∞)
be a continuous operator. Assume that f verifies (1.13). If there exists r > 0 such that:

(1.14a)  I(B) ⊂ BW1,∞(0, r);
(1.14b)  for all ξ ∈ BW1,∞(0, r), the unique admissible solution to ρt + [sign(x − ξ(t))f(ρ)]x = 0 is in B;

then there exists (ρ, ξ), a solution to the problem (1.6) in the sense of Definition 1.5.

Remark 1.10. One can interpret B as the set where one looks for solutions to (1.6a).

The central point in applying this theorem is to construct the set B; in the applications below, two different choices of B are encountered.

1.3. Applications. We search for properties of admissible solutions in the sense of Definition 1.1 that are independent of ξ. These properties, included in the construction of B, must guarantee that I(B) verifies (1.14a), but also that B is convex, bounded and closed in L1((0, T) × R). In this subsection, we present three applications of Theorem 1.9.
First, we consider the operator I0 associated to the problem (1.4b) with affine cost function (further detailed in Section 3). Let us exhibit the construction of B1, a set satisfying the conditions (1.14a)-(1.14b) for this choice of I.
Notice that, thanks to the L1-contraction property of the admissible solution ρ, which is justified within the uniqueness proof in Section 2, we have:

(1.15)  ∥ρ(t, ·)∥L1(R) ≤ ∥ρ0∥L1(R) for all t ∈ [0, T]  ⟹  ∥ρ∥L1([0,T]×R) ≤ T∥ρ0∥L1(R).

Furthermore, we prove that for a certain fixed constant C > 0 (whose value will be made precise later), for any ξ ∈ W1,∞, a weak solution to (1.6a) in the sense of (1.8) verifies (see Lemma 3.2 and also [5]):

(1.16)  | ∫_a^b ρ(t, x) − ρ(s, x) dx | ≤ C|t − s| for all a, b ∈ R and all s, t ∈ [0, T].

Finally, considering an initial datum 0 ≤ ρ0 ≤ 1, we set:

(1.17)  B1 = { ρ ∈ BL1(0, T∥ρ0∥L1) s.t. 0 ≤ ρ ≤ 1 and ρ verifies (1.16) }.

Applying Theorem 1.9 with B1 given by (1.17), we get:

Proposition 1.11. Assume that I0 : B1 −→ C0([0, T], R) is the operator associated with equation (1.4b) with affine cost c(ρ) = 1 + αρ. If f verifies (1.13), then there exists (ρ, ξ), a solution to the problem (1.4) in the sense of Definition 1.5.

As a second case, we treat Iδ, the operator associated with a modified version of equation (1.4b) where ρ is replaced by an average density over the recent past (see (1.4b')). This modification is inspired by the use of "subjective density" in pedestrian and traffic flows, proposed, e.g., in [10] and [8, 7] (cf. Section 4, where subjective densities are used to model constrained evacuation at exits); this choice introduces an inertia effect into the agents' perception of the crowd densities. In that setting, we can prove that the image of Iδ is contained in a bounded subset of W1,∞((0, T)) without requiring the property (1.16). Consequently, we recover the global existence result for any cost c verifying (1.5), with the set B2 merely given by:
B2 = { ρ ∈ BL1(0, T∥ρ0∥L1) s.t. 0 ≤ ρ ≤ 1 }.
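The effect of averaging the density over the recent past, as in the operator Iδ, can be illustrated in discrete time: the turning point is computed not from the current density but from the mean of the last few time slices. The sketch below is our own (hypothetical names, affine cost c(ρ) = 1 + ρ for concreteness), not the paper's construction.

```python
import numpy as np
from collections import deque

def averaged_turning_points(rho_history, x, delta_steps, alpha=1.0):
    """For each time index, compute the turning point of (1.4b) from the
    'subjective' density: the mean of rho over the last delta_steps time
    slices (fewer slices are available during start-up).
    rho_history: array of shape (nt, nx), sampled on the grid x."""
    window = deque(maxlen=delta_steps)
    xis = []
    for rho in rho_history:
        window.append(rho)
        rho_avg = np.mean(np.asarray(window), axis=0)
        cost = 1.0 + alpha * rho_avg
        # cumulative cost; the turning point is where it reaches half its total
        cum = np.concatenate(([0.0],
                              np.cumsum(0.5 * (cost[1:] + cost[:-1]) * np.diff(x))))
        xis.append(np.interp(0.5 * cum[-1], cum, x))
    return np.array(xis)

x = np.linspace(-1.0, 1.0, 201)
rho_a = np.full_like(x, 0.5)          # symmetric crowd
rho_b = np.where(x < 0.0, 0.9, 0.1)   # crowd shifted to the left
history = np.array([rho_a] * 5 + [rho_b] * 5)
xis = averaged_turning_points(history, x, delta_steps=5)
```

The averaging makes ξ react with inertia: right after the density jumps at step 5, the turning point has moved only part of the way towards its new equilibrium near −0.21, and it reaches it only once the averaging window is filled with the new profile.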
As a third example, we consider Ĩǫ, the operator associated with problem (1.4b) with a relaxed equilibrium, modeling, in a way different from Iδ, an inertia effect in the interface dynamics. In this case, the set B2 also satisfies all the conditions needed to apply Theorem 1.9.
Finally, another series of applications (extending all the previous results to models with different, phenomenologically relevant behavior of agents at the exits) is provided in Section 4.

1.4. Outline. In Section 2, we prove the main results of this paper, namely Theorem 1.9, Lemma 1.6 and Proposition 1.8. These proofs hold in an abstract framework where the choices of I and B are not prescribed. Then, in Section 3, we detail the construction involving the set B1 satisfying the assumptions of Theorem 1.9 in the case of I0, the operator associated with equation (1.4b) with affine cost. We also discuss the case of a general cost satisfying (1.5) and solve it for the modified operators Iδ and Ĩǫ using the set B2. Eventually, in Section 4, we extend Theorem 1.9 to a situation with constrained evacuation at the exits x = ±1.

2. Proof of the main result. We first deduce Lemma 1.6 from the Schauder fixed-point theorem.

Proof of Lemma 1.6. We recall that, thanks to condition (1.12a), D ◦ I is well defined. Moreover, D and I are continuous, so D ◦ I is continuous from B into itself. Take any subset A of B. The set I(A) ⊂ K is relatively compact in (Y, ∥·∥Y). Since D is continuous from (K, ∥·∥Y) into (X, ∥·∥X), D ◦ I(A) is a relatively compact subset of X. Consequently, D ◦ I is a compact operator from B into itself. Furthermore, B is a bounded closed convex subset of the Banach space X. We apply the Schauder fixed-point theorem (see [25]) and conclude to the existence of a fixed point in B.
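A relaxed equilibrium of the kind behind Ĩǫ replaces the instantaneous balance by a relaxation of ξ towards it, ξ′ = (ξ* − ξ)/ǫ, which immediately yields the Lipschitz bound ∥ξ′∥∞ ≤ (∥ξ*∥∞ + ∥ξ∥∞)/ǫ independently of the cost. A minimal explicit-Euler sketch with a prescribed target curve (all names are ours; this is an illustration, not the paper's operator):

```python
def relax_turning_curve(xi0, target, dt, eps, horizon=1.0):
    """Explicit Euler for  xi'(t) = (target(t) - xi(t)) / eps:
    the turning curve relaxes towards the instantaneous equilibrium
    position target(t) with time scale eps."""
    xi, t, path = xi0, 0.0, [xi0]
    for _ in range(int(horizon / dt)):
        xi += dt / eps * (target(t) - xi)
        t += dt
        path.append(xi)
    return path

# relaxation towards a constant equilibrium position 0.3
path = relax_turning_curve(0.0, lambda t: 0.3, dt=0.001, eps=0.05)
```

With a constant target, the curve increases monotonically and reaches the equilibrium up to an error of order exp(−horizon/ǫ); the explicit step is stable as long as dt < 2ǫ.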
In order to apply Lemma 1.6 with D = S0, the solver associated with the notion of solution of Definition 1.1 (see (1.11)), we first need to check that S0 is well defined from W1,∞((0, T)) into L1((0, T) × R) when ∥ρ0∥L1(R) < +∞. This is equivalent to well-posedness for the problem (1.7).
We prove below that, thanks to the particular choice of fluxes on each side of the turning curve (emphasized in Remark 1.4), Definition 1.1 is restrictive enough to guarantee uniqueness. This notion of solution is, however, less restrictive than the one proposed in [13, 1]. It follows that both notions are equivalent, so the existence of such solutions is directly inherited from the proof found in [1]. Note that one can also prove the existence result for our notion of solution through the convergence of a finite volume scheme (we do so in Section 4, in the context of flux-limited exit behavior at the exits x = ±1).

Theorem 2.1. Let ρ, ˆρ be two entropy solutions in the sense of Definition 1.1 with initial data ρ0 (resp. ˆρ0). Let Lf be the Lipschitz constant of f. If ξ ∈ W1,∞((0, T)), we have:

for a.e. t ∈ [0, T], for all a, b ∈ R,  ∫_a^b |ρ(t, x) − ˆρ(t, x)| dx ≤ ∫_{a−Lf t}^{b+Lf t} |ρ0(x) − ˆρ0(x)| dx.

In particular, there exists at most one entropy solution associated to a given initial datum ρ0.

In order to prove this theorem, we introduce notation for the right and left strong traces of ρ along a Lipschitz curve ξ. Let ξ ∈ W1,∞((0, T), R). Then γLρ ∈ L∞((0, T)) (resp. γRρ) is such that, for any φ ∈ C0([0, 1]),

ess lim_{ǫ→0+} (1/ǫ) ∫_0^T ∫_{ξ(t)−ǫ}^{ξ(t)} |φ(ρ(t, x)) − φ(γLρ(t))| dx dt = 0

(respectively, ess lim_{ǫ→0+} (1/ǫ) ∫_0^T ∫_{ξ(t)}^{ξ(t)+ǫ} |φ(ρ(t, x)) − φ(γRρ(t))| dx dt = 0).

The existence of those traces is proven in [24].

Remark 2.2.
Generalization of the approach of the present paper to a general cost function c, for the original
+Hughes’ model, may require going below the Lipschitz regularity of ξ. In this respect, let us point out that
+the extension of the above uniqueness claim to W 1,1 regularity of ξ is feasible, while weakening the regularity
+of ξ even further presents a serious difficulty for the theory of discontinuous-flux conservation laws [4].
+Proof of Theorem 2.1. Recalling Remark 1.4, and for a more readable presentation of the proof,
+we denote fR = f and fL = −f.
+The main idea of the proof consists in using Kruzhkov’s doubling of variables technique (see [14]) on each side
+of the curve {x = ξ(t)}. Since ξ is Lipschitz continuous, we can join both pieces by taking left and right traces
+along this turning curve, following the general approach of [4, 8]. We get, for any φ ∈ D+,
+(∗) − ∫∫_Ω |ρ − ˆρ|φt + q(ρ, ˆρ)φx ≤ ∫_0^T φ(t, ξ(t)) [qR(γRρ, γRˆρ) − qL(γLρ, γLˆρ)],
+where qL,R(ρ, ˆρ) := sign(ρ − ˆρ) ( fL,R(ρ) − fL,R(ˆρ) − ˙ξ(t)(ρ − ˆρ) ).
+On the other hand, using the existence of traces, we also recover from (1.8) the Rankine–Hugoniot condition:
+(∗∗ρ) for a.e. t ∈ (0, T ), fR(γRρ(t)) − ˙ξ(t)γRρ(t) = fL(γLρ(t)) − ˙ξ(t)γLρ(t).
+We also have the analogous relation for ˆρ, which we denote (∗∗ˆρ).
+Fix t ∈ (0, T ) such that (∗∗ρ) and (∗∗ˆρ) hold. We denote the set of values for γLρ (resp. γRρ) that verify
+(∗∗ρ):
+ΓL,R := { a ∈ R s.t. ∃b ∈ R, fL,R(a) − ˙ξ(t)a = fL,R(b) − ˙ξ(t)b }.
+Due to the particular choice of the pair of fluxes (fL, fR), these sets are non-empty.
+(Figure: the sets ΓR and ΓL on the graphs of y = fL(x) − ˙ξ(t)x and y = fR(x) − ˙ξ(t)x.)
+Recalling the properties of fL and fR emphasized in Remark 1.4 and using the signs of f′L and f′R, we let the
+reader verify that, for any ˙ξ(t), x ↦ fR(x) − ˙ξ(t)x has the same monotonicity on ΓR as x ↦ fL(x) − ˙ξ(t)x
+on ΓL.
+Consequently, if (γLρ, γRρ) verifies (∗∗ρ) and (γLˆρ, γRˆρ) verifies (∗∗ˆρ),
+• sign(γRρ − γRˆρ) sign( fR(γRρ) − fR(γRˆρ) − ˙ξ(t)(γRρ − γRˆρ) )
+= sign(γLρ − γLˆρ) sign( fL(γLρ) − fL(γLˆρ) − ˙ξ(t)(γLρ − γLˆρ) );
+• (∗∗ρ)−(∗∗ˆρ) implies that
+fR(γRρ) − fR(γRˆρ) − ˙ξ(t)(γRρ − γRˆρ) = fL(γLρ) − fL(γLˆρ) − ˙ξ(t)(γLρ − γLˆρ).
+Therefore we have:
+for a.e. t ∈ (0, T ), qR(γRρ, γRˆρ) − qL(γLρ, γLˆρ) = 0.
+Consequently, from (∗), we recover the global Kato inequality: for any φ ∈ D+(Ω),
+− ∫∫ |ρ − ˆρ|φt + q(ρ, ˆρ)φx ≤ 0.
+The remaining arguments are identical to the classical Kruzhkov framework. Integrating over the trapezoid
+1[0,t](s)1[a−Lf(t−s), b+Lf(t−s)](x), Lf being the Lipschitz constant of f, we get the localized L1 contraction
+property:
+(2.1) ∫_a^b |ρ(t, x) − ˆρ(t, x)| dx ≤ ∫_{a−Lf t}^{b+Lf t} |ρ(0, x) − ˆρ(0, x)| dx.
+Consequently, the solver operator S0 is well defined from W 1,∞((0, T )) into L1((0, T ) × R). In order to apply
+Lemma 1.6 with D = S0 : (W 1,∞((0, T )), ∥ · ∥∞) −→ (L1((0, T ) × R), ∥ · ∥L1((0,T )×R)), we also show the
+continuity of this operator. Let us denote, for any a < b ∈ R, s < t ∈ [0, T ], the trapezoid:
+(2.2) T s,t a,b := { (τ, x) ∈ (0, T ) × R s.t. τ ∈ [s, t], x ∈ (a + (τ − s)Lf , b − (τ − s)Lf) },
+where Lf is the Lipschitz constant of f. We isolate the following useful lemma, which follows from (2.1).
+Lemma 2.3. Let ρ0 satisfy (1.3), ξ ∈ W 1,∞((0, T )) and ρ be the entropy solution in the sense of Definition
+1.1 to (1.7) on (0, T ) × R. Denote by ˆρ the Kruzhkov entropy solution on (s, t) × R to¹
+ˆρt + f(ˆρ)x = 0, ˆρ(s, ·) = ρ(s, ·)1(a,b)(·).
+Then, for any a < b ∈ R, s < t ∈ [0, T ], there holds
+(2.3) T s,t a,b ⊂ {x > ξ(t)} =⇒ ρ = ˆρ a.e. on T s,t a,b.
+Proof. This lemma immediately follows from (2.1).
+We now prove Proposition 1.8 using this lemma.
+Proof of Proposition 1.8. Consider (ξn)n∈N and ξ ∈ W 1,∞((0, T )) such that ∥ξn − ξ∥∞ −→ 0. We denote
+ρn := S0(ξn).
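As a numerical aside, the localized L1 contraction (2.1) can be observed on a monotone scheme in the pure Kruzhkov setting, away from the turning curve. The sketch below uses a hypothetical concave flux f(ρ) = ρ(1 − ρ) with the standard Godunov flux; the grid, data and perturbation are illustrative choices of ours, not the paper's.

```python
import numpy as np

def godunov_step(rho, dt, dx, f, rho_bar=0.5):
    """One step of the monotone Godunov scheme for rho_t + f(rho)_x = 0,
    using the standard flux formula for a concave f maximized at rho_bar."""
    a, b = rho[:-1], rho[1:]
    flux = np.minimum(f(np.minimum(a, rho_bar)), f(np.maximum(b, rho_bar)))
    out = rho.copy()
    out[1:-1] -= dt / dx * (flux[1:] - flux[:-1])
    return out

f = lambda p: p * (1.0 - p)            # illustrative concave flux, f(0) = f(1) = 0
x = np.linspace(-2.0, 2.0, 400)
dx = x[1] - x[0]
dt = 0.4 * dx                          # CFL condition: max |f'| <= 1
rho = np.where(np.abs(x) < 1.0, 0.8, 0.0)
# perturb the data on a small set, in a non-ordered way
rho_hat = rho + np.where((x > -0.2) & (x < 0.0), -0.3, 0.0) \
              + np.where((x > 0.0) & (x < 0.2), 0.15, 0.0)
d0 = np.sum(np.abs(rho - rho_hat)) * dx
t = 0.0
while t < 0.5:                         # waves stay inside the domain up to t = 0.5
    rho = godunov_step(rho, dt, dx, f)
    rho_hat = godunov_step(rho_hat, dt, dx, f)
    t += dt
d1 = np.sum(np.abs(rho - rho_hat)) * dx
```

Monotone conservative schemes are L1-contractive (Crandall–Tartar), so the discrete distance d1 never exceeds d0, mirroring the continuous property (2.1).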
Let K be a compact subset of {x > ξ(t)}. Let ǫ > 0 be such that K ⊂ {x > ξ(t) + ǫ}.
+We cover K by a finite number of trapezoids of the form (2.2). Without loss of generality, we can suppose
+that each trapezoid is contained in {x > ξ(t) + ǫ}:
+K ⊂ ∪_{i∈I} T si,ti ai,bi ⊂ {x > ξ(t) + ǫ}, Card(I) < +∞.
+Since ∥ξn − ξ∥∞ −→ 0, for any ǫ > 0 there exists n0 ∈ N such that ∀t ∈ [0, T ], n ≥ n0 ⇒ |ξn(t) − ξ(t)| ≤ ǫ.
+This implies ξn(t) ∈ [ξ(t) − ǫ; ξ(t) + ǫ]. Then,
+(2.4) ∀x ∈ R\[ξ(t) − ǫ; ξ(t) + ǫ], sign(x − ξn(t)) = sign(x − ξ(t)).
+Then, for such an n0 and any n ≥ n0, each trapezoid T si,ti ai,bi ⊂ {x > ξn(t)}. Using Lemma 2.3, for any n ≥ n0,
+ρn is equal almost everywhere on T si,ti ai,bi to the Kruzhkov entropy solution of:
+ρt + f(ρ)x = 0, ρ(si, ·) = ρn(si, ·)1(ai,bi)(·).
+¹Here ρ(s, ·) is understood in view of s being a Lebesgue point of ρ ∈ L∞((0, T), L1(R)). Recalling Remark 1.3, this is in
+fact true for any s ∈ [0, T].
+We are now in a position to apply the averaging compactness lemma (see Theorem 5.4.1 in [19]) on the
+trapezoid T s0,t0 a0,b0. We get a subsequence (ρnk)k∈N that converges in L1(T s0,t0 a0,b0). We then apply the averaging
+compactness lemma with (ρnk)k on T s1,t1 a1,b1. Repeating this process for each i ∈ I, we recover a subsequence
+(ρnj)j that converges in L1(∪_{i∈I} T si,ti ai,bi). Then (ρnj)j converges in L1(K).
+To conclude, we point out that this reasoning holds for any compact K ⊂ {x > ξ(t)}. The same is true for
+compact subsets of {x < ξ(t)}. Since ξ is Lipschitz, meas({x = ξ(t)}) = 0. Consequently, there exists a
+subsequence (ρnk) that converges almost everywhere on (0, T ) × R and in L1loc((0, T ) × R). Moreover, we
+have ρnk −→ ρ in L1((0, T ) × R) because, for [a, b] ∩ [−1, 1] = ∅, ρn = 0 on T 0,T a,b, due to the choice of ρ0
+verifying (1.3). Now, ρ is actually S0(ξ).
Indeed, recall that ρ has no admissibility condition to satisfy on {x = ξ(t)} beyond
+the Rankine–Hugoniot relation. Then, we can pass to the limit in the entropy inequalities (1.9) (where, for
+n large enough, the support of the test function does not intersect the curve {x = ξn(t)} for t ∈ [0, T ]) and
+pass to the limit in (1.8) by dominated convergence.
+This reasoning can be reproduced for any subsequence of (ρn)n. By a classical compactness argument, since
+every converging subsequence (S0(ξnk))k∈N converges to S0(ξ), the whole sequence (S0(ξn))n converges in
+L1 to S0(ξ). So S0 : (W 1,∞((0, T )), ∥ · ∥∞) −→ (L1((0, T ) × R), ∥ · ∥L1((0,T )×R)) is continuous.
+We now combine all the previous results to obtain existence of a solution in the sense of Definition 1.5.
+Proof of Theorem 1.9. Suppose there exists r > 0 such that (1.14a)-(1.14b) are verified.
+Using the notations of Theorem 1.6 we take:
+• Y = (C0([0, T ]), ∥ · ∥∞);
+• X = (L1((0, T ) × R), ∥ · ∥L1((0,T )×R));
+• K the compact subset of C0([0, T ]) obtained as the image of BW 1,∞(0, r) under the standard embedding.
+Using Proposition 1.8 and Theorem 2.1, we know that S0 : (K, ∥ · ∥Y ) −→ (X, ∥ · ∥X) is well defined and
+continuous. Further, notice that condition (1.14a) is equivalent to (1.12a) and that condition (1.14b) implies
+(1.12b). We are now in a position to use Lemma 1.6, and we conclude that there exists a solution to (1.6) in
+the sense of Definition 1.5.
+3. Lipschitz continuity of the turning curve: examples. In this section, we enumerate
+examples of the abstract problem (1.6)
+ρt + [sign(x − ξ(t))f(ρ)]x = 0, ρ(0, x) = ρ0(x), ξ = I(ρ),
+for which we can construct a set B such that the prescribed operator I satisfies the properties required to
+apply Theorem 1.9; this includes the original Hughes’ model (1.4) with affine cost and its modifications,
+taking into account time-inertia effects and allowing for general costs.
Note that further examples, with modified exit conditions, are considered in Section 4. For each such
+example, we exhibit the construction of this set; consequently, we get existence of a solution in the sense of
+Definition 1.5 in those situations.
+3.1. Hughes’ model with affine cost. We first consider the model (1.4):
+ρt + [sign(x − ξ(t))ρv(ρ)]x = 0,
+∫_{−1}^{ξ(t)} c(ρ(t, x)) dx = ∫_{ξ(t)}^{1} c(ρ(t, x)) dx,
+with initial datum satisfying (1.3), where we choose, for some α > 0,
+(3.3) c(p) = 1 + αp.
+First, let us recall the definition of the set B1 constructed in the introduction:
+(1.17) B1 = { ρ ∈ BL1(0, T ∥ρ0∥L1) s.t. 0 ≤ ρ ≤ 1 and ρ verifies (1.16) }.
+In this setup, we have the following proposition:
+Proposition 3.1. Assume the cost is given by (3.3). Then the following properties hold:
+1. For any ξ ∈ W 1,∞((0, T )), S0(ξ) ∈ B1.
+2. There exists r > 0 such that, for any ρ ∈ B1, there exists a unique solution ξ ∈ BW 1,∞(0, r) to
+(1.4b). We denote by I0 the operator that maps ρ ∈ B1 to ξ, the unique solution to (1.4b). Consequently,
+this operator is well defined and single-valued.
+3. I0 : (B1, ∥ · ∥L1((0,T )×R)) −→ (W 1,∞([0, T ]), ∥ · ∥∞) is continuous.
+4. B1 is closed, convex and bounded in L1((0, T ) × R).
+Consequently, I0 verifies (1.14a)-(1.14b) for the set B1. We apply Theorem 1.9 and get the desired existence
+of a solution for the problem (1.4) with affine cost (3.3). This proves Proposition 1.11.
+In order to prove Proposition 3.1, we rely on two lemmas, which we isolate in order to reuse them in the
+other examples.
+Lemma 3.2. Let a, b ∈ R, a < b. Let s, t ∈ [0, T ], s < t. Fix ξ ∈ W 1,∞((0, T )). We denote by ρ a solution in
+the sense of Definition 1.1. Then, there exists C > 0, independent of a, b, s, t, ξ and ρ, such that:
+(3.4) | ∫_a^b ρ(t, x) − ρ(s, x) dx | ≤ C|t − s|.
+We recall that there is no ambiguity in considering ρ(t, ·) since ρ ∈ C0([0, T ], L1(R)) (see Remark 1.3).
+Proof of Lemma 3.2. Let (κn)n∈N be a mollifier. We set +Ψ(τ, x) := 1[a,b](x)1[s,t](τ) +and φ(τ, x) := Ψ ∗ κn(τ, x). +Using φ as test function in (1.8), making n −→ +∞ we get: +� b +a +ρ(s, x) − ρ(t, x) dx + +� t +s +F(τ, a, ρ(τ, a)) − F(τ, b, ρ(τ, b)) dτ = 0 +Consequently, +����� +� b +a +ρ(t, x) − ρ(s, x) dx +����� ≤ +���� +� t +s +F(τ, a, ρ(τ, a)) − F(τ, b, ρ(τ, b)) dτ +���� ≤ +� +2 sup +p∈[0,1] +|f(p)| +� +|t − s| +Lemma 3.3. Let s < t ∈ [0, T ]. Let ξ be a solution to (1.4b). We denote +¯ +ξ := min(ξ(t), ξ(s)) and ¯ξ := +max(ξ(t), ξ(s)). Then +(3.5) +2 |ξ(t) − ξ(s)| ≤ +����� +� +¯ +ξ +−1 +c(ρ(t, x)) − c(ρ(s, x)) dx − +� 1 +¯ξ +c(ρ(t, x)) − c(ρ(s, x)) dx +����� +Proof of Lemma 3.3. We first treat the case ξ(s) ≤ ξ(t). +We have: +� ξ(s) +−1 +c(ρ(s, x)) dx = +� ξ(t) +ξ(s) +c(ρ(s, x)) dx + +� 1 +ξ(t) +c(ρ(s, x)) dx +� ξ(s) +−1 +c(ρ(t, x)) dx = − +� ξ(t) +ξ(s) +c(ρ(t, x)) dx + +� 1 +ξ(t) +c(ρ(t, x)) dx +If we substract both equalities, +� ξ(t) +ξ(s) +c(ρ(s, x)) + c(ρ(t, x)) dx = +� ξ(s) +−1 +c(ρ(s, x)) − c(ρ(t, x)) dx − +� 1 +ξ(t) +c(ρ(s, x)) − c(ρ(t, x)) dx + +AN EXISTENCE RESULT FOR HUGHES’ MODEL +11 +On the contrary, if ξ(s) ≥ ξ(t), with an analogous argument we get: +� ξ(s) +ξ(t) +c(ρ(s, x)) + c(ρ(t, x)) dx = +� ξ(t) +−1 +c(ρ(t, x)) − c(ρ(s, x)) dx − +� 1 +ξ(s) +c(ρ(t, x)) − c(ρ(s, x)) dx +Using the fact that c ≥ 1 we get: +2|ξ(t) − ξ(s)| = 2(¯ξ − +¯ +ξ) +≤ +� ¯ξ +¯ +ξ +c(ρ(s, x)) + c(ρ(t, x)) dx ≤ +����� +� +¯ +ξ +−1 +c(ρ(s, x)) − c(ρ(t, x)) dx − +� 1 +¯ξ +c(ρ(s, x)) − c(ρ(t, x)) dx +����� +We are now ready to prove Proposition 3.1. +Proof of Proposition 3.1. First, consider ρ0 satisfying (1.3). Using ˆρ = 0 in (2.1), we prove that for all t in +[0, T ], ∥ρ(t, ·)∥L1(R) ≤ ∥ρ0∥L1(R). This readily yields: +∥ρ∥L1([0,T ]×R) ≤ T ∥ρ0∥L1(R). +(1.15) +Combining this result with Lemma 3.2, we prove the first assertion of Proposition 3.1. +Second, fix ρ ∈ B1. We prove existence and uniqueness of ξ ∈ L∞([0, T ]) satisfying (1.4b) for any t ∈ [0, T ]. 
+Let t ∈ [0, T ]; we set:
+Ψ+(a) := ∫_{−1}^{a} c(ρ(t, x)) dx, Ψ−(a) := ∫_{a}^{1} c(ρ(t, x)) dx.
+One can notice that, because c > 0, Ψ+ is a continuous strictly increasing function, while Ψ− is continuous
+and strictly decreasing on [−1, 1]. Therefore, a ↦ Ψ+(a) − Ψ−(a) is continuous, strictly increasing, negative
+at a = −1 and positive at a = 1. Consequently, there exists exactly one ˜a ∈ (−1, 1) such that Ψ+(˜a) = Ψ−(˜a).
+This can be done for any t ∈ [0, T ]. Consequently, we get existence and uniqueness of ξ ∈ L∞.
+We now prove that ξ ∈ W 1,∞([0, T ]). Using Lemma 3.3, with ξ̲ = min(ξ(t), ξ(s)) and ξ̄ = max(ξ(t), ξ(s)),
+we get:
+2 |ξ(t) − ξ(s)| ≤ | ∫_{−1}^{ξ̲} c(ρ(t, x)) − c(ρ(s, x)) dx − ∫_{ξ̄}^{1} c(ρ(t, x)) − c(ρ(s, x)) dx |
+≤ α | ∫_{−1}^{ξ̲} ρ(t, x) − ρ(s, x) dx | + α | ∫_{ξ̄}^{1} ρ(t, x) − ρ(s, x) dx |.
+And using Lemma 3.2, with the choice (3.3) of the cost, we get:
+2 |ξ(t) − ξ(s)| ≤ 2αC |t − s|.
+We conclude that, taking r = αC, ξ always lies in BW 1,∞(0, r).
+We now prove the continuity of the operator I0. Let us consider ρ, ρn ∈ B1. Then, for a given t ∈ [0, T ],
+using (1.4b) for both ξ := I0(ρ) and ξn := I0(ρn), we recover:
+∫_{ξn(t)}^{ξ(t)} c(ρ) + ∫_{−1}^{ξn(t)} c(ρ) − ∫_{−1}^{ξn(t)} c(ρn) = ∫_{ξ(t)}^{ξn(t)} c(ρ) + ∫_{ξn(t)}^{1} c(ρ) − ∫_{ξn(t)}^{1} c(ρn).
+And rearranging the integrals, we get:
+2 ∫_{ξn(t)}^{ξ(t)} c(ρ) = ∫_{−1}^{1} [c(ρ) − c(ρn)] sign(x − ξn(t)).
+Notice that
+∫_0^T |ξ − ξn| ≤ ∫_0^T | ∫_{ξ(t)}^{ξn(t)} c(ρ) | ≤ (1/2) ∫_0^T | ∫_{−1}^{1} sign(x − ξn(t)) [c(ρ) − c(ρn)] |
+≤ (1/2) ∫_0^T ∫_{−1}^{1} |c(ρ) − c(ρn)| ≤ (α/2) ∫_0^T ∫_{−1}^{1} |ρ − ρn|.
+Consequently, if ∥ρ − ρn∥L1((0,T )×R) −→ 0, then ∥ξ − ξn∥L1((0,T )) −→ 0.
+We recall that ξ, ξn ∈ I0(B1) are r-Lipschitz. On any open subset of [0, T ] there exists a point t where the
+continuous function ξ(·) − ξn(·) is less than or equal to its L1-average.
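The monotonicity argument above (Ψ+ strictly increasing, Ψ− strictly decreasing) also suggests a simple numerical procedure for locating the turning point at a fixed time: a bisection on a ↦ Ψ+(a) − Ψ−(a). The sketch below assumes a density sampled on a uniform grid and the affine cost (3.3); the function name and discretization are ours, not the paper's.

```python
import numpy as np

def turning_point(rho, x, alpha=1.0, tol=1e-10):
    """Locate the unique a in (-1, 1) with Psi_plus(a) = Psi_minus(a), where
    Psi_plus(a) = int_{-1}^{a} c(rho(x)) dx and Psi_minus(a) = int_{a}^{1} c(rho(x)) dx,
    for the affine cost c(p) = 1 + alpha*p of (3.3).  `rho` holds samples of the
    density at the uniform grid points `x` on [-1, 1]."""
    c = 1.0 + alpha * rho
    dx = x[1] - x[0]

    def gap(a):
        # discrete Psi_plus(a) - Psi_minus(a): nondecreasing in a, changes sign once
        return (np.sum(c[x <= a]) - np.sum(c[x > a])) * dx

    lo, hi = -1.0, 1.0                 # gap(lo) < 0 < gap(hi) since c > 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if gap(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With piecewise-constant data the discrete gap is a step function, so the bisection locates the balance point only up to one grid cell, which is all one can ask of sampled data.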
Using the fact that [0, T ] can be covered +by a finite ǫ-network and that the derivative of ξ(·) − ξn(·) is bounded on this network, we recover that +∥ξ − ξn∥∞ −→ 0 when ∥ρ − ρn∥L1((0,T )×R) −→ 0. This proves the third point of Proposition 3.1. +Eventually, let ρ1, ρ2 ∈ B1, λ ∈ [0, 1]; it is readily checked that λρ1 + (1 − λ)ρ2 still satisfies (3.4). Then B1 +is convex. It is also readily checked that we can pass to the L1((0, T ) × R) limit in (3.4), proving that B1 is +closed. By construction B1 is bounded. That ends the proof of Proposition 3.1. +3.2. The general cost case evaluated for a subjective density. In the same setup (1.4), let’s +further prospect the situation for a cost function c verifying (1.5). Most of the items of Proposition 3.1 hold +with the set B1. The first point is independent of the nature of c. The third point proof still holds with +general cost if the second point holds. Proof of existence and unicity of ξ ∈ L∞((0, T )) is still valid. In fact, +the main issue lies in proving that ξ is Lipschitz for any ρ in a given set B. +In order to explore this issue, let’s start from Lemma 3.3 estimate (3.5): +2 |ξ(t) − ξ(s)| ≤ +����� +� +¯ +ξ +−1 +c(ρ(t, x)) − c(ρ(s, x)) dx − +� 1 +¯ξ +c(ρ(t, x)) − c(ρ(s, x)) dx +����� +Recall that c satisfies (1.5). We set ¯α := esssupu∈[0,1] c′(u), ¯α := essinfu∈[0,1] c′(u) > 0. Using the negative +and positive parts of (ρ(t, ·) − ρ(s, ·)), rearranging the terms we get the following estimate: +2 |ξ(t) − ξ(s)| ≤ +� ¯α + ¯α +2 +� ����� +� +¯ +ξ +−1 +ρ(t, x) − ρ(s, x) dx − +� 1 +¯ξ +ρ(t, x) − ρ(s, x) dx +����� ++ +� ¯α − ¯α +2 +� � 1 +−1 +|ρ(t, x) − ρ(s, x)| dx =: I1 + I2 +(3.6) +The first term I1 of the right member is controlled by the estimate of Lemma 3.2. The issue lies in controlling +the second term I2. This suggests that, in order to prove that ξ ∈ W 1,∞((0, T )) we need an estimate of +the modulus of continuity of ρ as an element of C0([0, T ], L1(R)). 
While the standard Oleinik regularizing +effect can be used locally away from the turning curve (see [5]), in a vicinity of the turning curve the spatial +variation of ρ may not be controlled; moreover, (ir)regularity of the turning curve itself impacts the modulus +of continuity of ρ, making it an open question how to control time variations of ρ. We leave this issue for +future research. +However, we can treat a natural modification of problem (1.4) for which the method applied for the affine +cost (3.3) extends to general costs. Let R : L1((−∞, T )) −→ L1((0, T )) be the operator defined by: +(3.7) +R[ρ(·, x)](t) := δ +� t +−∞ +ρ(s, x)e−δ(t−s) ds +To make this operator well defined, we extend ρ by ρ(t) = ρ0 for any t ∈ [−∞, 0]. This model corresponds to +a memory effect in individual’s perception of the density; R[ρ] is a subjective density perceived by an agent + +AN EXISTENCE RESULT FOR HUGHES’ MODEL +13 +making decision to move towards the most appropriate exit. Thus, we consider the problem: +(1.4a) +(1.4b’) + + + + + +ρt + [sign(x − ξ(t))ρv(ρ)]x = 0 +� ξ(t) +−1 +c(R[ρ(·, x)](t))dx = +� 1 +ξ(t) +c(R[ρ(·, x)](t))dx, +with c verifying (1.5), and with initial datum satisfying (1.3). +Equation (1.4b’) takes into account the average density over the recent past instead of the instantaneous +density at a time t. This models the bias, due to some inertia of human thinking, towards perception of +the density for the pedestrians in the corridor; the quantity R[ρ(·, x)] can be compared to other “subjective +densities” used in the literature (cf. [10], [8, 7]). With the same calculations as (3.6), we recover the term +I2 = +� 1 +−1 +���R[ρ(·, x)](t) − R[ρ(·, x)](s) +��� dx, +which is controlled by 2δ∥ρ∥L∞|t−s|, a bound for the modulus of continuity of R[ρ(·, x)]. For I1 we can pass +the absolute value inside the integral. Then I1 is also controlled by the modulus of continuity of R[ρ(·, x)]. +Notice that we don’t need the property (1.16) for this reasoning. 
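The subjective density (3.7) is an exponential moving average of the history of ρ: it solves the ODE ∂tR = δ(ρ − R) with R(0) = ρ0, since ρ is extended by its initial value for t ≤ 0. For ρ piecewise constant in time, the exponential kernel integrates exactly into a one-line recursion, sketched below (a minimal discrete illustration; the function name and parameters are ours).

```python
import numpy as np

def subjective_density(rho, delta, dt):
    """Discrete sketch of R[rho](t) = delta * int_{-inf}^{t} rho(s) e^{-delta(t-s)} ds,
    the subjective density (3.7), at a fixed location x.  `rho` is an array of
    time samples rho[n] = rho(t_n); the extension rho = rho[0] for t <= 0 gives
    the initialization R[0] = rho[0]."""
    w = np.exp(-delta * dt)            # exponential memory weight over one time cell
    R = np.empty_like(rho, dtype=float)
    R[0] = rho[0]
    for n in range(1, len(rho)):
        # convex combination: old memory w*R + fresh observation (1-w)*rho
        R[n] = w * R[n - 1] + (1.0 - w) * rho[n - 1]
    return R
```

Each update is a convex combination, so a constant-in-time density is a fixed point of R and the bounds 0 ≤ ρ ≤ 1 propagate to 0 ≤ R ≤ 1, consistent with the role of R[ρ] as a perceived density.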
Consequently, we define: +(3.9) +B2 = {ρ ∈ BL1(0, T ∥ρ0∥L1) s.t. 0 ≤ ρ ≤ 1} . +Then, Iδ : (B2, ∥ · ∥L1((0,T )×R)) −→ (W 1,∞((0, T )), ∥ · ∥∞), ρ �→ ξ where ξ is defined by (1.4b’) with R given +by (3.7), is well defined. The analogue of Proposition 3.1 - where we use Iδ instead of I0, we use B2 instead +of B1 and we drop the assumption of affine cost - is easily justified. In particular, the proof for the third +item of this analogue of Proposition 3.1 holds with these choices. Thus, without the restriction (3.3) on the +cost, we have the following claim: +Proposition 3.4. Let ρ0 satisfy (1.3). Let c verifying (1.5). Then problem (1.6a)-(1.6b)-(1.4b’) admits at +least one solution. +3.3. The general cost case with relaxed equilibrium. We consider (1.6) with a modified equi- +librium equation (1.4b). This time, we suppose that collective behavior of pedestrians makes appear some +amount of inertia in the dynamics of ξ. Fixing ǫ > 0, we consider as a simplest variant of such dynamics the +ODE Cauchy problem +(3.10a) +(3.10b) + + + + + + + + + + + +−ǫ ˙ξ(t) = +� 1 +ξ(t) +c(ρ(t, x))dx − +� ξ(t) +−1 +c(ρ(t, x))dx +� 1 +ξ(0) +c(ρ0(x))dx − +� ξ(0) +−1 +c(ρ0(x))dx = 0. +for the ρ-driven evolution of the turning curve ξ. Formally, the case ǫ = 0+ corresponds to the standard +Hughes’s relation between the density and the turning curve; ǫ > 0 models a form of relaxation to the +equilibrium given by this standard model. The primitive form of the Hughes’ model, where the position of +the turning curve is determined by an instantaneous Hamilton-Jacobi equation, should be modified to fit +this dynamics of the turning curve; this modeling issue will be discussed elsewhere. +Proposition 3.5. Let ρ ∈ L1((0, T )×R). Let c verifying the conditions (1.5). There exists a unique solution +ξ to the Cauchy problem (3.10). Furthermore, ξ is Lipschitz and the Lipschitz constant is independent of ρ. +Proof. 
Let’s denote: +Ψ(t, a) := 1 +ǫ +�� 1 +a +c(ρ(t, x))dx − +� a +−1 +c(ρ(t, x))dx +� +. +Notice that for any a, b ∈ [−1, 1], t ∈ R, +|Ψ(t, a) − Ψ(t, b)| ≤ 1 +ǫ +����� +� b +a +2c(ρ(t, x)) dx +����� ≤ 2∥c∥∞ +ǫ +|a − b|. +(3.11) + +14 +B. ANDREIANOV, T. GIRARD +We also have, for any ξ such that ∥ξ∥∞ ≤ 1: +|Ψ(t, ξ(t))| ≤ 1 +ǫ +���� +� 1 +−1 +sign(x − ξ(t))c(ρ(t, x)) dx +���� ≤ 2∥c∥∞ +ǫ +So Ψ is Lipschitz with respect to the a variable and uniformly bounded with respect to the t variable. We +apply the Cauchy-Lipschitz Theorem and recover that there exists a unique local solution to the Cauchy +problem (3.10). Using (3.11), we recover that the solution is global on [0, T ] and that ξ is Lipschitz; moreover, +the Lipschitz constant of ξ does not depend on ρ. +Remark 3.6. From Proposition 3.5, it follows that +�Iǫ : L1((0, T ) × R, [0, 1]) −→ W 1,∞((0, T )) +that maps any to ρ to the unique ξ solution to (3.10) is well defined. +Proposition 3.7. Let ρ1, ρ2 ∈ L1((0, T ) × R). Let’s denote ξ1,2 := �Iǫ(ρ1,2). Then, +(3.12) +∥ξ1 − ξ2∥∞ ≤ ∥c′∥∞ +ǫ +exp +�2T ∥c∥∞ +ǫ +� +∥ρ1 − ρ2∥L1((0,T )×(−1,1)) +Proof. We denote ξ0 the unique solution to (3.10b). Then, for any t ∈ [0, T ]: +ξ1,2 = ξ0 − +� t +0 +Ψ1,2(s, ξ1,2(s)) ds +Then, writing ∨, ∧ for min, max, repsectively, we make the following calculations: +ξ2(t) − ξ1(t) += +� t +0 +Ψ1(s, ξ1(s)) − Ψ2(s, ξ2(s)) ds += 1 +ǫ +� t +0 +�� ξ1(s) +−1 +c(ρ1(s, x)) dx − +� 1 +ξ1(s) +c(ρ1(s, x)) dx − +� ξ2(s) +−1 +c(ρ2(s, x)) dx + +� 1 +ξ2(s) +c(ρ2(s, x)) dx +� +ds += 1 +ǫ +� t +0 +�� (ξ1∨ξ2)(s) +−1 +c(ρ1(s, x)) − c(ρ2(s, x)) dx ± +� (ξ1∧ξ2)(s) +(ξ1∨ξ2)(s) +c(ρ1(s, x)) + c(ρ2(s, x)) dx ++ +� 1 +(ξ1∧ξ2)(s) +c(ρ2(s, x)) − c(ρ1(s, x)) dx +� +ds +And consequently, +|ξ1(t) − ξ2(t)| ≤ 1 +ǫ +� t +0 +� (ξ1∧ξ2)(s) +(ξ1∨ξ2)(s) +c(ρ1(s, x)) + c(ρ2(s, x)) dx ds ++ 1 +ǫ +� t +0 +� 1 +−1 +|c(ρ1(s, x)) − c(ρ2(s, x))| ds dx =: J1 + J2. +For the term J2 we can use the Lagrange inequality denoting ∥c′∥∞ := supp∈[0,1] |c′(p)|. 
We get: +J2 ≤ ∥c′∥∞ +ǫ +∥ρ1 − ρ2∥L1((0,T )×(−1,1)). +For the the term J1, notice that, thanks to the cost conditions (1.5), for any s ∈ [0, t], +2|ξ1(s) − ξ2(s)| ≤ +� (ξ1∧ξ2)(s) +(ξ1∨ξ2)(s) +c(ρ1(s, x)) + c(ρ2(s, x)) dx ≤ 2∥c∥∞|ξ1(s) − ξ2(s)| + +AN EXISTENCE RESULT FOR HUGHES’ MODEL +15 +Consequently for any s ∈ [0, T ], there exists β(s) ∈ [2 , 2 ∥c∥∞] such that +� (ξ1∧ξ2)(s) +(ξ1∨ξ2)(s) +c(ρ1(s, x)) + c(ρ2(s, x)) dx = β(s)|ξ1(s) − ξ2(s)|. +Then β ∈ L∞((0, T )) ⊂ L1((0, T )). We are now in a position to use Gronwall’s inequality with integrable +coefficients. That inequality still holds without the continuity of β if we use the Lebesgue differentiation +Theorem. We thus reach to +|ξ1(t) − ξ2(t)| ≤ +� t +0 +β(s) +ǫ +|ξ1(s) − ξ2(s)| ds + ∥c′∥∞ +ǫ +∥ρ1 − ρ2∥L1 +which yields the subsequent estimates +|ξ1(t) − ξ2(t)| ≤ ∥c′∥∞ +ǫ +∥ρ1 − ρ2∥L1 exp +�� t +0 +β(s) +ǫ +ds +� +, +∥ξ1 − ξ2∥∞ ≤ ∥c′∥∞ +ǫ +exp +�2T ∥c∥∞ +ǫ +� +∥ρ1 − ρ2∥L1 +Remark 3.8. One can check that, in the relaxed equilibrium setting, we never used any property of ρ apart +from the universal bounds 0 ≤ ρ ≤ 1. Consequently, in this case we also use: +(3.9) +B2 = {ρ ∈ BL1(0, T ∥ρ0∥L1) s.t. 0 ≤ ρ ≤ 1} +Here’s the final result in this relaxed equilibrium setting: +Proposition 3.9. Let ρ0 satisfy (1.3). Let c verifying (1.5). Then problem (1.6a)-(1.6b)-(3.10) admits at +least one solution. +Proof. We only have to apply Corollary 1.9 with B2 as a B set and check that, using Propositions 3.5 and +3.7, all the assumptions on �Iǫ are satisfied. +4. Hughes’ model with constrained evacuation at exit. In this section, we illustrate the robust- +ness of our approach by modifying the Hughes model at the level of boundary conditions for the density, +allowing for the realistic feature of capacity drop (see [8, 7] and references therein). 
We consider the following +dynamics for ρ introduced in [8] on the basis of the theory of [11, 3]: +(4.1a) +(4.1b) +(4.1c) +(4.1d) + + + + + + + + + + + + + + + + + + + +ρt+ [sign(x − ξ(t))f(ρ)]x = 0 +f(ρ(t, 1)) ≤ g +�� 1 +σ +w1(x)ρ(t, x) dx +� +f(ρ(t, −1)) ≤ g +�� −σ +−1 +w−1(x)ρ(t, x) dx +� +ρ(0, ·) = ρ0(·). +The equations (4.1b)-(4.1c) prescribe the behaviour at exits situated at x = ±1; as in previous sections, +we set up the conservation law for ρ in the whole space, but the initial condition (1.3) is confined to the +domain of interest (−1, 1). The flux f(ρ) of pedestrian going through the exits is limited by respective +constraints (we take a common nonlinearity g for the sake of conciseness, but it is straightforward to extend +the setting distinguishing g1 and g−1). This flux limiter g depends non locally of ρ(t, ·) and of a weight w +supported in a vicinity of length 1 − σ around the exits. This type of constraint models the well-known +phenomenon of capacity drop which, in extreme situations, corresponds to a panic behaviour at exits located +at x = ±1, as discussed in [8] and [7]. This model, allowing to consider constrained evacuation at exits, is +phenomenologically more relevant than the model with open-end condition considered above (and it includes +the previous model, for the trivial choice g ≡ max[0,1] f, see Remark 4.3). As an example, this constrained +evacuation model is able to reproduce the “Faster is Slower” effect at exits (see [7]). +In the following, we’ll use the results of [7] and adapt them to our framework. We use the notations proposed +in this paper: + +16 +B. ANDREIANOV, T. GIRARD +• Since f is concave positive such that f(0) = f(1), there exists a ¯ρ ∈ [0, 1] such that f ′(ρ)(¯ρ − ρ) > 0 +for a.e. ρ ∈ [0, 1]. +• We fix σ ∈ (0, 1). This is the threshold of influence on the exit, meaning that the pedestrian located +before x = σ have no influence on the exit congestion at x = 1. 
+Let us take the strongest assumptions used in [8, 7]: +� +w1 ∈ W 1,∞((σ, 1], R+) s.t. +� 1 +σ w1 = 1 +w−1 ∈ W 1,∞([−1, −σ), R+) s.t. +� −σ +−1 w−1 = 1 +(4.2) +g ∈ W 1,∞(R+, (0, f(¯ρ)]) is non-increasing. +(4.3) +We can now introduce the notion of solution we’ll use for ρ combining the one in [11] and Definition 1.1: +Definition 4.1. Let ξ ∈ W 1,∞((0, T ), (−1, 1)). Let ρ0 ∈ L1(R, [0, 1]) supported in [−1, 1]. Let f be a con- +cave positive flux such that f(0) = 0 = f(1) and F(t, x, ρ) := sign(x − ξ(t))f(ρ). Let g, ω−1 and ω1 satisfy +(4.2)-(4.3). +We say that ρ ∈ L1((0, T ) × R) is an admissible solution to (4.1) if: +for all φ ∈ C∞ +c ((0, T ) × R), +(4.4) +�� +(0,T )×R +ρφt + F(t, x, ρ)φx dt dx = 0, +moreover, setting +Q−1(t) := g +�� −σ +−1 +w−1(x)ρ(t, x) dx +� +, Q1(t) := g +�� 1 +σ +w1(x)ρ(t, x) dx +� +, +(4.5) +there holds: +• For all positive φ ∈ C∞ − c({x > ξ(t)}), for all k ∈ R, +− +�� +(0,T )×R +|ρ − k| φt + q(ρ, k)φx dt dx − 2 +� T +0 +� +1 − Q1(t) +f(¯ρ) +� +f(k)φ(t, 1) dx − +� +R +|ρ0 − k|φ(0, x) dx ≤ 0. +(4.6) +• For all positive φ ∈ C∞ +c ({x < ξ(t)}), for all k ∈ R, +− +�� +(0,T )×R +|ρ − k| φt + q(ρ, k)φx dt dx − 2 +� T +0 +� +1 − Q−1(t) +f(¯ρ) +� +(−f(k)) φ(t, −1) dx − +� +R +|ρ0 − k|φ(0, x) dx ≤ 0. +(4.7) +• For all positive φ ∈ C∞ supported on [a, b] such that a < −1, 1 < b we have: +� T +0 +� −1 +a +ρφt + F(t, x, ρ)φx dt dx ≤ +� T +0 +Q−1(t)φ(t, −1) dt +(4.8a) +� T +0 +� b +1 +ρφt + F(t, x, ρ)φx dt dx ≤ +� T +0 +Q1(t)φ(t, 1) dt +(4.8b) +Remark 4.2. As detailled in [3], equations (4.8) combined with the weak solution property (4.4) imply that +for a.e. t ≥ 0, f(γ1 +L,Rρ(t)) ≤ Q1(t) and −f(γ−1 +L,Rρ(t)) ≥ −Q−1(t). This corresponds to the expected limited +flux condition. +Remark 4.3. One can notice that if for all t ≥ 0, g(t) = f(¯ρ) then the flux is not limited at exits and +1 − Q1(t) +f(¯ρ) = 1 − Q−1(t) +f(¯ρ) += 0. Then, this definition is exactly Definition 1.1. 
+We have the following results:
+Proposition 4.4. Let ρ0 verify (1.3). Let ξ ∈ W 1,∞((0, T ), (−1, 1)). There exists a solution to (4.1) in the
+sense of Definition 4.1.
+The proof of Proposition 4.4 is postponed to the Appendix. It is obtained via a convergent finite volume
+scheme; the details of the scheme and the proof of convergence can be found there.
+Using the results from [11], [7], [8] and a partitioning argument, we prove a corollary of Theorem 1.8:
+Corollary 4.5. Let ρ0 verify (1.3). Let ξ ∈ W 1,∞((0, T ), (−1, 1)). There exists at most one solution ρ of
+(4.1) in the sense of Definition 4.1. Using Proposition 4.4, the solver operator
+Sg : (W 1,∞((0, T ), (−1, 1)), ∥ · ∥∞) −→ (L1((0, T ) × (−1, 1)), ∥ · ∥L1),
+that maps any ξ to the unique solution ρ to (4.1), is well defined and continuous.
+Proof of Corollary 4.5. We use the classical embedding of W 1,∞([0, T ], (−1, 1)) into C0([0, T ], (−1, 1)):
+there exists a closed segment K of (−1, 1) such that ξ ∈ C0([0, T ], K). We consider (φi)i∈{−1,0,1} a partition
+of unity of an open set containing [−1, 1] such that:
+all the supports are segments, 1 ∈ supp(φ1), −1 ∈ supp(φ−1), K ⊂ supp(φ0) ⊂ (−1, 1),
+and [supp(φ−1) ∪ supp(φ1)] ∩ K = ∅.
+Let ρ, ˆρ be two solutions in the sense of Definition 4.1. We denote by ˆQ1,−1 the constraints associated with ˆρ.
+Let Ψ ∈ C∞c((0, T ) × R). We use the classical Kruzhkov doubling of variables (cf. [14]) in the open subdomains
+of (0, T ) × R situated between x = −∞ and x = −1, x = −1 and x = ξ(t), x = ξ(t) and x = 1, and finally
+between x = 1 and x = +∞.
Then by a limiting procedure analogous to the one employed in the proof +of Theorem 2.1, we obtain the Kato inequality carrying singular terms concentrated on the three curves +{x = ξ(t)}, {x = 1} and {x = −1}: +− +�� +(0,T )×(−1,1) +|ρ − ˆρ|φt + q(ρ, ˆρ)φx +≤ +� T +0 +Ψ(t, ξ(t)) (φ0 + φ−1 + φ1) (t, ξ(t)) +� +q0 +R(γRρ, γRˆρ) − q0 +L(γLρ, γL ˆρ) +� +(4.9a) ++ +� T +0 +Ψ(t, 1)φ1(t, 1) +� +q1(γRρ, γRˆρ) − q1(γLρ, γLˆρ) +� +(4.9b) ++ +� T +0 +Ψ(t, −1)φ−1(t, −1) +� +q−1(γRρ, γRˆρ) − q−1(γLρ, γLˆρ) +� +, +(4.9c) +where the left and right traces are taken along their respective curves, and +q0 +L,R(ρ, ˆρ) := sign(ρ − ˆρ) +� +fL,R(ρ) − fL,R(ˆρ) − ˙ξ(t) (ρ − ˆρ) +� +q1(ρ, ˆρ) := sign(ρ − ˆρ) [fR(ρ) − fR(ˆρ)] +q−1(ρ, ˆρ) := sign(ρ − ˆρ) [fL(ρ) − fL(ˆρ)] . +Referring to proof of Theorem 2.1, the integral (4.9a) is zero. Using the same argument as the proof of +Proposition 2.10 in [3], we get: +(4.9b) ≤ 2 +� T +0 +Ψ(t, 1) +���Q1(t) − ˆQ1(t) +��� dt +(4.9c) ≤ 2 +� T +0 +Ψ(t, −1) +���Q−1(t) − ˆQ−1(t) +��� dt +As in the proof of Theorem 2.1, we integrate (4.9) along a trapezoid T 0,t +a,b . Then we use the definition of +Q±1, ˆQ±1 with Lg the Lipschitz constant of g to get the following inequality: +∥ρ(t, ·) − ˆρ(t, ·)∥L1((a,b)) ≤ ∥ρ0 − ˆρ0∥L1((a−Lft,b+Lft)) + 2 +� t +0 +� 1 +−1 +Lg +� +1(−1,−σ)ω−1 + 1(σ,1)ω1 +� +|ρ − ˆρ| dx ds. + +18 +B. ANDREIANOV, T. GIRARD +Eventually, using Holder’s inequality and Gronwall’s Lemma, we get: +(4.10) +∥ρ(t, ·) − ˆρ(t, ·)∥L1((a,b)) ≤ ∥ρ0 − ˆρ0∥L1((a−Lft,b+Lft))eCt, +where C := 2Lg∥1(−1,−σ)ω−1 + 1(σ,1)ω1∥∞. Consequently, there is at most one solution in the sense of +Definition 4.1 associated to a fixed ξ turning curve and an initial datum ρ0. +In order to recover the continuity of the operator Sg we proceed the same way as we proved Proposition +1.8. We first cover any compact set contained in {ξ(t) < x < 1} by trapezoids. 
Without loss of generality,
+we can suppose those trapezoids are at distance at least ǫ from both interfaces {x = ξ(t)} and {x = 1}.
+Consequently, on any such trapezoid, for all n ≥ n0, ρn is a Kruzhkov entropy solution. We recover compactness
+thanks to the averaging compactness lemma. This reasoning can be reproduced in the three other parts
+of the domain: {x < −1}, {−1 < x < ξ(t)} and {x > 1}. Then, we can pass to the limit via dominated
+convergence in equation (4.4) and in all the inequalities (4.6)-(4.7)-(4.8). We conclude the proof with the
+same classical arguments as in the proof of Proposition 1.8. That ends the proof of Corollary 4.5.
+We are ready to state the main result of this section, which is an analog of Theorem 1.9.
+Theorem 4.6. Let ρ0 verify (1.3). Assume that f verifies (1.13). Let g (resp. ω1,−1) satisfy (4.3) (resp.
+(4.2)). Let B be a convex, closed, bounded subset of L1((0, T ) × R) and
+I : (B, ∥ · ∥L1((0,T )×R)) −→ (C0([0, T ], R), ∥ · ∥∞)
+be a continuous operator such that ∀ρ ∈ B, ∀t ∈ [0, T ], I[ρ](t) ∈ (−1, 1). If there exists r > 0 such that
+(1.14a)-(1.14b) hold, then there exists (ρ, ξ) a solution to the problem (4.1)-(1.6b)-(1.6c). Here ρ is a solution
+in the sense of Definition 4.1. In particular, existence is verified for I = I0 (for affine cost) or for I = Iδ
+or ˜Iǫ (for a general cost verifying (1.5)).
+Appendix A. Convergence of the finite volume scheme in the constrained case. In order to prove
+existence of a solution to (4.1) in the sense of Definition 4.1, we construct a convergent finite volume scheme
+adapted around the fixed turning curve ξ. At the exits we use an operator splitting method with a scheme
+for the constraints Q1 and Q−1 as in [7].
+We now present the scheme used in this setting. Let T, J ∈ N be such that:
+(CFL) 2 (∥f ′∥∞ + ∥ ˙ξ∥∞) J/T ≤ 1.
We construct the following scheme:
\[
\Delta t = \frac{1}{T}, \qquad t^n := n\Delta t, \tag{A.1a}
\]
\[
\Delta x = \frac{1}{J}, \qquad x_j = j\Delta x, \tag{A.1b}
\]
\[
s^n := \frac{1}{\Delta t}\int_{t^n}^{t^{n+1}} \dot\xi(s)\,ds,
\qquad
s_\Delta(t) := \sum_{n=0}^{N} \mathbf{1}_{[t^n,t^{n+1})}(t)\, s^n, \tag{A.1c}
\]
\[
\xi_\Delta(t) := \xi(0) + \int_0^t s_\Delta(s)\,ds, \qquad \xi^n = \xi_\Delta(t^n). \tag{A.1d}
\]
The discretization (A.1c)-(A.1d) of the ξ interface is detailed in [22, Section 3.1], where it is required to construct the adapted mesh. For any n, we denote by jn the unique element of ⟦−J, J⟧ such that ξn ∈ [x_{jn}, x_{jn+1}). We construct the following mesh:
\[
\chi^n_j := \begin{cases} x_j & \text{if } j \le j_n - 1 \\ y^n & \text{if } j = j_n \\ x_j & \text{if } j \ge j_n + 1 \end{cases}
\]
\[
P^n_{j+1/2} := \begin{cases}
(\chi^n_j, \chi^n_{j+1}) \times (t^n, t^{n+1}) & \text{if } j \le j_n - 2 \\
\text{the trapezoid } \chi^n_{j_n-1}\,\chi^{n+1}_{j_n-1}\,\chi^{n+1}_{j_n+1}\,\chi^n_{j_n} & \text{if } j = j_n - 1 \\
\text{the trapezoid } \chi^n_{j_n}\,\chi^{n+1}_{j_n+1}\,\chi^{n+1}_{j_n+2}\,\chi^n_{j_n+2} & \text{if } j = j_n \\
(\chi^n_{j+1}, \chi^n_{j+2}) \times (t^n, t^{n+1}) & \text{if } j \ge j_n + 1
\end{cases} \tag{A.1e}
\]
Notice that, thanks to the (CFL) condition, x_{jn−1} < ξn+1 < x_{jn+2}, so the trapezoids defined above are never reduced to a triangle. We denote by $\underline{P}^n_{j+1/2}$ (resp. $\overline{P}^n_{j+1/2}$) the bottom (resp. top) segment of the trapezoid $P^n_{j+1/2}$. However, now that the mesh is modified, we have two different partitions of the line t = tn+1: $(\underline{P}^{n+1}_{j+1/2})_{j\in\mathbb{Z}}$ and $(\overline{P}^n_{j+1/2})_{j\in\mathbb{Z}}$. We define $(\bar\rho^{n+1}_{i+1/2})_{i\in\mathbb{Z}}$ corresponding to the values of $\rho^{n+1}$ on $(\overline{P}^n_{i+1/2})_{i\in\mathbb{Z}}$, and $(\rho^{n+1}_{j+1/2})_{j\in\mathbb{Z}}$ the projection of these values on $(\underline{P}^{n+1}_{j+1/2})_{j\in\mathbb{Z}}$:
\[
\bar\rho^{n+1}_{j+1/2} = \frac{\rho^n_{j+1/2}\,\big|\underline{P}^n_{j+1/2}\big| - \Delta t\,(f^n_{j+1} - f^n_j)}{\big|\overline{P}^n_{j+1/2}\big|} \tag{A.1f}
\]
\[
\rho^{n+1}_{j+1/2} := \frac{1}{\big|\underline{P}^{n+1}_{j+1/2}\big|} \sum_{i\in\mathbb{Z}} \big|\underline{P}^{n+1}_{j+1/2} \cap \overline{P}^n_{i+1/2}\big|\; \bar\rho^{n+1}_{i+1/2} \tag{A.1g}
\]
\[
\rho_\Delta(t,x) := \sum_{n=0}^{N} \sum_{\substack{j\in\mathbb{Z} \\ j \ne j_n \pm 1}} \rho^n_{j+1/2}\, \mathbf{1}_{P^n_{j+1/2}}(t,x) \tag{A.1h}
\]
We now want to define the numerical fluxes $(f^n_j)_{j\in\mathbb{Z}}$ corresponding to the left and right edges of the trapezoids. It is worth noticing that we skipped $f^n_{j_n+1}$ when we constructed the mesh. We first define the non-local constraint approximation.
\[
\rho^n_{\Delta x}(\cdot) = \sum_{j\in\mathbb{Z}} \rho^n_{j+1/2}\, \mathbf{1}_{[\chi^n_j, \chi^n_{j+1})}(\cdot) \tag{A.1i}
\]
\[
q^n_1 := g_1\Big( \int_\sigma^1 \rho^n_{\Delta x}(x)\,\omega_1(x)\,dx \Big) \tag{A.1j}
\]
\[
q^n_{-1} := g_{-1}\Big( \int_{-1}^{-\sigma} \rho^n_{\Delta x}(x)\,\omega_{-1}(x)\,dx \Big) \tag{A.1k}
\]
\[
F(\rho^n_{j-1/2}, \rho^n_{j+1/2}) = \begin{cases}
\min\big( \mathrm{God}_f(\rho^n_{j-1/2}, \rho^n_{j+1/2}),\; q^n_1 \big) & \text{if } j-1 = J \\
\max\big( \mathrm{God}_{-f}(\rho^n_{j-1/2}, \rho^n_{j+1/2}),\; -q^n_{-1} \big) & \text{if } j = -J \\
F^n_{\mathrm{int}}(\rho^n_{j-1/2}, \rho^n_{j+1/2}) & \text{if } j = j_n \\
\mathrm{God}_f(\rho^n_{j-1/2}, \rho^n_{j+1/2}) & \text{if } j > j_n \text{ and } j-1 \ne J \\
\mathrm{God}_{-f}(\rho^n_{j-1/2}, \rho^n_{j+1/2}) & \text{if } j < j_n \text{ and } j \ne -J.
\end{cases} \tag{A.1l}
\]
Eventually, we define $F^n_{\mathrm{int}}$ as in [6] (see details in Subsections 2.5, 3.3 and 5.1):
\[
f^n_{L,R}(\rho) := \pm f(\rho) - s^n \rho
\]
\[
\forall (\rho_L, \rho_R) \in [0,1]^2,\ \exists k \in [0,1] \ \text{s.t.}\ \mathrm{God}_{f^n_L}(\rho_L, k) = \mathrm{God}_{f^n_R}(k, \rho_R)
\]
\[
F^n_{\mathrm{int}}(\rho^n_{j-1/2}, \rho^n_{j+1/2}) := \mathrm{God}_{f^n_L}(\rho^n_{j-1/2}, k) = \mathrm{God}_{f^n_R}(k, \rho^n_{j+1/2}) \tag{A.1m}
\]
Numerical simulations for this scheme can be found in [6, Sect. 5.1] for the case of open-end conditions at the exits.
We are now in a position to start the proof of convergence, which merely assembles, with the help of the partition-of-unity technique of [22, 6], the arguments from [6] (for the inner interface situated at x = ξ(t)) and [7] (for the constraints set at x = ±1).

Proof of Proposition 4.4. The proof follows the general idea of [22, Sect. 4]; see also [6]. Since the interfaces {x = −1}, {x = ξ(t)} and {x = 1} are non-intersecting, we isolate them in the supports of a partition of unity φ−1, φ0 and φ1. We fix a test function φ. Taking (the discretization of) the test function φ0φ, we can use the specific result for the Hughes' model treated in [6, Sect. 5.1] to recover the approximate entropy inequalities satisfied by the discrete solution, with the test function φ0φ. For test functions φ−1φ and φ1φ, we use in the same way the result of [7, Prop. 3.1]. Summing up the contributions of the three parts of the partition of unity, we obtain the approximate entropy inequality for the discrete solution, with an arbitrary test function φ.
In addition, the integral weak formulation for the approximate solution follows from the scheme's conservativity. We use the same compactness argument as in [22, Sect. 3.4]. We can pass to the limit in the approximate weak formulation and in the approximate entropy inequalities, for the chosen converging subsequence and an arbitrary test function. This allows us to characterize the limit as an entropy solution, in the sense of Definition 4.1, of the problem at hand. Finally, thanks to the uniqueness proven in Corollary 4.5, the whole sequence of discrete solutions converges to the unique solution in the sense of Definition 4.1.

Acknowledgments. This paper has been supported by the RUDN University Strategic Academic Leadership Program.

REFERENCES
[1] D. Amadori and M. Di Francesco, The one-dimensional Hughes model for pedestrian flow: Riemann-type solutions, Acta Math. Sci. Ser. B Engl. Ed., 32 (2012), pp. 259–280.
[2] D. Amadori, P. Goatin, and M. D. Rosini, Existence results for Hughes' model for pedestrian flows, J. Math. Anal. Appl., 420 (2014), pp. 387–406.
[3] B. Andreianov, P. Goatin, and N. Seguin, Finite volume schemes for locally constrained conservation laws, Numer. Math. (Heidelb.), 115 (2010), pp. 609–645.
[4] B. Andreianov, K. H. Karlsen, and N. H. Risebro, A theory of L1-dissipative solvers for scalar conservation laws with discontinuous flux, Arch. Ration. Mech. Anal., 201 (2011), pp. 27–86.
[5] B. Andreianov, M. D. Rosini, and G. Stivaletta, On existence, stability and many-particle approximation of solutions of 1D Hughes model with linear costs. Working paper or preprint, July 2021.
[6] B. Andreianov and A. Sylla, Finite volume approximation and well-posedness of conservation laws with moving interfaces under abstract coupling conditions. Submitted, 2022.
[7] B. P. Andreianov, C. Donadello, U. Razafison, and M. D.
Rosini, Qualitative behaviour and numerical approximation of solutions to conservation laws with non-local point constraints on the flux and modeling of crowd dynamics at the bottlenecks, ESAIM: Mathematical Modelling and Numerical Analysis, 50 (2015), pp. 1269–1287.
[8] B. P. Andreianov, C. Donadello, and M. D. Rosini, Crowd dynamics and conservation laws with nonlocal constraints and capacity drop, Mathematical Models and Methods in Applied Sciences, 24 (2014), pp. 2685–2722.
[9] C. Cancès and T. Gallouët, On the time continuity of entropy solutions, J. Evol. Equ., 11 (2011), pp. 43–55.
[10] J. A. Carrillo, S. Martin, and M.-T. Wolfram, An improved version of the Hughes model for pedestrian flow, Mathematical Models and Methods in Applied Sciences, 26 (2016), pp. 671–697.
[11] R. M. Colombo and P. Goatin, A well posed conservation law with a variable unilateral constraint, J. Differ. Equ., 234 (2007), pp. 654–675.
[12] M. Di Francesco, P. A. Markowich, J.-F. Pietschmann, and M.-T. Wolfram, On the Hughes' model for pedestrian flow: The one-dimensional case, J. Differ. Equ., 250 (2011), pp. 1334–1362.
[13] N. El-Khatib, P. Goatin, and M. D. Rosini, On entropy weak solutions of Hughes' model for pedestrian motion, Zeitschrift für angewandte Mathematik und Physik, 64 (2013), pp. 223–251.
[14] L. C. Evans, Partial Differential Equations, Graduate Studies in Mathematics, American Mathematical Society, Providence, RI, May 1998.
[15] P. Goatin and M. Mimault, The wave-front tracking algorithm for Hughes' model of pedestrian motion, SIAM J. Sci. Comput., 35 (2013), pp. B606–B622.
[16] D. A. Gomes and R. M. Velho, On the Hughes model and numerical aspects, (2016).
[17] R. L. Hughes, A continuum theory for the flow of pedestrians, Transportation Research Part B: Methodological, 36 (2002), pp. 507–535.
[18] M. J. Lighthill and G. B. Whitham, On kinematic waves. II.
A theory of traffic flow on long crowded roads, Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 229 (1955), pp. 317–345.
[19] B. Perthame, Kinetic formulation of conservation laws, Oxford Lecture Series in Mathematics and its Applications, Clarendon Press, Oxford, England, Jan. 2003.
[20] P. I. Richards, Shock waves on the highway, Operations Research, 4 (1956), pp. 42–51.
[21] A. Sylla, Influence of a slow moving vehicle on traffic: Well-posedness and approximation for a mildly nonlocal model, Networks and Heterogeneous Media, 16 (2021).
[22] A. Sylla, A LWR model with constraints at moving interfaces, ESAIM: Mathematical Modelling and Numerical Analysis, 56 (2022).
[23] M. Twarogowska, P. Goatin, and R. Duvigneau, Numerical study of macroscopic pedestrian flow models, (2013).
[24] A. Vasseur, Strong traces for solutions of multidimensional scalar conservation laws, Arch. Ration. Mech. Anal., 160 (2001), pp. 181–193.
[25] E. Zeidler, Applied Functional Analysis, Applied Mathematical Sciences, Springer, New York, NY, 1995 ed., Dec. 2012.

diff --git a/uNE5T4oBgHgl3EQfLQ6L/content/tmp_files/load_file.txt b/uNE5T4oBgHgl3EQfLQ6L/content/tmp_files/load_file.txt
new file mode 100644
index 0000000000000000000000000000000000000000..298634de4513c4a376f0f143db6d979087c1ff9b
--- /dev/null
+++ b/uNE5T4oBgHgl3EQfLQ6L/content/tmp_files/load_file.txt
@@ -0,0 +1,1217 @@
filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf,len=1216

arXiv:2301.05472v1 [math.AP] 13 Jan 2023

EXISTENCE OF SOLUTIONS TO A CLASS OF ONE-DIMENSIONAL MODELS FOR PEDESTRIAN EVACUATIONS∗

BORIS ANDREIANOV† AND THEO GIRARD‡

Abstract. In the framework inspired by the R. L. Hughes model (Transp. Res. B, 2002) for pedestrian evacuation in a corridor, we establish existence of a solution by a topological fixed point argument. This argument applies to a class of models where the dynamics of the pedestrian density ρ (governed by a discontinuous-flux Lighthill, Whitham and Richards model ρt + (sign(x − ξ(t))ρv(ρ))x = 0) is coupled via an abstract operator to the computation of a Lipschitz continuous "turning curve" ξ. We illustrate this construction by several examples, including the standard Hughes' model with affine cost, and either with open-end conditions or with conditions corresponding to panic behaviour with capacity drop at exits.
Other examples put forward versions of the Hughes model with inertial dynamics of the turning curve and general costs.

Key words. crowd dynamics, pedestrian evacuation, Hughes' model, capacity drop, existence, Schauder fixed-point, admissible solution, discontinuous-flux conservation law, memory, relaxation

MSC codes. 35L65, 47H10

1. Introduction.
1.1. The Hughes model and its variants. The Lighthill, Whitham and Richards (LWR) model for traffic introduced in [18] and in [20] consists in a conservation law for the vehicle density ρ with a concave positive flux ρv(ρ):
\[
\begin{cases} \rho_t + [\rho v(\rho)]_x = 0 \\ \rho(t=0, x) = \rho_0(x). \end{cases} \tag{1.1}
\]
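As a concrete illustration of (1.1), here is a minimal Godunov finite-volume sketch (ours, not code from the paper) for the LWR model with the common choice v(ρ) = 1 − ρ, i.e. the concave flux f(ρ) = ρ(1 − ρ) with maximum f(1/2) = 1/4; the grid, time step and ghost-cell treatment are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (ours, not from the paper): a Godunov finite-volume
# scheme for the LWR model (1.1) with v(rho) = 1 - rho, i.e. the concave
# flux f(rho) = rho*(1 - rho), whose maximum is f(1/2) = 1/4.

def f(rho):
    return rho * (1.0 - rho)

def godunov_flux(rho_l, rho_r):
    # Exact Godunov flux for a concave bell-shaped flux with critical point
    # 1/2: min of f over [rho_l, rho_r] if rho_l <= rho_r, max otherwise.
    if rho_l <= rho_r:
        return min(f(rho_l), f(rho_r))
    if rho_l > 0.5 > rho_r:
        return f(0.5)
    return max(f(rho_l), f(rho_r))

def godunov_step(rho, dx, dt):
    # One conservative update; ghost value 0 on both sides, in the spirit of
    # extending the initial density by zero outside the corridor.
    padded = np.concatenate(([0.0], rho, [0.0]))
    F = np.array([godunov_flux(padded[i], padded[i + 1])
                  for i in range(len(padded) - 1)])
    return rho - dt / dx * (F[1:] - F[:-1])
```

As long as the support of the discrete density stays away from the boundary cells, each update conserves the total mass exactly, reflecting the conservation-law form of (1.1).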
Here, we can suppose that the density ρ takes its values in [0, 1] and v stands for the speed of the traffic. This model can be seen as the mass conservation equation where the velocity v depends only on the traffic density ρ. One frequently chooses v(ρ) = 1 − ρ, up to a multiplicative constant representing the maximal velocity. This describes a transport of the initial density of agents ρ0 at t = 0 towards x = +∞, where the speed decreases as the density of agents increases. Then, in [17], Hughes proposed a model of pedestrian evacuation as a system of two equations on ρ and φ, which is known as Hughes' model. In the multi-dimensional model, ρ is the density of pedestrians with respect to time t and space x. The dynamics of ρ is governed by LWR conservation laws with a direction field oriented towards the exits of a bounded domain Ω.
In order to prescribe the direction towards the exit preferred by a pedestrian at location x at a time t, Hughes defines φ(t, x), the "potential field" satisfying an eikonal equation. The potential φ is zero on the exits located on ∂Ω. A pedestrian would then choose to "descend the gradient" of this potential in order to leave the domain Ω by these exits. The theory of the Hughes' model is yet incomplete, even in one space dimension. In the 1D case, the model of [17] takes the form:
\[
\begin{cases}
\rho_t + [\operatorname{sign}(-\partial_x\varphi)\,\rho v(\rho)]_x = 0 & \text{(1.2a)} \\
\rho(t, x = \pm 1) = 0 & \text{(1.2b)} \\
|\partial_x\varphi| = \dfrac{1}{v(\rho)} & \text{(1.2c)} \\
\varphi(t, x = \pm 1) = 0. & \text{(1.2d)}
\end{cases}
\]
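The reduction of (1.2c)-(1.2d) to a single turning point can be sketched as follows (our sketch; the rigorous equivalence is the object of [13]): the relevant solution of the eikonal equation with cost c(ρ) = 1/v(ρ) is

```latex
\varphi(t,x) \;=\; \min\left\{ \int_{-1}^{x} c(\rho(t,y))\,dy,\;\; \int_{x}^{1} c(\rho(t,y))\,dy \right\},
```

so ∂xφ changes sign exactly at the point where the two cost integrals from the two exits coincide; this is the turning-curve equation used in the reformulation below.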
This problem (1.2) is set up in a corridor with two exits; upon renormalization, we assumed that Ω = (−1, 1) and that the exits are located at x = ±1. At t = 0 the pedestrians are distributed with a given density ρ0 defined in [−1, 1], and at t > 0 the pedestrians want to leave the corridor by either one of the exits (as if a fire alarm starts ringing at t = 0). The pedestrians move forward (with the positive flux ρ ↦ +ρv(ρ)) or backward (with ρ ↦ −ρv(ρ)) depending on the sign of ∂xφ. This results in (1.2a) being a discontinuous-flux LWR conservation law.

∗Submitted to the editors DATE.
†Institut Denis Poisson CNRS UMR 7013, Université de Tours, Université d'Orléans, Parc Grandmont, 37200 Tours, France and Peoples' Friendship University of Russia (RUDN University), 6 Miklukho-Maklaya St, Moscow, 117198, Russian Federation (Boris.Andreianov@lmpt.univ-tours.fr, https://www.idpoisson.fr/andreianov/).
‡Institut Denis Poisson, Université de Tours, Parc Grandmont, 37200 Tours, France (theo.girard@lmpt.univ-tours.fr).
The sign of ∂xφ is prescribed by the eikonal equation (1.2c), where c(ρ) = 1/v(ρ) is a cost function that is high where the crowd is slow. Consequently, the pedestrians tend to avoid those "congested" regions. The Dirichlet boundary condition (1.2b) on the density ρ is understood in the Bardos-LeRoux-Nédélec sense standard for scalar conservation laws; it is shown in [5, Sect. 3] that upon extending ρ0 by the value zero on R\[−1, 1], one can replace the initial-boundary value problem (1.2a)-(1.2b) with ρ0 : (−1, 1) −→ [0, 1] by the pure initial-value problem for (1.2a) with the extended datum ρ0 : R −→ [0, 1] (the extension means that ρ0, now defined on R, is supported in [−1, 1]). We adopt this viewpoint and require, throughout the paper,
\[
\rho_0 \in L^\infty(\mathbb{R}; [0,1]), \qquad \rho_0(x) = 0 \ \text{for}\ x \notin [-1,1]; \tag{1.3}
\]
note that, being compactly supported, ρ0 ∈ L1(R). Assumption (1.3) for the conservation law (1.2a) set up in the whole space can be seen as an "open-end condition" at exits; we refer to Section 4 for models with more involved exit behavior.

In [13], the 1D Hughes' model (1.2) has been reformulated in terms of a "turning curve" ξ(t) instead of the potential φ. Following the turning curve approach, our prototype model in the sequel will be:
\[
\begin{cases}
\rho_t + [\operatorname{sign}(x - \xi(t))\,\rho v(\rho)]_x = 0 & \text{(1.4a)} \\
\displaystyle\int_{-1}^{\xi(t)} c(\rho(t,x))\,dx = \int_{\xi(t)}^{1} c(\rho(t,x))\,dx, & \text{(1.4b)}
\end{cases}
\]
with ρ defined for t ∈ [0, T], T > 0, and x ∈ R, and with an initial datum of the form (1.3). Here c denotes a generic cost function. It is proven in [13] that we can equivalently consider either the Hughes' model potential equation (1.2c)-(1.2d) or the reformulated problem (1.4b) with the cost function c(ρ) = 1/v(ρ). However, here we will consider a cost verifying the following conditions:
\[
c \in W^{1,\infty}([0,1]), \qquad c(\rho) \ge 1 \ \ \forall \rho \in [0,1], \qquad c \ \text{is increasing on}\ [0,1]. \tag{1.5}
\]
In (1.4), ρ is considered to be an entropy solution to (1.4a). Such a notion of solution, with a particular attention to the admissibility of the jump of ρ across the turning curve x = ξ(t), was proposed in [13] (we will slightly simplify this solution notion). On the other hand, ξ is a pointwise defined solution to (1.4b), whose existence in L∞ and uniqueness follows from the intermediate value theorem under the conditions (1.5).
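To make the operator ρ ↦ ξ of (1.4b) concrete, here is a minimal numerical sketch (ours, with an illustrative uniform grid and the affine cost c(ρ) = 1 + αρ): since c ≥ 1, the cumulative cost is strictly increasing, so the balance point exists and is unique, exactly as the intermediate value theorem asserts.

```python
import numpy as np

# Hypothetical illustration (ours): solving the integral equation (1.4b) for
# the turning point xi, given a density sampled on a uniform grid over [-1, 1].

def turning_point(rho, alpha=1.0):
    rho = np.asarray(rho, dtype=float)
    x = np.linspace(-1.0, 1.0, len(rho))
    c = 1.0 + alpha * rho  # affine cost, so c >= 1 on [0, 1]
    # Cumulative trapezoidal integral C(xi) = int_{-1}^{xi} c(rho(y)) dy.
    C = np.concatenate(([0.0], np.cumsum(0.5 * (c[1:] + c[:-1]) * np.diff(x))))
    # (1.4b) reads C(xi) = C(1) - C(xi), i.e. C(xi) = C(1)/2; since c >= 1,
    # C is strictly increasing, so this balance point is unique.
    return float(np.interp(0.5 * C[-1], C, x))
```

For ρ ≡ 0 the cost is symmetric and ξ = 0; piling the crowd into the right half of the corridor pushes the turning point to the right.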
In this paper, we will consider a class of generalisations of the "turning curve" model, keeping in mind the fact that, even in the setting (1.4), little is known about the well-posedness of the Hughes' model. For notation's sake, we consider f a generic concave positive flux such that f(0) = f(1) = 0 (one can assume f(ρ) = ρv(ρ) to recover the LWR model):
\[
\begin{cases}
\rho_t + [\operatorname{sign}(x - \xi(t))\,f(\rho)]_x = 0 & \text{(1.6a)} \\
\rho(0, x) = \rho_0(x) & \text{(1.6b)} \\
\xi = I(\rho). & \text{(1.6c)}
\end{cases}
\]
Here I is an abstract operator mapping the density ρ to a turning curve ξ. The problem (1.4) is a particular case of (1.6) where I is the solver of the integral equation (1.4b). Stating (1.6b), we mean that ρ0 fulfills (1.3), which corresponds to open-end evacuation at exits, as stated above.

Let us briefly discuss known results on the specific problem (1.4) and its variants. In [13], uniqueness is proven for a definition of entropy solutions taking the discontinuity into account but considering ξ as being given beforehand (we will revisit this result in Section 2). In [2], global existence for Hughes' model (with c(ρ) = 1/v(ρ)) is proven if one assumes that the density at the turning curve is zero for all times. In [5], a uniqueness result in the same setting as this paper, assuming moreover the BV regularity of the solutions, is provided.
In [23], [15] and [16] one can find numerical studies of the model. A proof of existence and uniqueness for the regularized problem can be found in [12]. The Hughes' model is also revisited with a different turning curve equation in [10], together with numerical simulations. In that paper, the authors introduce a regularization by convolution of the density, named the subjective density. We use the same type of idea when applying our main result in the case of a general cost function c. The only general (with respect to the choice of the initial data) existence result is contained in [5], where solutions with BV_loc regularity away from the turning curve were constructed via a well-chosen many-particle approximation. The result of [5] for problem (1.4) is limited to the case of an affine cost c(ρ) = 1 + αρ.

Our result for the original setting (1.4) will also be limited to the affine cost case, but we provide a shorter and less specific argument than the many-particle approximation of [5], and we require fewer assumptions on the velocity profile v than [5] does. The fixed-point approach we develop appears to be rather flexible, since it makes it possible to handle several models of the form (1.6). We also adapt the arguments to exit behavior of the “capacity drop” kind (cf. [8, 7]), which is more realistic in the setting of crowd evacuation.
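The fixed-point strategy alluded to above — solve the conservation law for a fixed turning curve, recompute the curve from the resulting density, and iterate — can be sketched numerically. Schauder-type existence arguments are non-constructive, but running the Picard iteration ρ^{n+1} = D(I(ρ^n)) and checking convergence a posteriori illustrates the structure. The two operators below are toy finite-dimensional stand-ins chosen for this sketch only; they are not the paper's actual solver and interface operators:

```python
import math

# Picard iteration rho^{n+1} = D(I(rho^n)) mimicking the coupling between
# the density rho and the turning point xi.  Both operators are illustrative
# stand-ins (assumptions of this sketch), not the paper's S_0 and I.

xs = [-1.0 + 2.0 * i / 200 for i in range(201)]  # uniform grid on [-1, 1]

def interface(rho):
    """I: density profile -> scalar turning point (here: its barycentre)."""
    return sum(x * r for x, r in zip(xs, rho)) / sum(rho)

def solve(xi):
    """D: turning point -> density profile (here: a bump centred at xi/2)."""
    return [math.exp(-4.0 * (x - 0.5 * xi) ** 2) for x in xs]

rho = solve(1.6)            # asymmetric initial guess: bump centred at 0.8
xi = interface(rho)
for _ in range(100):
    rho_new = solve(xi)                       # D step: recompute the density
    gap = max(abs(a - b) for a, b in zip(rho_new, rho))
    rho, xi = rho_new, interface(rho_new)     # I step: recompute the curve
    if gap < 1e-13:                           # fixed point reached numerically
        break

# Fixed-point residual: xi should be (numerically) invariant under one sweep.
residual = abs(interface(solve(xi)) - xi)
print(f"xi = {xi:.3e}, residual = {residual:.3e}")
```

On this toy pair the composed map contracts towards the symmetric configuration ξ = 0, so the iteration stalls at machine precision after a few dozen sweeps; for the actual PDE coupling, convergence of such an iteration is of course not guaranteed by the existence theory alone.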
However, we highlight the fact that our approach is restricted to situations where the Lipschitz continuity of the turning curve ξ is guaranteed for the model at hand, which appears to be a strong restriction on its applicability; this restriction also appears in [5].

1.2. Abstract framework and general results. In this paper we propose an existence result for problem (1.6), elaborated through a fixed-point argument under abstract assumptions on I. Roughly speaking, we require that I maps any admissible solution ρ of the equation (1.6a) to a Lipschitz continuous turning curve ξ.
Furthermore, the Lipschitz constants of those turning curves must be uniformly bounded with respect to ρ. We stress that the Hughes' model with affine cost c(ρ) = 1 + αρ enters our abstract framework. However, it is not clear whether, for general costs satisfying (1.5), the required Lipschitz bounds hold true; this issue for the original Hughes' model is left for further investigation. Models with a more regular dependence of ξ on ρ can be considered as well, including memory and relaxation effects, and for these models the Lipschitz continuity of ξ is justifiable for general costs.

First, let us introduce some notation that will be used throughout the whole paper. We denote {x < ξ(t)} := {(t, x) ∈ [0, T] × R s.t. x < ξ(t)}; analogously, we use {x = ξ(t)} and {x > ξ(t)}. For any r > 0, we write

BW^{1,∞}(0, r) := { ξ ∈ W^{1,∞}((0, T), R) s.t. ∥ξ̇∥_∞ + ∥ξ∥_∞ ≤ r }.

Analogously, we write BL^1(0, r) for the set of ρ ∈ L^1((0, T) × R, [0, 1]) such that ∥ρ∥_{L^1((0,T)×R)} ≤ r. In problem (1.6), ρ is taken as an admissible solution to the discontinuous flux LWR equation (1.6a). On the way to proving the existence result, we propose and use a slightly simpler notion of admissible solution for this equation than the notion used in [13], [2] and [1].
Those notions of solution are equivalent.

Definition 1.1. Let ξ ∈ W^{1,∞}((0, T)). Let ρ_0 ∈ L^1(R, [0, 1]). Let f be a concave positive flux such that f(0) = 0 = f(1), and set F(t, x, ρ) := sign(x − ξ(t)) f(ρ). We say that ρ ∈ L^1((0, T) × R, [0, 1]) is an admissible solution to

(1.7)  ρ_t + F(t, x, ρ)_x = 0,  ρ(t = 0, ·) = ρ_0(·)

if:
— for all φ ∈ C^∞_c((0, T) × R),

(1.8)  ∫∫_Ω ρ φ_t + F(t, x, ρ) φ_x dt dx = 0;

— for all positive φ ∈ C^∞_c({x < ξ(t)}) (resp. φ ∈ C^∞_c({x > ξ(t)})) and for all k ∈ [0, 1],

(1.9)  − ∫∫_Ω |ρ − k| φ_t + q(ρ, k) φ_x dt dx − ∫_R |ρ_0 − k| φ(0, x) dx ≤ 0,

4 B. ANDREIANOV, T. GIRARD

where we set

(1.10)  q(u, v) := sign(u − v) [F(t, x, u) − F(t, x, v)].

Note that this notion of solution makes sense for an arbitrary initial datum ρ_0 ∈ L^1(R, [0, 1]), but in order to keep consistency with the standard Hughes' setting, we will restrict our attention to data ρ_0 that fulfill (1.3).

Remark 1.2. Note that in the above definition, no admissibility condition is prescribed at {x = ξ(t)}. Only the conservativity (the Rankine-Hugoniot condition following from (1.8)) is required at the location of the turning curve.
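To make the conservativity requirement of Remark 1.2 explicit, one can write out the Rankine-Hugoniot condition encoded in (1.8) across x = ξ(t). The following is our own reconstruction (assuming the one-sided traces ρ± of ρ along the curve exist), using that F = −f to the left of the curve and F = +f to its right:

```latex
\dot\xi(t)\,\bigl(\rho_+ - \rho_-\bigr)
  \;=\; F\bigl(t,\xi(t)^+,\rho_+\bigr) - F\bigl(t,\xi(t)^-,\rho_-\bigr)
  \;=\; f(\rho_+) + f(\rho_-).
```

No entropy-type selection beyond this balance is imposed on the turning curve.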
Remark 1.3. Definition 1.1 implies that ρ ∈ C^0([0, T], L^1(R)). This is proved by an adapted version of the argument in [9]; such an adapted proof can be found in [21]. Keeping this fact in mind, the notation ρ(t, ·) makes sense without ambiguity.

For a given (and fixed) ξ ∈ W^{1,∞}((0, T)), it is shown that this notion of solution gives a well-posed discontinuous flux conservation law in L^1((0, T) × R) when ρ_0 belongs to L^1(R; [0, 1]). We then define the solver operator

(1.11)  S_0 : W^{1,∞}((0, T)) −→ L^1((0, T) × R),  ξ ↦ ρ.

This operator S_0 maps a turning curve ξ to S_0(ξ) = ρ, the unique solution, admissible in the sense of Definition 1.1, to (1.6a)-(1.6b) set up in the whole one-dimensional space.

Remark 1.4. The uniqueness of a solution in the sense of Definition 1.1 still holds for F(t, x, p) := 1_{x<ξ(t)} f_L(p) + 1_{x>ξ(t)} f_R(p), where f_L (resp. f_R) is a convex negative (resp. concave positive) flux such that f_L(0) = f_L(1) = f_R(0) = f_R(1) = 0. These are the core properties of the fluxes on which our proof relies.
For instance, modeling a slanted corridor, we can consider f_{L,R}(ρ) := v_{L,R} ρ(1 − ρ), where v_L and v_R are positive constants accounting for the difference in speed for a pedestrian moving towards the right or the left exit.

We now present the notion of solution used for the generalized Hughes' model given by system (1.6). Recalling Remark 1.3, it makes sense to require the operator equation (1.6c) to hold for all t ∈ [0, T]. In fact, we will require that ξ ∈ W^{1,∞}((0, T)) in order to obtain our main result; we then use the classical embedding result to identify ξ with a unique element of C^0([0, T]).

Definition 1.5. Consider I : L^1((0, T) × R) −→ C^0([0, T]). We say that (ρ, ξ) is a solution to the generalized Hughes' model (1.6) if ρ is a solution to (1.6a)-(1.6b) in the sense of Definition 1.1 and, moreover, the equality ξ = I(ρ) holds in C^0([0, T]).

Notice that such a solution can be seen as a fixed point of the composed operator S_0 ∘ I. In order to prove the existence of a solution, we prove a variant of Schauder's fixed point theorem (see [25]). To be specific, denoting by I : ρ ↦ ξ the operator that serves to compute the interface and by D : ξ ↦ ρ the one that serves to compute the density, we prove the following statement:
Lemma 1.6. Let (X, ∥·∥_X) be a Banach space, (Y, ∥·∥_Y) a metric space and K a compact subset of Y. Take D : (K, ∥·∥_Y) −→ (X, ∥·∥_X) a continuous operator. Assume there exists B, a bounded closed convex subset of X, such that:

(1.12a)  I : (B, ∥·∥_X) −→ (K, ∥·∥_Y) is a continuous operator;
(1.12b)  D ∘ I(B) ⊂ B.

Then D ∘ I admits a fixed point in B.

Remark 1.7. We stress that assumption (1.12a) implies that, on the subset B, I takes its values in K, making D ∘ I well-defined on B.

The assumptions of Lemma 1.6 permit us to formulate sufficient conditions for the existence of a solution in the sense of Definition 1.5. Specifically, the use of the sets BW^{1,∞}(0, r) (as K) and C^0([0, T]) (as Y) is the key to the application of the Schauder fixed-point argument to S_0 ∘ I under reachable assumptions on I in the Hughes' model framework. We prove in Section 2 the following proposition, stating that S_0 is continuous; this continuity matches the one required of the operator D in the above lemma.

Proposition 1.8. Let ρ_0 verify (1.3). If f satisfies the non-degeneracy condition

(1.13)  meas{ x ∈ [−∥ρ∥_∞, ∥ρ∥_∞] s.t. f′(x) = 0 } = 0,

then the solver operator S_0 : (W^{1,∞}((0, T)), ∥·∥_∞) −→ (L^1((0, T) × R), ∥·∥_{L^1((0,T)×R)}) is continuous.

Combining the previous results, we state the main result of this paper:

Theorem 1.9. Let ρ_0 verify (1.3). Let B be a convex closed bounded subset of L^1((0, T) × R) and let I : (B, ∥·∥_{L^1((0,T)×R)}) −→ (C^0([0, T], R), ∥·∥_∞) be a continuous operator. Assume that f verifies (1.13).
If there exists r > 0 such that:

(1.14a)  I(B) ⊂ BW^{1,∞}(0, r);
(1.14b)  for all ξ ∈ BW^{1,∞}(0, r), the unique admissible solution to ρ_t + [sign(x − ξ(t)) f(ρ)]_x = 0 belongs to B;

then there exists (ρ, ξ), a solution to the problem (1.6) in the sense of Definition 1.5.

Remark 1.10. One can interpret B as the set where one looks for solutions to (1.6a). The central point in order to use this theorem is to construct the set B; in the applications below, two different choices for B are encountered.
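As a concrete check (our example, not taken from the paper), the non-degeneracy condition (1.13) holds for the quadratic LWR-type flux:

```latex
f(\rho) = \rho\,(1-\rho), \qquad f'(\rho) = 1 - 2\rho ,
```

here f′ vanishes only at ρ = 1/2, a single point and hence a set of zero measure, so Proposition 1.8 applies and Theorem 1.9 can be invoked with this flux.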
1.3. Applications. We search for properties of admissible solutions in the sense of Definition 1.1 that are independent of ξ. These properties, built into the construction of B, must guarantee that I(B) verifies (1.14a), but also that B is convex, bounded and closed in L^1((0, T) × R). In this subsection, we present three applications of Theorem 1.9.

First, we consider the operator I_0 associated to the problem (1.4b) with an affine cost function (further detailed in Section 3). Let us exhibit the construction of B_1, a set satisfying the conditions (1.14a)-(1.14b) for this choice of I. Notice that, thanks to the L^1-contraction property of the admissible solution ρ, which is justified within the uniqueness proof in Section 2, we have

(1.15)  ∀t ∈ [0, T], ∥ρ(t, ·)∥_{L^1(R)} ≤ ∥ρ_0∥_{L^1(R)}  ⇒  ∥ρ∥_{L^1((0,T)×R)} ≤ T ∥ρ_0∥_{L^1(R)}.

Furthermore, we prove that for a certain fixed constant C > 0 (whose value will be made precise later), for any ξ ∈ W^{1,∞}, a weak solution to (1.6a) in the sense of (1.8) verifies (see Lemma 3.2 and also [5]):

(1.16)  ∀a, b ∈ R, ∀s, t ∈ [0, T],  | ∫_a^b ρ(t, x) − ρ(s, x) dx | ≤ C |t − s|.
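A sketch of where the time-Lipschitz estimate (1.16) comes from, reconstructed by us with an explicit constant that is an assumption of this sketch (the paper fixes C in Lemma 3.2): testing (1.8) with a smoothing of the indicator function of [a, b] × [s, t] formally gives

```latex
\int_a^b \rho(t,x) - \rho(s,x)\,\mathrm{d}x
  \;=\; \int_s^t F\bigl(\tau,a,\rho(\tau,a)\bigr) - F\bigl(\tau,b,\rho(\tau,b)\bigr)\,\mathrm{d}\tau ,
```

and since |F(t, x, ·)| = |f(·)| ≤ ∥f∥_∞ pointwise, the right-hand side is bounded in absolute value by 2∥f∥_∞ |t − s|, suggesting C = 2∥f∥_∞.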
Finally, considering an initial datum 0 ≤ ρ_0 ≤ 1, we set:

(1.17)  B_1 = { ρ ∈ B_{L^1}(0, T∥ρ_0∥_{L^1}) s.t. 0 ≤ ρ ≤ 1 and ρ verifies (1.16) }.

Applying Theorem 1.9 with B_1 given by (1.17), we get:

Proposition 1.11. Assume that I_0 : B_1 → C^0([0, T], R) is the operator associated with equation (1.4b) with affine cost c(ρ) = 1 + αρ. If f verifies (1.13), then there exists (ρ, ξ) a solution to the problem (1.4) in the sense of Definition 1.5.

6  B. ANDREIANOV, T. GIRARD

As a second case, we treat I_δ, the operator associated with a modified version of equation (1.4b) where ρ is replaced by an average density over the recent past (see (1.4b')). This modification is inspired by the use of "subjective density" in pedestrian and traffic flows, proposed, e.g., in [10] and [8, 7] (cf. Section 4, where subjective densities are used to model constrained evacuation at exits); this choice introduces an inertia effect into agents' perception of the crowd densities. In that setting, we can prove that the image of I_δ is contained in a bounded subset of W^{1,∞}((0, T)) without requiring the property (1.16). Consequently, we recover the global existence result for any cost c verifying (1.5) with the set B_2 merely given by:

B_2 = { ρ ∈ B_{L^1}(0, T∥ρ_0∥_{L^1}) s.t. 0 ≤ ρ ≤ 1 }.

As a third example, we consider Ĩ_ǫ, the operator associated with problem (1.4b) with a relaxed equilibrium, modeling, in a way different from I_δ, an inertia effect of the interface dynamics. In this case, the set B_2 also satisfies all the conditions needed to apply Corollary 1.9. Finally, another series of applications (which is an extension of all the previous results to models with different, phenomenologically relevant behavior of agents at exits) is provided in Section 4.

1.4. Outline. In Section 2, we prove the main results of this paper, respectively Theorem 1.9 and Lemma 1.6, Proposition 1.8. These proofs hold in an abstract framework where the choices of I and B are not prescribed. Then, in Section 3, we detail the construction involving the set B_1 satisfying the assumptions of Theorem 1.9 in the case of I_0 being the operator associated with equation (1.4b) with affine cost. We also discuss the case of a general cost satisfying (1.5) and solve it for the modified operators I_δ and Ĩ_ǫ using the set B_2. Eventually, in Section 4, we extend Theorem 1.9 to a situation with constrained evacuation at the exits x = ±1.

2. Proof of the main result. We first deduce Lemma 1.6 from the Schauder fixed-point theorem.

Proof of Lemma 1.6. We recall that, thanks to condition (1.12a), D ∘ I is well defined. What's more, D and I are continuous, so D ∘ I is continuous from B into itself. Take any subset A of B. The set I(A) ⊂ K is a relatively compact set in (Y, ∥·∥_Y).
Since D is continuous from (K, ∥·∥_Y) into (X, ∥·∥_X), D ∘ I(A) is a relatively compact subset of X. Consequently, D ∘ I is a compact operator from B into itself. Furthermore, B is a bounded, closed, convex subset of a Banach space X. We apply the Schauder fixed-point theorem (see [25]) and conclude that a fixed point exists in B.

In order to apply Lemma 1.6 with D = S_0, the solver associated with the notion of solution of Definition 1.1 (see (1.11)), we first need to check that S_0 is well defined from W^{1,∞}((0, T)) into L^1((0, T) × R) when ∥ρ_0∥_{L^1(R)} < +∞. This is equivalent to well-posedness for the problem (1.7).
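For the reader's convenience, the version of the Schauder fixed-point theorem invoked in the proof of Lemma 1.6 is the following standard formulation (this statement is ours; the source simply cites [25]):

```latex
\begin{theorem}[Schauder fixed-point theorem]
Let $X$ be a Banach space and let $B \subset X$ be a nonempty, bounded, closed
and convex subset. If $T \colon B \to B$ is continuous and $T(B)$ is relatively
compact in $X$, then $T$ admits at least one fixed point: there exists
$\bar{u} \in B$ such that $T(\bar{u}) = \bar{u}$.
\end{theorem}
```

In the proof above, T = D ∘ I, and the relative compactness of T(B) is exactly what the factorization through the relatively compact set I(B) ⊂ K, followed by the continuous map D, provides.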
We prove below that, thanks to the particular choice of fluxes on each side of the turning curve (emphasized in Remark 1.4), Definition 1.1 is restrictive enough to grant uniqueness. This notion of solution is, however, less restrictive than the one proposed in [13, 1]. It implies that both notions are equivalent; also, the existence of such solutions is then directly inherited from the proof found in [1]. Note that one can prove the existence result for our notion of solution through the convergence of a finite volume scheme (we do so in Section 4, in the context of flux-limited exit behavior at the exits x = ±1).

Theorem 2.1. Let ρ, ρ̂ be two entropy solutions in the sense of Definition 1.1 with initial datum ρ_0 (resp. ρ̂_0). Let L_f be the Lipschitz constant of f. If ξ ∈ W^{1,∞}((0, T)), we have: for a.e. t ∈ [0, T], ∀a, b ∈ R,

∫_a^b |ρ(t, x) − ρ̂(t, x)| dx ≤ ∫_{a−L_f t}^{b+L_f t} |ρ_0(x) − ρ̂_0(x)| dx.

In particular, there exists at most one entropy solution associated to a given initial datum ρ_0.

In order to prove this theorem, we introduce notation for the right and left strong traces of ρ along a Lipschitz curve ξ. Let ξ ∈ W^{1,∞}((0, T), R). Then γ_L ρ(t) ∈ L^∞((0, T)) (resp. γ_R ρ(t)) is such that, for any φ ∈ C^0([0, 1]),

ess lim_{ǫ→0+} (1/ǫ) ∫_0^T ∫_{ξ(t)−ǫ}^{ξ(t)} |φ(ρ(t, x)) − φ(γ_L ρ(t))| dx dt = 0

(respectively,

ess lim_{ǫ→0+} (1/ǫ) ∫_0^T ∫_{ξ(t)}^{ξ(t)+ǫ} |φ(ρ(t, x)) − φ(γ_R ρ(t))| dx dt = 0).

AN EXISTENCE RESULT FOR HUGHES' MODEL  7

The existence of those traces is proven in [24].

Remark 2.2. Generalization of the approach of the present paper to a general cost function c, for the original Hughes' model, may require going below the Lipschitz regularity of ξ. In this respect, let us point out that extension of the above uniqueness claim to W^{1,1} regularity of ξ is feasible, while weakening the regularity of ξ even more presents a serious difficulty for the theory of discontinuous-flux conservation laws [4].

Proof of Theorem 2.1. Remembering Remark 1.4 and for a more comprehensive presentation of the proof, we denote f_R = f and f_L = −f. The main idea of the proof consists of using Kruzhkov's doubling of variables technique (see [14]) on each side of the curve {x = ξ(t)}. Since ξ is Lipschitz continuous, we can join both pieces, getting left and right traces along this turning curve, following the general approach as in [4, 8]. We get, for any φ ∈ D^+,

(∗)  − ∬_Ω ( |ρ − ρ̂| φ_t + q(ρ, ρ̂) φ_x ) ≤ ∫_0^T φ(t, ξ(t)) [ q_R(γ_R ρ, γ_R ρ̂) − q_L(γ_L ρ, γ_L ρ̂) ],

where q_{L,R}(ρ, ρ̂) := sign(ρ − ρ̂) ( f_{L,R}(ρ) − f_{L,R}(ρ̂) − ξ̇(t)(ρ − ρ̂) ). On another side, using the traces' existence, we also recover from (1.8) the Rankine–Hugoniot condition:

(∗∗_ρ)  for a.e. t ∈ (0, T),  f_R(γ_R ρ(t)) − ξ̇(t) γ_R ρ(t) = f_L(γ_L ρ(t)) − ξ̇(t) γ_L ρ(t).

We also have the analogous relation for ρ̂, which we denote (∗∗_ρ̂).
Fix t ∈ (0, T) such that (∗∗_ρ) and (∗∗_ρ̂) are true. We denote the set of values for γ_L ρ (resp. γ_R ρ) that verify (∗∗_ρ):

Γ_{L,R} := { a ∈ R s.t. ∃b ∈ R, f_{L,R}(a) − ξ̇(t) a = f_{L,R}(b) − ξ̇(t) b }.

Due to the particular choice of the pair of fluxes (f_L, f_R), those sets are non-empty. Their geometries are pictured below.

[Figure: graphs of y = f_L(x) − ξ̇(t) x and y = f_R(x) − ξ̇(t) x, with the sets Γ_L and Γ_R marked.]

Recalling the properties of f_L and f_R emphasized in Remark 1.4 and using the signs of f′_L and f′_R, we let the reader verify that, for any ξ̇(t), x ↦ f_R(x) − ξ̇(t) x has the same monotonicity on Γ_R as x ↦ f_L(x) − ξ̇(t) x on Γ_L.
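The monotonicity property left to the reader can be phrased in symbols as follows (the shorthand g_{L,R} below is ours, introduced for readability; it does not appear in the source):

```latex
% Shorthand for the shifted fluxes appearing in the Rankine--Hugoniot condition:
%   g_{L,R}(x) := f_{L,R}(x) - \dot{\xi}(t)\,x .
% ``Same monotonicity on $\Gamma_R$ as on $\Gamma_L$'' then means:
% for all $a, a' \in \Gamma_R$ and all $b, b' \in \Gamma_L$,
\operatorname{sign}(a - a')\,\operatorname{sign}\!\big(g_R(a) - g_R(a')\big)
  \;=\;
\operatorname{sign}(b - b')\,\operatorname{sign}\!\big(g_L(b) - g_L(b')\big).
```

Applied with a = γ_R ρ, a′ = γ_R ρ̂, b = γ_L ρ, b′ = γ_L ρ̂, this is precisely the sign identity exploited in the next step of the proof.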
Consequently, if (γ_L ρ, γ_R ρ) verifies (∗∗_ρ) and (γ_L ρ̂, γ_R ρ̂) verifies (∗∗_ρ̂),

sign(γ_R ρ − γ_R ρ̂) sign( f_R(γ_R ρ) − f_R(γ_R ρ̂) − ξ̇(t)(γ_R ρ − γ_R ρ̂) )
  = sign(γ_L ρ − γ_L ρ̂) sign( f_L(γ_L ρ) − f_L(γ_L ρ̂) − ξ̇(t)(γ_L ρ − γ_L ρ̂) ).

Subtracting (∗∗_ρ̂) from (∗∗_ρ) implies that

f_R(γ_R ρ) − f_R(γ_R ρ̂) − ξ̇(t)(γ_R ρ − γ_R ρ̂) = f_L(γ_L ρ) − f_L(γ_L ρ̂) − ξ̇(t)(γ_L ρ − γ_L ρ̂).

Therefore we have: for a.e. t ∈ (0, T), q_R(γ_R ρ, γ_R ρ̂) − q_L(γ_L ρ, γ_L ρ̂) = 0. Consequently, from (∗), we recover the global Kato inequality: for any φ ∈ D^+(Ω),

− ∬ ( |ρ − ρ̂| φ_t + q(ρ, ρ̂) φ_x ) ≤ 0.

The remaining arguments are identical to the classical framework of Kruzhkov. Integrating on the trapezoid 1_{[0,t]}(s) 1_{[a−L_f(t−s), b+L_f(t−s)]}(x), L_f being the Lipschitz constant of f, we get the localized L^1 contraction property:

(2.1)  ∫_a^b |ρ(t, x) − ρ̂(t, x)| dx ≤ ∫_{a−L_f t}^{b+L_f t} |ρ(0, x) − ρ̂(0, x)| dx.

Consequently, the solver operator S_0 is well defined from W^{1,∞}((0, T)) into L^1((0, T) × R). In order to apply Lemma 1.6 with D = S_0 : (W^{1,∞}((0, T)), ∥·∥_∞) → (L^1((0, T) × R), ∥·∥_{L^1((0,T)×R)}), we also show the continuity of this operator. Let us denote, for any a < b ∈ R, s < t ∈ [0, T], the trapezoid:

(2.2)  T^{s,t}_{a,b} := { (τ, x) ∈ (0, T) × R s.t. τ ∈ [s, t], x ∈ (a + (τ − s)L_f, b − (τ − s)L_f) },

where L_f is the Lipschitz constant of f. We isolate the following useful lemma, which comes from (2.1).
Lemma 2.3. Let ρ_0 satisfy (1.3), ξ ∈ W^{1,∞}((0, T)) and ρ be the entropy solution in the sense of Definition 1.1 to (1.7) on (0, T) × R. Denote by ρ̂ the Kruzhkov entropy solution on (s, t) × R to¹

ρ̂_t + f(ρ̂)_x = 0,  ρ̂(s, ·) = ρ(s, ·) 1_{(a,b)}(·).

Then, for any a < b ∈ R, s < t ∈ [0, T], there holds

(2.3)  T^{s,t}_{a,b} ⊂ {x > ξ(t)}  ⟹  ρ = ρ̂ a.e. on T^{s,t}_{a,b}.

Proof.
This lemma immediately follows from (2.1).

We now prove Proposition 1.8 using this lemma.

Proof of Proposition 1.8. Consider (ξ_n)_{n∈N} and ξ ∈ W^{1,∞}((0, T)) such that ∥ξ_n − ξ∥_∞ → 0. We denote ρ_n := S_0(ξ_n). Let K be a compact subset of {x > ξ(t)}. Let ǫ > 0 be such that K ⊂ {x > ξ(t) + ǫ}. We cover K by a finite number of trapezoids of the form (2.2).
Without loss of generality, we can suppose that each trapezoid is contained in {x > ξ(t) + ǫ}:

K ⊂ ⋃_{i∈I} T^{s_i,t_i}_{a_i,b_i} ⊂ {x > ξ(t) + ǫ},  Card(I) < +∞.

Since ∥ξ_n − ξ∥_∞ → 0, for any ǫ > 0 there exists n_0 ∈ N such that ∀t ∈ [0, T], n ≥ n_0 ⇒ |ξ_n(t) − ξ(t)| ≤ ǫ. This implies ξ_n(t) ∈ [ξ(t) − ǫ; ξ(t) + ǫ]. Then,

(2.4)  ∀x ∈ R \ [ξ(t) − ǫ; ξ(t) + ǫ],  sign(x − ξ_n(t)) = sign(x − ξ(t)).

Then, for such an n_0, for any n ≥ n_0, each trapezoid T^{s_i,t_i}_{a_i,b_i} ⊂ {x > ξ_n(t)}. Using Lemma 2.3, for any n ≥ n_0, ρ_n is equal almost everywhere in T^{s_i,t_i}_{a_i,b_i} to the Kruzhkov entropy solution of:

ρ_t + f(ρ)_x = 0,  ρ(s_i, ·) = ρ_n(s_i, ·) 1_{(a_i,b_i)}(·).
Here $\rho(s,\cdot)$ is understood in view of $s$ being a Lebesgue point of $\rho \in L^\infty((0,T), L^1(\mathbb{R}))$. Recalling Remark 1.3, this is in fact true for any $s \in [0,T]$.

AN EXISTENCE RESULT FOR HUGHES' MODEL

We are now in a position to apply the averaging compactness lemma (see Theorem 5.4.1 in [19]) on the trapezoid $T^{s_0,t_0}_{a_0,b_0}$. We get a subsequence $(\rho_{n_k})_{k\in\mathbb{N}}$ that converges in $L^1(T^{s_0,t_0}_{a_0,b_0})$. We then apply the averaging compactness lemma with $(\rho_{n_k})_k$ on $T^{s_1,t_1}_{a_1,b_1}$. Repeating this process for each $i \in I$, we recover a subsequence $(\rho_{n_j})_j$ that converges in $L^1(\bigcup_{i\in I} T^{s_i,t_i}_{a_i,b_i})$. Then $(\rho_{n_j})_j$ converges in $L^1(K)$.
To conclude, we point out that this reasoning holds for any compact $K \subset \{x > \xi(t)\}$. The same is true for compact subsets of $\{x < \xi(t)\}$. Since $\xi$ is Lipschitz, $\mathrm{meas}(\{x = \xi(t)\}) = 0$. Consequently there exists a subsequence $(\rho_{n_k})$ that converges almost everywhere on $(0,T)\times\mathbb{R}$ and in $L^1_{loc}((0,T)\times\mathbb{R})$. Moreover, we have $\rho_{n_k} \to \rho$ in $L^1((0,T)\times\mathbb{R})$ because for $[a,b] \cap [-1,1] = \emptyset$, $\rho_n = 0$ on $T^{0,T}_{a,b}$, due to the choice of $\rho_0$ verifying (1.3).

Now, $\rho$ is actually $S_0(\xi)$. Indeed, recall that $\rho$ has no admissibility condition to satisfy on $\{x = \xi(t)\}$ beyond the Rankine-Hugoniot relation. Then, we can pass to the limit in the entropy inequalities (1.9) (where, for $n$ large enough, the support of the test function does not intersect the curve $\{x = \xi_n(t)\}$ for $t \in [0,T]$) and pass to the limit in (1.8) by dominated convergence. This reasoning can be reproduced for any subsequence of $(\rho_n)_n$. By a classical compactness argument, since any converging subsequence $(S_0(\xi_{n_k}))_{k\in\mathbb{N}}$ converges to $S_0(\xi)$, the whole sequence $(S_0(\xi_n))_n$ converges in $L^1$ to $S_0(\xi)$. So $S_0 : (W^{1,\infty}((0,T)), \|\cdot\|_\infty) \to (L^1((0,T)\times\mathbb{R}), \|\cdot\|_{L^1((0,T)\times\mathbb{R})})$ is continuous.

We now combine all the previous results to get existence of a solution in the sense of Definition 1.5.

Proof of Theorem 1.9. Suppose there exists $r > 0$ such that (1.14a)-(1.14b) are verified. Using the notations of Theorem 1.6 we take:
\[
Y = (C^0([0,T]), \|\cdot\|_\infty), \qquad X = (L^1((0,T)\times\mathbb{R}), \|\cdot\|_{L^1((0,T)\times\mathbb{R})}),
\]
and $K$ the compact subset of $C^0([0,T])$ obtained as the image of $B_{W^{1,\infty}}(0,r)$ under the standard embedding. Using Proposition 1.8 and Theorem 2.1, we know that $S_0 : (K, \|\cdot\|_Y) \to (X, \|\cdot\|_X)$ is well defined and continuous. Further, notice that condition (1.14a) is equivalent to (1.12a) and that condition (1.14b) implies (1.12b). We are now in a position to use Lemma 1.6. We conclude the existence of a solution to (1.6) in the sense of Definition 1.5.

3. Lipschitz continuity of the turning curve: examples. In this section, we enumerate examples of the abstract problem (1.6)
\[
\begin{cases}
\rho_t + [\mathrm{sign}(x - \xi(t))\, f(\rho)]_x = 0 \\
\rho(0,x) = \rho_0(x) \\
\xi = I(\rho),
\end{cases}
\]
where we can construct a set $B$ such that the prescribed operator $I$ satisfies the required properties in order to apply Theorem 1.9; this includes the original Hughes' model (1.4) with affine costs and its modifications, taking into account time-inertia effects and allowing for general costs. Note that further examples, with modified exit conditions, are considered in Section 4. For such examples, we exhibit the construction of this set. Consequently, we get existence of a solution in the sense of Definition 1.5 in those situations.

3.1. Hughes's model with affine cost. We first consider the model (1.4):
\[
\begin{cases}
\rho_t + [\mathrm{sign}(x - \xi(t))\,\rho v(\rho)]_x = 0 \\
\displaystyle\int_{-1}^{\xi(t)} c(\rho(t,x))\,dx = \int_{\xi(t)}^{1} c(\rho(t,x))\,dx,
\end{cases}
\]
with initial datum satisfying (1.3), where we choose, for some $\alpha > 0$,
\[
c(p) = 1 + \alpha p. \tag{3.3}
\]
First, let us recall the definition of the set $B_1$ constructed in the introduction:
\[
B_1 = \left\{ \rho \in B_{L^1}(0,\ T\|\rho_0\|_{L^1}) \text{ s.t. } 0 \leq \rho \leq 1 \text{ and } \rho \text{ verifies } (1.16) \right\}. \tag{1.17}
\]

B. ANDREIANOV, T. GIRARD

In this setup, we have the following proposition:

Proposition 3.1. Assume the cost is given by (3.3). Then the following properties hold:
1. For any $\xi \in W^{1,\infty}((0,T))$, $S_0(\xi) \in B_1$.
2. There exists $r > 0$ such that, for any $\rho \in B_1$, there exists a unique solution $\xi \in B_{W^{1,\infty}}(0,r)$ to (1.4b). We denote by $I_0$ the operator that maps $\rho \in B_1$ to $\xi$, the unique solution to (1.4b). Consequently, this operator is well defined and single-valued.
3. $I_0 : (B_1, \|\cdot\|_{L^1((0,T)\times\mathbb{R})}) \to (W^{1,\infty}([0,T]), \|\cdot\|_\infty)$ is continuous.
4. $B_1$ is closed, convex and bounded in $L^1((0,T)\times\mathbb{R})$.

Consequently, $I_0$ verifies (1.14a)-(1.14b) for the set $B_1$. We apply Theorem 1.9 and get the desired existence of a solution for the problem (1.4) with affine cost (3.3). That proves Proposition 1.11.
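To make the role of the affine cost concrete, the balance condition (1.4b) with the choice (3.3) can be rewritten by a direct computation. This worked instance is added here for illustration; it is not part of the original argument:

```latex
\int_{-1}^{\xi(t)} \bigl(1 + \alpha\,\rho(t,x)\bigr)\,dx
  = \int_{\xi(t)}^{1} \bigl(1 + \alpha\,\rho(t,x)\bigr)\,dx
\quad\Longleftrightarrow\quad
2\,\xi(t) = \alpha\left( \int_{\xi(t)}^{1} \rho(t,x)\,dx
  - \int_{-1}^{\xi(t)} \rho(t,x)\,dx \right).
```

In this form one sees directly how the turning point $\xi(t)$ is determined by the mass of $\rho(t,\cdot)$ on either side of it, which is the fixed-point structure $\xi = I_0(\rho)$ exploited throughout this section.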
In order to prove Proposition 3.1, we rely on two lemmas that we chose to isolate in order to use them in the other examples.

Lemma 3.2. Let $a, b \in \mathbb{R}$, $a < b$. Let $s, t \in [0,T]$, $s < t$. Fix $\xi \in W^{1,\infty}((0,T))$. We denote by $\rho$ a solution in the sense of Definition 1.1. Then, there exists $C > 0$, independent of $a$, $b$, $s$, $t$, $\xi$ and $\rho$, such that:
\[
\left| \int_a^b \rho(t,x) - \rho(s,x)\,dx \right| \leq C\,|t - s|. \tag{3.4}
\]
We recall that there is no ambiguity in considering $\rho(t,\cdot)$ since $\rho \in C^0([0,T], L^1(\mathbb{R}))$ (see Remark 1.3).

Proof of Lemma 3.2. Let $(\kappa_n)_{n\in\mathbb{N}}$ be a mollifier. We set $\Psi(\tau,x) := \mathbf{1}_{[a,b]}(x)\,\mathbf{1}_{[s,t]}(\tau)$ and $\varphi(\tau,x) := \Psi * \kappa_n(\tau,x)$. Using $\varphi$ as test function in (1.8) and letting $n \to +\infty$, we get:
\[
\int_a^b \rho(s,x) - \rho(t,x)\,dx + \int_s^t F(\tau,a,\rho(\tau,a)) - F(\tau,b,\rho(\tau,b))\,d\tau = 0.
\]
Consequently,
\[
\left| \int_a^b \rho(t,x) - \rho(s,x)\,dx \right| \leq \left| \int_s^t F(\tau,a,\rho(\tau,a)) - F(\tau,b,\rho(\tau,b))\,d\tau \right| \leq \left( 2 \sup_{p\in[0,1]} |f(p)| \right) |t - s|.
\]

Lemma 3.3.
Let $s < t \in [0,T]$. Let $\xi$ be a solution to (1.4b). We denote $\underline{\xi} := \min(\xi(t), \xi(s))$ and $\overline{\xi} := \max(\xi(t), \xi(s))$. Then
\[
2\,|\xi(t) - \xi(s)| \leq \left| \int_{-1}^{\underline{\xi}} c(\rho(t,x)) - c(\rho(s,x))\,dx - \int_{\overline{\xi}}^{1} c(\rho(t,x)) - c(\rho(s,x))\,dx \right|. \tag{3.5}
\]

Proof of Lemma 3.3. We first treat the case $\xi(s) \leq \xi(t)$.
We have:
\[
\int_{-1}^{\xi(s)} c(\rho(s,x))\,dx = \int_{\xi(s)}^{\xi(t)} c(\rho(s,x))\,dx + \int_{\xi(t)}^{1} c(\rho(s,x))\,dx,
\]
\[
\int_{-1}^{\xi(s)} c(\rho(t,x))\,dx = -\int_{\xi(s)}^{\xi(t)} c(\rho(t,x))\,dx + \int_{\xi(t)}^{1} c(\rho(t,x))\,dx.
\]
If we subtract both equalities,
\[
\int_{\xi(s)}^{\xi(t)} c(\rho(s,x)) + c(\rho(t,x))\,dx = \int_{-1}^{\xi(s)} c(\rho(s,x)) - c(\rho(t,x))\,dx - \int_{\xi(t)}^{1} c(\rho(s,x)) - c(\rho(t,x))\,dx.
\]
On the contrary, if $\xi(s) \geq \xi(t)$, with an analogous argument we get:
\[
\int_{\xi(t)}^{\xi(s)} c(\rho(s,x)) + c(\rho(t,x))\,dx = \int_{-1}^{\xi(t)} c(\rho(t,x)) - c(\rho(s,x))\,dx - \int_{\xi(s)}^{1} c(\rho(t,x)) - c(\rho(s,x))\,dx.
\]
Using the fact that $c \geq 1$ we get:
\[
2\,|\xi(t) - \xi(s)| = 2(\overline{\xi} - \underline{\xi}) \leq \int_{\underline{\xi}}^{\overline{\xi}} c(\rho(s,x)) + c(\rho(t,x))\,dx \leq \left| \int_{-1}^{\underline{\xi}} c(\rho(s,x)) - c(\rho(t,x))\,dx - \int_{\overline{\xi}}^{1} c(\rho(s,x)) - c(\rho(t,x))\,dx \right|.
\]

We are now ready to prove Proposition 3.1.

Proof of Proposition 3.1. First, consider $\rho_0$ satisfying (1.3). Using $\hat\rho = 0$ in (2.1), we prove that for all $t$ in $[0,T]$, $\|\rho(t,\cdot)\|_{L^1(\mathbb{R})} \leq \|\rho_0\|_{L^1(\mathbb{R})}$. This readily yields:
\[
\|\rho\|_{L^1([0,T]\times\mathbb{R})} \leq T\,\|\rho_0\|_{L^1(\mathbb{R})}. \tag{1.15}
\]
Combining this result with Lemma 3.2, we prove the first assertion of Proposition 3.1.

Second, fix $\rho \in B_1$. We prove existence and uniqueness of $\xi \in L^\infty([0,T])$ satisfying (1.4b) for any $t \in [0,T]$. Let $t \in [0,T]$; we set:
\[
\Psi_+(a) := \int_{-1}^{a} c(\rho(t,x))\,dx, \qquad \Psi_-(a) := \int_{a}^{1} c(\rho(t,x))\,dx.
\]
One can notice that, because $c > 0$, $\Psi_+$ is a continuous strictly increasing function, while $\Psi_-$ is continuous and strictly decreasing on $[-1,1]$. Therefore, $a \mapsto \Psi_+(a) - \Psi_-(a)$ is continuous, strictly increasing, negative at $a = -1$ and positive at $a = 1$.
Consequently, there exists only one $\tilde a \in (-1,1)$ such that $\Psi_+(\tilde a) = \Psi_-(\tilde a)$. This can be done for any $t \in [0,T]$. Consequently, we get existence and uniqueness of $\xi \in L^\infty$.

We now prove that $\xi \in W^{1,\infty}([0,T])$. Using Lemma 3.3 we get:
\[
2\,|\xi(t) - \xi(s)| \leq \left| \int_{-1}^{\underline{\xi}} c(\rho(t,x)) - c(\rho(s,x))\,dx - \int_{\overline{\xi}}^{1} c(\rho(t,x)) - c(\rho(s,x))\,dx \right| \leq \alpha \left| \int_{-1}^{\underline{\xi}} \rho(t,x) - \rho(s,x)\,dx \right| + \alpha \left| \int_{\overline{\xi}}^{1} \rho(t,x) - \rho(s,x)\,dx \right|.
\]
And using Lemma 3.2, with the choice (3.3) of the cost, we get:
\[
2\,|\xi(t) - \xi(s)| \leq 2\alpha C\,|t - s|.
\]
We conclude that taking $r = \alpha C$ guarantees that $\xi$ is always in $B_{W^{1,\infty}}(0,r)$. We now prove the continuity of the operator $I_0$.
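The existence argument above rests on the fact that $a \mapsto \Psi_+(a) - \Psi_-(a)$ is continuous, strictly increasing, negative at $a = -1$ and positive at $a = 1$, so its unique root can be located by bisection. The following numerical sketch is illustrative only and not from the paper; the density profiles, the grid resolution, and the use of the affine cost $c(p) = 1 + \alpha p$ are assumptions made for the example.

```python
import numpy as np

def turning_point(rho, alpha=1.0, tol=1e-9, grid=4000):
    """Approximate the unique a in (-1, 1) with Psi_plus(a) == Psi_minus(a).

    rho   : callable density profile on [-1, 1] with values in [0, 1]
            (hypothetical example input, not from the paper)
    alpha : parameter of the affine cost c(p) = 1 + alpha * p
    Since a -> Psi_plus(a) - Psi_minus(a) is continuous, strictly
    increasing, negative at -1 and positive at +1, bisection converges
    to the unique root.
    """
    def cost(p):
        return 1.0 + alpha * p

    def integral(a, b):
        # Trapezoidal rule for the integral of c(rho(x)) over [a, b].
        x = np.linspace(a, b, grid)
        y = cost(rho(x))
        return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

    lo, hi = -1.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        # Psi_plus(mid) - Psi_minus(mid): negative means the root lies to the right.
        if integral(-1.0, mid) - integral(mid, 1.0) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For a constant density the weighted mass balances at $a = 0$; for $\rho(x) = (x+1)/2$ and $\alpha = 1$ a short computation gives the exact root $\sqrt{10} - 3 \approx 0.162$, which the bisection recovers up to the stated tolerance.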
Let us consider $\rho, \rho_n \in B_1$. Then, for a given $t \in [0,T]$, using (1.4b) for both $\xi := I_0(\rho)$ and $\xi_n := I_0(\rho_n)$, we recover:
\[
\int_{\xi_n(t)}^{\xi(t)} c(\rho) + \int_{-1}^{\xi_n(t)} c(\rho) - \int_{-1}^{\xi_n(t)} c(\rho_n) = \int_{\xi(t)}^{\xi_n(t)} c(\rho) + \int_{\xi_n(t)}^{1} c(\rho) - \int_{\xi_n(t)}^{1} c(\rho_n).
\]
And rearranging the integrals, we get:
\[
2 \int_{\xi_n(t)}^{\xi(t)} c(\rho) = \int_{-1}^{1} [c(\rho) - c(\rho_n)]\,\mathrm{sign}(x - \xi_n(t)).
\]
Notice that
\[
\int_0^T |\xi - \xi_n| \leq \int_0^T \left| \int_{\xi(t)}^{\xi_n(t)} c(\rho) \right| \leq \frac{1}{2} \int_0^T \left| \int_{-1}^{1} \mathrm{sign}(x - \xi_n(t))\,[c(\rho) - c(\rho_n)] \right| \leq \frac{1}{2} \int_0^T \int_{-1}^{1} |c(\rho) - c(\rho_n)| \leq \frac{\alpha}{2} \int_0^T \int_{-1}^{1} |\rho - \rho_n|.
\]
Consequently, if $\|\rho - \rho_n\|_{L^1((0,T)\times\mathbb{R})} \to 0$, then $\|\xi - \xi_n\|_{L^1((0,T))} \to 0$. We recall that $\xi, \xi_n \in I_0(B_1)$ are $r$-Lipschitz.
On any open subset of $[0, T]$ there exists a point $t$ where the continuous function $\xi(\cdot) - \xi_n(\cdot)$ is less than or equal to its $L^1$-average. Using the fact that $[0, T]$ can be covered by a finite $\epsilon$-network and that the derivative of $\xi(\cdot) - \xi_n(\cdot)$ is bounded on this network, we recover that $\|\xi - \xi_n\|_\infty \to 0$ when $\|\rho - \rho_n\|_{L^1((0,T)\times\mathbb{R})} \to 0$. This proves the third point of Proposition 3.1. Eventually, let $\rho_1, \rho_2 \in B_1$ and $\lambda \in [0, 1]$; it is readily checked that $\lambda\rho_1 + (1-\lambda)\rho_2$ still satisfies (3.4). Then $B_1$ is convex. It is also readily checked that we can pass to the $L^1((0,T)\times\mathbb{R})$ limit in (3.4), proving that $B_1$ is closed.
By construction $B_1$ is bounded. That ends the proof of Proposition 3.1.

3.2. The general cost case evaluated for a subjective density. In the same setup (1.4), let's further prospect the situation for a cost function $c$ verifying (1.5). Most of the items of Proposition 3.1 hold with the set $B_1$. The first point is independent of the nature of $c$.
The proof of the third point still holds with a general cost, provided the second point holds. The proof of existence and unicity of $\xi \in L^\infty((0, T))$ is still valid. In fact, the main issue lies in proving that $\xi$ is Lipschitz for any $\rho$ in a given set $B$. In order to explore this issue, let's start from the Lemma 3.3 estimate (3.5):
\[ 2\,|\xi(t) - \xi(s)| \le \left| \int_{-1}^{\bar\xi} c(\rho(t,x)) - c(\rho(s,x))\,dx - \int_{\bar\xi}^{1} c(\rho(t,x)) - c(\rho(s,x))\,dx \right| \]
Recall that $c$ satisfies (1.5). We set $\overline\alpha := \operatorname{ess\,sup}_{u\in[0,1]} c'(u)$ and $\underline\alpha := \operatorname{ess\,inf}_{u\in[0,1]} c'(u) > 0$. Using the negative and positive parts of $(\rho(t,\cdot) - \rho(s,\cdot))$ and rearranging the terms, we get the following estimate:
\[ 2\,|\xi(t) - \xi(s)| \le \left( \frac{\overline\alpha + \underline\alpha}{2} \right) \left| \int_{-1}^{\bar\xi} \rho(t,x) - \rho(s,x)\,dx - \int_{\bar\xi}^{1} \rho(t,x) - \rho(s,x)\,dx \right| + \left( \frac{\overline\alpha - \underline\alpha}{2} \right) \int_{-1}^{1} |\rho(t,x) - \rho(s,x)|\,dx =: I_1 + I_2 \tag{3.6} \]
The first term $I_1$ of the right member is controlled by the estimate of Lemma 3.2. The issue lies in controlling the second term $I_2$. This suggests that, in order to prove that $\xi \in W^{1,\infty}((0, T))$, we need an estimate of the modulus of continuity of $\rho$ as an element of $C^0([0, T], L^1(\mathbb{R}))$. While the standard Oleinik regularizing effect can be used locally away from the turning curve (see [5]), in a vicinity of the turning curve the spatial variation of $\rho$ may not be controlled; moreover, the (ir)regularity of the turning curve itself impacts the modulus of continuity of $\rho$, making it an open question how to control the time variations of $\rho$. We leave this issue for future research. However, we can treat a natural modification of problem (1.4) for which the method applied for the affine cost (3.3) extends to general costs. Let $R : L^1((-\infty, T)) \to L^1((0, T))$ be the operator defined by:
\[ R[\rho(\cdot,x)](t) := \delta \int_{-\infty}^{t} \rho(s,x)\, e^{-\delta(t-s)}\,ds \tag{3.7} \]
To make this operator well defined, we extend $\rho$ by $\rho(t) = \rho_0$ for any $t \in (-\infty, 0]$. This model corresponds to a memory effect in the individuals' perception of the density; $R[\rho]$ is a subjective density perceived by an agent

AN EXISTENCE RESULT FOR HUGHES' MODEL 13

making decisions to move towards the most appropriate exit. Thus, we consider the problem:
\[ \rho_t + [\operatorname{sign}(x - \xi(t))\,\rho\, v(\rho)]_x = 0 \tag{1.4a} \]
\[ \int_{-1}^{\xi(t)} c(R[\rho(\cdot,x)](t))\,dx = \int_{\xi(t)}^{1} c(R[\rho(\cdot,x)](t))\,dx, \tag{1.4b'} \]
with $c$ verifying (1.5), and with initial datum satisfying (1.3). Equation (1.4b') takes into account the average density over the recent past instead of the instantaneous density at time $t$. This models the bias, due to some inertia of human thinking, in the pedestrians' perception of the density in the corridor; the quantity $R[\rho(\cdot,x)]$ can be compared to other "subjective densities" used in the literature (cf. [10], [8, 7]). With the same calculations as in (3.6), we recover the term
\[ I_2 = \int_{-1}^{1} \left| R[\rho(\cdot,x)](t) - R[\rho(\cdot,x)](s) \right| dx, \]
which is controlled by $2\delta \|\rho\|_{L^\infty} |t - s|$, a bound for the modulus of continuity of $R[\rho(\cdot,x)]$. For $I_1$ we can pass the absolute value inside the integral.
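The operator $R$ in (3.7) is an exponentially weighted running average of the density, so it obeys the standard exponential-moving-average recurrence in discrete time. The following minimal Python sketch (the function name, the piecewise-constant-in-time discretization, and the sample values are our own illustrative assumptions, not from the paper) shows that $R$ preserves the bounds $0 \le \rho \le 1$ and reproduces a time-constant density exactly.

```python
import math

def subjective_density(rho_samples, rho0, delta, dt):
    """Discretization of R[rho](t) = delta * int_{-inf}^t rho(s) e^{-delta(t-s)} ds.

    rho_samples: values rho(t_k) at t_k = (k+1)*dt, assumed piecewise constant
                 on each step (an assumption of this sketch).
    rho0: value of rho on (-inf, 0], so that R starts at rho0.
    Returns the list of values R(t_k).
    """
    decay = math.exp(-delta * dt)
    R = rho0            # R[rho](0) = rho0 because rho == rho0 on (-inf, 0]
    out = []
    for r in rho_samples:
        # exact update for piecewise-constant rho:
        # delta * int_t^{t+dt} e^{-delta(t+dt-s)} ds = 1 - e^{-delta*dt}
        R = decay * R + (1.0 - decay) * r
        out.append(R)
    return out
```

Since the exponential weights integrate to one, $R$ maps densities with values in $[0, 1]$ to subjective densities with values in $[0, 1]$, consistent with its use in (1.4b').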
Then $I_1$ is also controlled by the modulus of continuity of $R[\rho(\cdot,x)]$. Notice that we don't need the property (1.16) for this reasoning. Consequently, we define:
\[ B_2 = \left\{ \rho \in B_{L^1}(0,\, T\|\rho_0\|_{L^1}) \ \text{s.t.}\ 0 \le \rho \le 1 \right\} \tag{3.9} \]
Then $I_\delta : (B_2, \|\cdot\|_{L^1((0,T)\times\mathbb{R})}) \to (W^{1,\infty}((0,T)), \|\cdot\|_\infty)$, $\rho \mapsto \xi$, where $\xi$ is defined by (1.4b') with $R$ given by (3.7), is well defined. The analogue of Proposition 3.1 (where we use $I_\delta$ instead of $I_0$, we use $B_2$ instead of $B_1$, and we drop the assumption of an affine cost) is easily justified. In particular, the proof of the third item of this analogue of Proposition 3.1 holds with these choices. Thus, without the restriction (3.3) on the cost, we have the following claim:

Proposition 3.4. Let $\rho_0$ satisfy (1.3). Let $c$ verify (1.5). Then problem (1.6a)-(1.6b)-(1.4b') admits at least one solution.

3.3. The general cost case with relaxed equilibrium. We consider (1.6) with a modified equilibrium equation (1.4b). This time, we suppose that the collective behavior of pedestrians introduces some amount of inertia into the dynamics of $\xi$. Fixing $\epsilon > 0$, we consider as a simplest variant of such dynamics the ODE Cauchy problem
\[ -\epsilon\, \dot\xi(t) = \int_{\xi(t)}^{1} c(\rho(t,x))\,dx - \int_{-1}^{\xi(t)} c(\rho(t,x))\,dx \tag{3.10a} \]
\[ \int_{\xi(0)}^{1} c(\rho_0(x))\,dx - \int_{-1}^{\xi(0)} c(\rho_0(x))\,dx = 0 \tag{3.10b} \]
for the $\rho$-driven evolution of the turning curve $\xi$. Formally, the case $\epsilon = 0^+$ corresponds to the standard Hughes relation between the density and the turning curve; $\epsilon > 0$ models a form of relaxation to the equilibrium given by this standard model. The primitive form of the Hughes model, where the position of the turning curve is determined by an instantaneous Hamilton-Jacobi equation, should be modified to fit this dynamics of the turning curve; this modeling issue will be discussed elsewhere.

Proposition 3.5. Let $\rho \in L^1((0, T) \times \mathbb{R})$.
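As a numerical illustration of the relaxed dynamics (3.10), one can discretize the corridor, locate $\xi(0)$ by bisection on the balance equation (3.10b), and advance (3.10a) with an explicit Euler step. This is only a sketch under assumptions of ours (the affine cost $c(\rho) = 1 + \rho$ used in the test, the uniform grid, the clipping of $\xi$ to the corridor, and all function names are illustrative choices, not from the paper); it also exhibits the a priori bound $|\dot\xi| \le 2\|c\|_\infty/\epsilon$ that underlies the proof of Proposition 3.5.

```python
import numpy as np

def turning_curve_relaxed(rho, c, eps, T, n_t=2000, n_x=400):
    """Explicit-Euler sketch for the relaxed turning-curve ODE (3.10a):
        -eps * xi'(t) = int_{xi}^{1} c(rho(t,x)) dx - int_{-1}^{xi} c(rho(t,x)) dx,
    with xi(0) chosen to balance the two cost integrals, as in (3.10b)."""
    x = np.linspace(-1.0, 1.0, n_x)
    dx = x[1] - x[0]
    dt = T / n_t

    def imbalance(t, a):
        # right-minus-left cost mass; equals eps * Psi(t, a) in the paper's notation
        cost = c(rho(t, x))
        return float(cost[x >= a].sum() * dx - cost[x < a].sum() * dx)

    # xi(0): root of the balance equation; since c > 0, the imbalance is
    # strictly decreasing in a, so bisection applies
    lo, hi = -1.0, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if imbalance(0.0, mid) > 0.0:
            lo = mid
        else:
            hi = mid
    xi = 0.5 * (lo + hi)

    traj = [xi]
    for k in range(n_t):
        xi -= dt * imbalance(k * dt, xi) / eps   # xi' = -Psi(t, xi)
        xi = min(1.0, max(-1.0, xi))             # keep xi inside the corridor
        traj.append(xi)
    return traj
```

Since $|\Psi(t, \xi)| \le 2\|c\|_\infty/\epsilon$, every Euler step moves $\xi$ by at most $2\|c\|_\infty\, \Delta t/\epsilon$, mirroring the $\rho$-independent Lipschitz constant in Proposition 3.5.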
Let $c$ verify the conditions (1.5). There exists a unique solution $\xi$ to the Cauchy problem (3.10). Furthermore, $\xi$ is Lipschitz and the Lipschitz constant is independent of $\rho$.

Proof. Let's denote:
\[ \Psi(t, a) := \frac{1}{\epsilon} \left[ \int_{a}^{1} c(\rho(t,x))\,dx - \int_{-1}^{a} c(\rho(t,x))\,dx \right]. \]
Notice that for any $a, b \in [-1, 1]$ and $t \in \mathbb{R}$,
\[ |\Psi(t, a) - \Psi(t, b)| \le \frac{1}{\epsilon} \left| \int_{a}^{b} 2\,c(\rho(t,x))\,dx \right| \le \frac{2\|c\|_\infty}{\epsilon}\, |a - b|. \tag{3.11} \]
We also have, for any $\xi$ such that $\|\xi\|_\infty \le 1$:
\[ |\Psi(t, \xi(t))| \le \frac{1}{\epsilon} \left| \int_{-1}^{1} \operatorname{sign}(x - \xi(t))\, c(\rho(t,x))\,dx \right| \le \frac{2\|c\|_\infty}{\epsilon} \]
So $\Psi$ is Lipschitz with respect to the variable $a$ and uniformly bounded with respect to the variable $t$. We apply the Cauchy-Lipschitz theorem and recover that there exists a unique local solution to the Cauchy problem (3.10). Using (3.11), we recover that the solution is global on $[0, T]$ and that $\xi$ is Lipschitz; moreover, the Lipschitz constant of $\xi$ does not depend on $\rho$.

Remark 3.6. From Proposition 3.5, it follows that the map $\widetilde I_\epsilon : L^1((0,T)\times\mathbb{R}, [0,1]) \to W^{1,\infty}((0,T))$ that sends any $\rho$ to the unique solution $\xi$ of (3.10) is well defined.

Proposition 3.7. Let $\rho_1, \rho_2 \in L^1((0, T) \times \mathbb{R})$. Let's denote $\xi_{1,2} := \widetilde I_\epsilon(\rho_{1,2})$. Then,
\[ \|\xi_1 - \xi_2\|_\infty \le \frac{\|c'\|_\infty}{\epsilon} \exp\left( \frac{2T\|c\|_\infty}{\epsilon} \right) \|\rho_1 - \rho_2\|_{L^1((0,T)\times(-1,1))} \tag{3.12} \]

Proof. We denote by $\xi_0$ the unique solution to (3.10b).
Then, for any $t \in [0, T]$:
\[ \xi_{1,2} = \xi_0 - \int_0^t \Psi_{1,2}(s, \xi_{1,2}(s))\,ds \]
Then, writing $\vee$, $\wedge$ for min, max, respectively, we make the following calculations:
\[ \begin{aligned} \xi_2(t) - \xi_1(t) &= \int_0^t \Psi_1(s, \xi_1(s)) - \Psi_2(s, \xi_2(s))\,ds \\ &= \frac{1}{\epsilon} \int_0^t \left[ \int_{-1}^{\xi_1(s)} c(\rho_1(s,x))\,dx - \int_{\xi_1(s)}^{1} c(\rho_1(s,x))\,dx - \int_{-1}^{\xi_2(s)} c(\rho_2(s,x))\,dx + \int_{\xi_2(s)}^{1} c(\rho_2(s,x))\,dx \right] ds \\ &= \frac{1}{\epsilon} \int_0^t \left[ \int_{-1}^{(\xi_1\vee\xi_2)(s)} c(\rho_1(s,x)) - c(\rho_2(s,x))\,dx \pm \int_{(\xi_1\vee\xi_2)(s)}^{(\xi_1\wedge\xi_2)(s)} c(\rho_1(s,x)) + c(\rho_2(s,x))\,dx + \int_{(\xi_1\wedge\xi_2)(s)}^{1} c(\rho_2(s,x)) - c(\rho_1(s,x))\,dx \right] ds \end{aligned} \]
And consequently,
\[ |\xi_1(t) - \xi_2(t)| \le \frac{1}{\epsilon} \int_0^t \int_{(\xi_1\vee\xi_2)(s)}^{(\xi_1\wedge\xi_2)(s)} c(\rho_1(s,x)) + c(\rho_2(s,x))\,dx\,ds + \frac{1}{\epsilon} \int_0^t \int_{-1}^{1} |c(\rho_1(s,x)) - c(\rho_2(s,x))|\,dx\,ds =: J_1 + J_2. \]
For the term $J_2$ we can use the Lagrange inequality, denoting $\|c'\|_\infty := \sup_{p\in[0,1]} |c'(p)|$. We get:
\[ J_2 \le \frac{\|c'\|_\infty}{\epsilon} \|\rho_1 - \rho_2\|_{L^1((0,T)\times(-1,1))}. \]
For the term $J_1$, notice that, thanks to the cost conditions (1.5), for any $s \in [0, t]$,
\[ 2\,|\xi_1(s) - \xi_2(s)| \le \int_{(\xi_1\vee\xi_2)(s)}^{(\xi_1\wedge\xi_2)(s)} c(\rho_1(s,x)) + c(\rho_2(s,x))\,dx \le 2\|c\|_\infty\, |\xi_1(s) - \xi_2(s)| \]
Consequently, for any $s \in [0, T]$, there exists $\beta(s) \in [2,\, 2\|c\|_\infty]$ such that
\[ \int_{(\xi_1\vee\xi_2)(s)}^{(\xi_1\wedge\xi_2)(s)} c(\rho_1(s,x)) + c(\rho_2(s,x))\,dx = \beta(s)\,|\xi_1(s) - \xi_2(s)|. \]
Then $\beta \in L^\infty((0, T)) \subset L^1((0, T))$.
We are now in a position to use Gronwall's inequality with integrable coefficients. That inequality still holds without the continuity of $\beta$ if we use the Lebesgue differentiation theorem. We thus reach
\[ |\xi_1(t) - \xi_2(t)| \le \int_0^t \frac{\beta(s)}{\epsilon}\, |\xi_1(s) - \xi_2(s)|\,ds + \frac{\|c'\|_\infty}{\epsilon} \|\rho_1 - \rho_2\|_{L^1} \]
which yields the subsequent estimates
\[ |\xi_1(t) - \xi_2(t)| \le \frac{\|c'\|_\infty}{\epsilon} \|\rho_1 - \rho_2\|_{L^1} \exp\left( \int_0^t \frac{\beta(s)}{\epsilon}\,ds \right), \qquad \|\xi_1 - \xi_2\|_\infty \le \frac{\|c'\|_\infty}{\epsilon} \exp\left( \frac{2T\|c\|_\infty}{\epsilon} \right) \|\rho_1 - \rho_2\|_{L^1} \]

Remark 3.8. One can check that, in the relaxed equilibrium setting, we never used any property of $\rho$ apart from the universal bounds $0 \le \rho \le 1$. Consequently, in this case we also use:
\[ B_2 = \left\{ \rho \in B_{L^1}(0,\, T\|\rho_0\|_{L^1}) \ \text{s.t.}\ 0 \le \rho \le 1 \right\} \tag{3.9} \]
Here is the final result in this relaxed equilibrium setting:

Proposition 3.9. Let $\rho_0$ satisfy (1.3). Let $c$ verify (1.5). Then problem (1.6a)-(1.6b)-(3.10) admits at least one solution.

Proof. We only have to apply Corollary 1.9 with $B_2$ as the set $B$ and check that, using Propositions 3.5 and 3.7, all the assumptions on $\widetilde I_\epsilon$ are satisfied.

4. Hughes' model with constrained evacuation at exit. In this section, we illustrate the robustness of our approach by modifying the Hughes model at the level of the boundary conditions for the density, allowing for the realistic feature of capacity drop (see [8, 7] and references therein). We consider the following dynamics for $\rho$, introduced in [8] on the basis of the theory of [11, 3]:
\[ \rho_t + [\operatorname{sign}(x - \xi(t))\, f(\rho)]_x = 0 \tag{4.1a} \]
\[ f(\rho(t, 1)) \le g\left( \int_{\sigma}^{1} w_1(x)\,\rho(t,x)\,dx \right) \tag{4.1b} \]
\[ f(\rho(t, -1)) \le g\left( \int_{-1}^{-\sigma} w_{-1}(x)\,\rho(t,x)\,dx \right) \tag{4.1c} \]
\[ \rho(0, \cdot) = \rho_0(\cdot) \tag{4.1d} \]
The equations (4.1b)-(4.1c) prescribe the behaviour at the exits situated at $x = \pm 1$; as in the previous sections, we set up the conservation law for $\rho$ in the whole space, but the initial condition (1.3) is confined to the domain of interest $(-1, 1)$. The flux $f(\rho)$ of pedestrians going through the exits is limited by the respective constraints (we take a common nonlinearity $g$ for the sake of conciseness, but it is straightforward to extend the setting distinguishing $g_1$ and $g_{-1}$). This flux limiter $g$ depends non-locally on $\rho(t, \cdot)$ and on a weight $w$ supported in a vicinity of length $1 - \sigma$ around the exits. This type of constraint models the well-known phenomenon of capacity drop which, in extreme situations, corresponds to a panic behaviour at the exits located at $x = \pm 1$, as discussed in [8] and [7].
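In practice, the nonlocal quantity $g\big(\int_\sigma^1 w_1(x)\rho(t,x)\,dx\big)$ entering (4.1b) is easy to evaluate on a grid. The sketch below is purely illustrative: the uniform weight, the affine limiter $g$, and the congested density profile are our own example choices satisfying the stated assumptions, not data from the paper.

```python
import numpy as np

def exit_constraint_Q1(rho, x, sigma, w1, g):
    """Evaluate Q1 = g( int_sigma^1 w1(x) rho(x) dx ) by a simple quadrature.

    rho : density samples on the uniform grid x (values in [0, 1])
    w1  : weight supported on (sigma, 1], normalized so its integral is 1
    g   : non-increasing flux limiter
    """
    mask = x > sigma                      # the weight is supported on (sigma, 1]
    dx = x[1] - x[0]                      # uniform grid assumed
    integral = np.sum(w1(x[mask]) * rho[mask]) * dx
    return g(integral)

# Hypothetical example choices (ours, not the paper's):
sigma = 0.5
x = np.linspace(-1.0, 1.0, 2001)
rho = np.full_like(x, 0.8)                # congested density near the exit
w1 = lambda y: np.where(y > sigma, 1.0 / (1.0 - sigma), 0.0)  # uniform weight
f_max = 0.25                              # f(rho_bar) for the flux f(r) = r(1 - r)
g = lambda s: f_max * (1.0 - 0.5 * s)     # non-increasing, valued in (0, f(rho_bar)]

Q1 = exit_constraint_Q1(rho, x, sigma, w1, g)
```

With these choices the weighted average of $\rho$ near the exit is $0.8$, so the admissible exit flux is capped at $g(0.8) = 0.15$, strictly below the unconstrained maximum $f(\bar\rho) = 0.25$.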
This model, allowing one to consider constrained evacuation at the exits, is phenomenologically more relevant than the model with open-end condition considered above (and it includes the previous model, for the trivial choice $g \equiv \max_{[0,1]} f$, see Remark 4.3). As an example, this constrained evacuation model is able to reproduce the "Faster is Slower" effect at exits (see [7]). In the following, we will use the results of [7] and adapt them to our framework, with the notations proposed in that paper.

B. ANDREIANOV, T. GIRARD

Since $f$ is concave and positive with $f(0) = f(1)$, there exists $\bar\rho \in [0, 1]$ such that $f'(\rho)(\bar\rho - \rho) > 0$ for a.e. $\rho \in [0, 1]$. We fix $\sigma \in (0, 1)$. This is the threshold of influence on the exit, meaning that the pedestrians located before $x = \sigma$ have no influence on the exit congestion at $x = 1$. Let us take the strongest assumptions used in [8, 7]:
\[
(4.2)\qquad
\begin{cases}
w_1 \in W^{1,\infty}((\sigma, 1], \mathbb{R}_+) \text{ s.t. } \int_\sigma^1 w_1 = 1,\\[2pt]
w_{-1} \in W^{1,\infty}([-1, -\sigma), \mathbb{R}_+) \text{ s.t. } \int_{-1}^{-\sigma} w_{-1} = 1,
\end{cases}
\]
\[
(4.3)\qquad g \in W^{1,\infty}(\mathbb{R}_+, (0, f(\bar\rho)]) \text{ is non-increasing.}
\]
We can now introduce the notion of solution we will use for $\rho$, combining the one in [11] and Definition 1.1:

Definition 4.1. Let $\xi \in W^{1,\infty}((0,T),(-1,1))$. Let $\rho_0 \in L^1(\mathbb{R}, [0,1])$ be supported in $[-1,1]$. Let $f$ be a concave positive flux such that $f(0) = 0 = f(1)$, and set $F(t,x,\rho) := \operatorname{sign}(x - \xi(t))\, f(\rho)$. Let $g$, $\omega_{-1}$ and $\omega_1$ satisfy (4.2)-(4.3). We say that $\rho \in L^1((0,T)\times\mathbb{R})$ is an admissible solution to (4.1) if: for all $\varphi \in C^\infty_c((0,T)\times\mathbb{R})$,
\[
(4.4)\qquad \iint_{(0,T)\times\mathbb{R}} \rho\,\varphi_t + F(t,x,\rho)\,\varphi_x \,dt\,dx = 0;
\]
moreover, setting
\[
(4.5)\qquad Q_{-1}(t) := g\Big(\int_{-1}^{-\sigma} w_{-1}(x)\,\rho(t,x)\,dx\Big), \qquad Q_1(t) := g\Big(\int_\sigma^1 w_1(x)\,\rho(t,x)\,dx\Big),
\]
there holds: for all positive $\varphi \in C^\infty_c(\{x > \xi(t)\})$ and all $k \in \mathbb{R}$,
\[
(4.6)\qquad -\iint_{(0,T)\times\mathbb{R}} |\rho - k|\,\varphi_t + q(\rho,k)\,\varphi_x \,dt\,dx - 2\int_0^T \Big(1 - \frac{Q_1(t)}{f(\bar\rho)}\Big)\, f(k)\,\varphi(t,1)\,dt - \int_{\mathbb{R}} |\rho_0 - k|\,\varphi(0,x)\,dx \le 0;
\]
for all positive $\varphi \in C^\infty_c(\{x < \xi(t)\})$ and all $k \in \mathbb{R}$,
\[
(4.7)\qquad -\iint_{(0,T)\times\mathbb{R}} |\rho - k|\,\varphi_t + q(\rho,k)\,\varphi_x \,dt\,dx - 2\int_0^T \Big(1 - \frac{Q_{-1}(t)}{f(\bar\rho)}\Big)\,(-f(k))\,\varphi(t,-1)\,dt - \int_{\mathbb{R}} |\rho_0 - k|\,\varphi(0,x)\,dx \le 0;
\]
and for all positive $\varphi \in C^\infty$ supported on $[a,b]$ with $a < -1$ and $1 < b$, we have:
\[
(4.8a)\qquad \int_0^T \int_a^{-1} \rho\,\varphi_t + F(t,x,\rho)\,\varphi_x \,dt\,dx \le \int_0^T Q_{-1}(t)\,\varphi(t,-1)\,dt,
\]
\[
(4.8b)\qquad \int_0^T \int_1^b \rho\,\varphi_t + F(t,x,\rho)\,\varphi_x \,dt\,dx \le \int_0^T Q_1(t)\,\varphi(t,1)\,dt.
\]

Remark 4.2. As detailed in [3], the equations (4.8) combined with the weak solution property (4.4) imply that, for a.e. $t \ge 0$, $f(\gamma^1_{L,R}\rho(t)) \le Q_1(t)$ and $-f(\gamma^{-1}_{L,R}\rho(t)) \ge -Q_{-1}(t)$. This corresponds to the expected limited flux condition.

Remark 4.3. One can notice that if $g(t) = f(\bar\rho)$ for all $t \ge 0$, then the flux is not limited at the exits and $1 - \frac{Q_1(t)}{f(\bar\rho)} = 1 - \frac{Q_{-1}(t)}{f(\bar\rho)} = 0$. Then this definition is exactly Definition 1.1.

We have the following results:

Proposition 4.4. Let $\rho_0$ verify (1.3). Let $\xi \in W^{1,\infty}((0,T),(-1,1))$. There exists a solution to (4.1) in the sense of Definition 4.1.

The proof of Proposition 4.4 is postponed to the Appendix. It is obtained via a convergent finite volume scheme; the details of the scheme and the proof of convergence can be found there.
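To give a concrete flavour of such a scheme, here is a toy finite-volume update: a Godunov-type demand/supply flux with the outflow at $x = 1$ capped by a constant constraint, restricted to the subdomain to the right of the turning curve. Every concrete choice below (flux, constraint value, grid) is ours rather than the paper's, and the moving-interface coupling is omitted; the actual scheme is the one presented in the Appendix.

```python
import numpy as np

# Concave flux with maximum at rho_bar = 1/2 (illustrative choice)
f = lambda r: r * (1.0 - r)
rho_bar = 0.5
demand = lambda r: f(np.minimum(r, rho_bar))   # nondecreasing part of f
supply = lambda r: f(np.maximum(r, rho_bar))   # nonincreasing part of f

def godunov_step(rho, dt, dx, Q1):
    """One Godunov step on (xi, 1): zero inflow at the left end of the
    subdomain, exit flux at x = 1 capped by the constraint Q1."""
    # interior interface fluxes: min(demand(left cell), supply(right cell))
    flux = np.minimum(demand(rho[:-1]), supply(rho[1:]))
    F = np.concatenate(([0.0], flux, [min(demand(rho[-1]), Q1)]))
    return rho - dt / dx * (F[1:] - F[:-1])

J = 100
dx = 1.0 / J
dt = 0.4 * dx                              # CFL-type restriction, sup|f'| = 1
rho = np.full(J, 0.9)                      # congested initial state
for _ in range(50):
    rho = godunov_step(rho, dt, dx, Q1=0.1)  # constrained exit, Q1 < f(rho_bar)
```

Because the exit flux is the minimum of the cell demand and the constraint, the scheme enforces $f(\rho(t,1)) \le Q_1$ discretely, while the monotone Godunov flux keeps the density within $[0,1]$.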
Using the results from [11], [7], [8] and a partitioning argument, we prove a corollary of Theorem 1.8:

Corollary 4.5. Let $\rho_0$ verify (1.3). Let $\xi \in W^{1,\infty}((0,T),(-1,1))$. There exists at most one solution $\rho$ of (4.1) in the sense of Definition 4.1. Using Proposition 4.4, the solver operator
\[
S_g : \big(W^{1,\infty}((0,T),(-1,1)),\, \|\cdot\|_\infty\big) \longrightarrow \big(L^1((0,T)\times(-1,1)),\, \|\cdot\|_{L^1}\big),
\]
that maps any $\xi$ to the unique solution $\rho$ to (4.1), is well defined and continuous.

Proof of Corollary 4.5. We make use of the classical embedding of $W^{1,\infty}([0,T],(-1,1))$ into $C^0([0,T],(-1,1))$: there exists $K$, a closed segment of $(-1,1)$, such that $\xi \in C^0([0,T],K)$. We consider $(\varphi_i)_{i\in\{-1,0,1\}}$ a partition of unity of an open set containing $[-1,1]$ such that all the supports are segments and
\[
1 \in \operatorname{supp}(\varphi_1), \quad -1 \in \operatorname{supp}(\varphi_{-1}), \quad K \subset \operatorname{supp}(\varphi_0) \subset (-1,1), \quad \big[\operatorname{supp}(\varphi_{-1}) \cup \operatorname{supp}(\varphi_1)\big] \cap K = \emptyset.
\]
Let $\rho$, $\hat\rho$ be two solutions in the sense of Definition 4.1. We denote by $\hat Q_{1,-1}$ the constraints associated with $\hat\rho$. Let $\Psi \in C^\infty_c((0,T)\times\mathbb{R})$. We use the classic Kruzhkov doubling of variables (cf. [14]) in the open subdomains of $(0,T)\times\mathbb{R}$ situated between $x = -\infty$ and $x = -1$, $x = -1$ and $x = \xi(t)$, $x = \xi(t)$ and $x = 1$, and finally between $x = 1$ and $x = +\infty$. Then, by a limiting procedure analogous to the one employed in the proof of Theorem 2.1, we obtain the Kato inequality carrying singular terms concentrated on the three curves $\{x = \xi(t)\}$, $\{x = 1\}$ and $\{x = -1\}$:
\[
-\iint_{(0,T)\times(-1,1)} |\rho - \hat\rho|\,\Psi_t + q(\rho,\hat\rho)\,\Psi_x \le \int_0^T \Psi(t,\xi(t))\,(\varphi_0 + \varphi_{-1} + \varphi_1)(t,\xi(t))\, \big[ q^0_R(\gamma_R\rho, \gamma_R\hat\rho) - q^0_L(\gamma_L\rho, \gamma_L\hat\rho) \big] \quad (4.9a)
\]
\[
+ \int_0^T \Psi(t,1)\,\varphi_1(t,1)\, \big[ q^1(\gamma_R\rho,\gamma_R\hat\rho) - q^1(\gamma_L\rho,\gamma_L\hat\rho) \big] \quad (4.9b)
\]
\[
+ \int_0^T \Psi(t,-1)\,\varphi_{-1}(t,-1)\, \big[ q^{-1}(\gamma_R\rho,\gamma_R\hat\rho) - q^{-1}(\gamma_L\rho,\gamma_L\hat\rho) \big], \quad (4.9c)
\]
where the left and right traces are taken along their respective curves, and
\[
q^0_{L,R}(\rho,\hat\rho) := \operatorname{sign}(\rho - \hat\rho)\big[ f_{L,R}(\rho) - f_{L,R}(\hat\rho) - \dot\xi(t)(\rho - \hat\rho) \big], \quad
q^1(\rho,\hat\rho) := \operatorname{sign}(\rho - \hat\rho)\big[f_R(\rho) - f_R(\hat\rho)\big], \quad
q^{-1}(\rho,\hat\rho) := \operatorname{sign}(\rho - \hat\rho)\big[f_L(\rho) - f_L(\hat\rho)\big].
\]
Referring to the proof of Theorem 2.1, the integral (4.9a) is zero. Using the same argument as in the proof of Proposition 2.10 in [3], we get:
\[
(4.9b) \le 2\int_0^T \Psi(t,1)\,\big|Q_1(t) - \hat Q_1(t)\big|\,dt, \qquad
(4.9c) \le 2\int_0^T \Psi(t,-1)\,\big|Q_{-1}(t) - \hat Q_{-1}(t)\big|\,dt.
\]
As in the proof of Theorem 2.1, we integrate (4.9) along a trapezoid $\mathcal{T}^{0,t}_{a,b}$. Then we use the definition of $Q_{\pm1}$, $\hat Q_{\pm1}$, with $L_g$ the Lipschitz constant of $g$, to get the following inequality:
\[
\|\rho(t,\cdot) - \hat\rho(t,\cdot)\|_{L^1((a,b))} \le \|\rho_0 - \hat\rho_0\|_{L^1((a - L_f t,\, b + L_f t))} + 2\int_0^t \int_{-1}^1 L_g \big( \mathbf{1}_{(-1,-\sigma)}\,\omega_{-1} + \mathbf{1}_{(\sigma,1)}\,\omega_1 \big)\, |\rho - \hat\rho| \,dx\,ds.
\]
Eventually, using Hölder's inequality and Gronwall's lemma, we get:
\[
(4.10)\qquad \|\rho(t,\cdot) - \hat\rho(t,\cdot)\|_{L^1((a,b))} \le \|\rho_0 - \hat\rho_0\|_{L^1((a - L_f t,\, b + L_f t))}\; e^{Ct}, \quad \text{where } C := 2 L_g \big\| \mathbf{1}_{(-1,-\sigma)}\,\omega_{-1} + \mathbf{1}_{(\sigma,1)}\,\omega_1 \big\|_\infty.
\]
Consequently, there is at most one solution in the sense of Definition 4.1 associated to a fixed turning curve $\xi$ and an initial datum $\rho_0$.

In order to recover the continuity of the operator $S_g$, we proceed in the same way as in the proof of Proposition 1.8. We first cover any compact set contained in $\{\xi(t) < x < 1\}$ by trapezoids. Without loss of generality, we can suppose those trapezoids are at distance at least $\epsilon$ from both interfaces $\{x = \xi(t)\}$ and $\{x = 1\}$. Consequently, on any trapezoid, for all $n \ge n_0$, $\rho_n$ is a Kruzhkov entropy solution.
We recover compactness thanks to the averaging compactness lemma. This reasoning can be reproduced in the three other parts of the domain: $\{x < -1\}$, $\{-1 < x < \xi(t)\}$ and $\{x > 1\}$. Then we can pass to the limit via dominated convergence in equation (4.4) and in all the inequalities (4.6)-(4.7)-(4.8). We conclude with the same classical arguments as in the proof of Proposition 1.8. That ends the proof of Corollary 4.5.
We are ready to state the main result of this section, which is an analog of Theorem 1.9.

Theorem 4.6. Let $\rho_0$ verify (1.3). Assume that $f$ verifies (1.13). Let $g$ (resp. $\omega_{1,-1}$) satisfy (4.3) (resp. (4.2)). Let $B$ be a convex closed bounded subset of $L^1((0,T)\times\mathbb{R})$ and let
\[
I : \big(B,\, \|\cdot\|_{L^1((0,T)\times\mathbb{R})}\big) \longrightarrow \big(C^0([0,T],\mathbb{R}),\, \|\cdot\|_\infty\big)
\]
be a continuous operator such that $\forall \rho \in B$, $\forall t \in [0,T]$, $I[\rho](t) \in (-1,1)$. If there exists $r > 0$ such that (1.14a)-(1.14b) hold, then there exists $(\rho, \xi)$ a solution to the problem (4.1)-(1.6b)-(1.6c). Here $\rho$ is a solution in the sense of Definition 4.1. In particular, existence is verified for $I = I_0$ (for an affine cost) or with $I = I_\delta$ or $\widetilde{I}_\epsilon$ (for a general cost verifying (1.5)).
Appendix A. Convergence of the finite volume scheme in the constrained case.

In order to prove existence of a solution to (4.1) in the sense of Definition 4.1, we construct a converging finite volume scheme adapted around the fixed turning curve $\xi$. At the exits, we use an operator splitting method with a scheme for the constraints $Q_1$ and $Q_{-1}$ as in [7]. We now present the scheme used in this setting. Let $T, J \in \mathbb{N}$ be such that

(CFL)  $2 \left( \|f'\|_\infty + \|\dot{\xi}\|_\infty \right) \dfrac{J}{T} \le 1.$

We construct the following scheme:

$\Delta t = \dfrac{1}{T}, \qquad t^n := n \Delta t,$  (A.1a)
$\Delta x = \dfrac{1}{J}, \qquad x_j := j \Delta x,$  (A.1b)
$s^n := \dfrac{1}{\Delta t} \int_{t^n}^{t^{n+1}} \dot{\xi}(s)\, ds, \qquad s_\Delta(t) := \sum_{n=1}^{N} \mathbf{1}_{[t^n, t^{n+1})}(t)\, s^n,$  (A.1c)
$\xi_\Delta(t) := \xi(0) + \int_0^t s_\Delta(s)\, ds, \qquad \xi^n := \xi_\Delta(t^n).$  (A.1d)

The discretization (A.1c)-(A.1d) of the $\xi$ interface is detailed in [22], Section 3.1, where it is required to construct the adapted mesh. For any $n$, we denote by $j_n$ the unique element of $\llbracket -J, J \rrbracket$ such that $\xi^n \in [x_{j_n}, x_{j_n+1})$.
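The interface discretization (A.1c)-(A.1d) averages the interface speed over each time step and then integrates the resulting piecewise-constant speed back from $\xi(0)$; by construction, the reconstructed position matches $\xi$ at the grid times. The following is a minimal Python sketch under the assumption of a smooth, explicitly given $\xi$ (the function `interface_discretization` and the example curve are illustrative, not taken from the paper):

```python
import math

def interface_discretization(xi, n_steps, t_final=1.0):
    """Discretize a moving interface as in (A.1c)-(A.1d):
    s^n is the exact average of xi' over [t^n, t^{n+1}] (computed here
    via the fundamental theorem of calculus as a difference quotient),
    and xi_delta integrates the piecewise-constant speed from xi(0)."""
    dt = t_final / n_steps
    t = [n * dt for n in range(n_steps + 1)]
    # s^n = (1/dt) * (xi(t^{n+1}) - xi(t^n))
    s = [(xi(t[n + 1]) - xi(t[n])) / dt for n in range(n_steps)]
    # xi^n = xi(0) + sum_{m<n} s^m * dt  (the sum telescopes to xi(t^n))
    xi_delta = [xi(0.0)]
    for sn in s:
        xi_delta.append(xi_delta[-1] + sn * dt)
    return t, s, xi_delta
```

Since $s^n$ is the exact mean speed, the sum telescopes and $\xi_\Delta(t^n) = \xi(t^n)$ at every grid time; between grid times, $\xi_\Delta$ is only piecewise linear.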
We construct the following mesh:

$\chi_j^n := \begin{cases} x_j & \text{if } j \le j_n - 1 \\ y^n & \text{if } j = j_n \\ x_j & \text{if } j \ge j_n + 1 \end{cases}$

$P_{j+1/2}^n := \begin{cases} (\chi_j^n, \chi_{j+1}^n) \times (t^n, t^{n+1}) & \text{if } j \le j_n - 2 \\ \text{the trapezoid } \chi_{j_n-1}^n\, \chi_{j_n-1}^{n+1}\, \chi_{j_n+1}^{n+1}\, \chi_{j_n}^n & \text{if } j = j_n - 1 \\ \text{the trapezoid } \chi_{j_n}^n\, \chi_{j_n+1}^{n+1}\, \chi_{j_n+2}^{n+1}\, \chi_{j_n+2}^n & \text{if } j = j_n \\ (\chi_{j+1}^n, \chi_{j+2}^n) \times (t^n, t^{n+1}) & \text{if } j \ge j_n + 1 \end{cases}$  (A.1e)

AN EXISTENCE RESULT FOR HUGHES' MODEL 19

Notice that, thanks to the (CFL) condition, $x_{j_n-1} < \xi^{n+1} < x_{j_n+2}$, so the trapezoids defined above are never reduced to a triangle. We denote by $\underline{P}_{j+1/2}^n$ (resp. $\overline{P}_{j+1/2}^n$) the bottom (resp. top) segment of the trapezoid $P_{j+1/2}^n$. However, now that the mesh is modified, we have two different partitions of the line $t = t^{n+1}$: $(\underline{P}_{j+1/2}^{n+1})_{j\in\mathbb{Z}}$ and $(\overline{P}_{j+1/2}^{n})_{j\in\mathbb{Z}}$. We define $(\bar{\rho}_{i+1/2}^{n+1})_{i\in\mathbb{Z}}$ corresponding to the values of $\rho^{n+1}$ on $(\overline{P}_{i+1/2}^{n})_{i\in\mathbb{Z}}$ and $(\rho_{j+1/2}^{n+1})_{j\in\mathbb{Z}}$ the projection of these values on $(\underline{P}_{j+1/2}^{n+1})_{j\in\mathbb{Z}}$.
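The projection between the two partitions of the line $t = t^{n+1}$ amounts to an overlap-weighted average of cell values, which in particular conserves mass. Here is a minimal Python sketch of such a conservative projection between two 1D meshes (the function name and the example meshes are illustrative, not from the paper):

```python
def project_cell_averages(old_edges, old_vals, new_edges):
    """Conservative projection of piecewise-constant cell averages from
    one 1D mesh onto another: the new value on each cell is the
    overlap-weighted average of the old values, so total mass is kept."""
    new_vals = []
    for j in range(len(new_edges) - 1):
        a, b = new_edges[j], new_edges[j + 1]
        acc = 0.0
        for i in range(len(old_edges) - 1):
            # length of the intersection of the new cell with old cell i
            lo = max(a, old_edges[i])
            hi = min(b, old_edges[i + 1])
            if hi > lo:
                acc += (hi - lo) * old_vals[i]
        new_vals.append(acc / (b - a))
    return new_vals
```

When both meshes cover the same interval, the integral of the piecewise-constant field is unchanged by this projection, mirroring the conservativity of the remeshing step.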
$\bar{\rho}_{j+1/2}^{n+1} = \dfrac{\rho_{j+1/2}^{n} \left| \underline{P}_{j+1/2}^n \right| - \Delta t \left( f_{j+1}^n - f_j^n \right)}{\left| \overline{P}_{j+1/2}^n \right|}$  (A.1f)

$\rho_{j+1/2}^{n+1} := \dfrac{1}{\left| \underline{P}_{j+1/2}^{n+1} \right|} \sum_{i\in\mathbb{Z}} \left| \underline{P}_{j+1/2}^{n+1} \cap \overline{P}_{i+1/2}^{n} \right| \, \bar{\rho}_{i+1/2}^{n+1}$  (A.1g)

$\rho_\Delta(t,x) := \sum_{n=0}^{N} \sum_{\substack{j \in \mathbb{Z} \\ j \ne j_n \pm 1}} \rho_{j+1/2}^n \, \mathbf{1}_{P_{j+1/2}^n}(t,x)$  (A.1h)

We now want to define the numerical fluxes $(f_j^n)_{j\in\mathbb{Z}}$ corresponding to the left and right edges of the trapezoids. It is worth noticing that we skipped $f_{j_n+1}^n$ when we constructed the mesh. We first define the non-local constraint approximation:

$\rho_{\Delta x}^n(\cdot) = \sum_{j\in\mathbb{Z}} \rho_{j+1/2}^n \, \mathbf{1}_{[\chi_j^n, \chi_{j+1}^n)}(\cdot)$  (A.1i)
$q_1^n := g_1\left( \int_\sigma^1 \rho_{\Delta x}^n(x)\, \omega_1(x)\, dx \right)$  (A.1j)
$q_{-1}^n := g_{-1}\left( \int_{-1}^{-\sigma} \rho_{\Delta x}^n(x)\, \omega_{-1}(x)\, dx \right)$  (A.1k)

$F(\rho_{j-1/2}^n, \rho_{j+1/2}^n) = \begin{cases} \min\left( \mathrm{God}_f(\rho_{j-1/2}^n, \rho_{j+1/2}^n),\, q_1^n \right) & \text{if } j - 1 = J \\ \max\left( \mathrm{God}_{-f}(\rho_{j-1/2}^n, \rho_{j+1/2}^n),\, -q_{-1}^n \right) & \text{if } j = -J \\ F_{\mathrm{int}}^n(\rho_{j-1/2}^n, \rho_{j+1/2}^n) & \text{if } j = j_n \\ \mathrm{God}_f(\rho_{j-1/2}^n, \rho_{j+1/2}^n) & \text{if } j > j_n \text{ and } j - 1 \ne J \\ \mathrm{God}_{-f}(\rho_{j-1/2}^n, \rho_{j+1/2}^n) & \text{if } j < j_n \text{ and } j \ne -J. \end{cases}$  (A.1l)

Eventually, we define $F_{\mathrm{int}}^n$ as in [6] (see details in Subsections 2.5, 3.3 and 5.1):

$f_{L,R}^n(\rho) := \pm f(\rho) - s^n \rho$
$\forall (\rho_L, \rho_R) \in [0,1]^2,\ \exists\, k \in [0,1] \text{ s.t. } \mathrm{God}_{f_L^n}(\rho_L, k) = \mathrm{God}_{f_R^n}(k, \rho_R)$
$F_{\mathrm{int}}^n(\rho_{j-1/2}^n, \rho_{j+1/2}^n) := \mathrm{God}_{f_L^n}(\rho_{j-1/2}^n, k) = \mathrm{God}_{f_R^n}(k, \rho_{j+1/2}^n)$  (A.1m)

Numerical simulations for this scheme can be found in [6, Sect. 5.1] for the case of open-end conditions at the exits.

We are now in a position to start the proof of convergence, which merely assembles, with the help of the partition-of-unity technique of [22, 6], the arguments from [6] (for the inner interface situated at $x = \xi(t)$) and [7] (for the constraints set at $x = \pm 1$).

20 B. ANDREIANOV, T. GIRARD

Proof of Proposition 4.4. The proof follows the general idea of [22, Sect. 4], see also [6]. Since the interfaces $\{x = -1\}$, $\{x = \xi(t)\}$ and $\{x = 1\}$ are non-intersecting, we isolate them in the supports of a partition of unity $\varphi_{-1}$, $\varphi_0$ and $\varphi_1$. We fix a test function $\varphi$. Taking (the discretization of) the test function $\varphi_0 \varphi$, we can use the specific result for the Hughes' model treated in [6, Sect. 5.1] to recover the approximate entropy inequalities satisfied by the discrete solution, with the test function $\varphi_0 \varphi$. For the test functions $\varphi_{-1}\varphi$ and $\varphi_1\varphi$, we use in the same way the result of [7, Prop. 3.1]. Summing up the contributions of the three parts of the partition of unity, we obtain the approximate entropy inequality for the discrete solution, with an arbitrary test function $\varphi$. In addition, the integral weak formulation for the approximate solution follows from the scheme's conservativity. We use the same compactness argument as in [22, Sect. 3.4]. We can pass to the limit in the approximate weak formulation and in the approximate entropy inequalities, for the chosen converging subsequence and an arbitrary test function. This allows us to characterize the limit as an entropy solution, in the sense of Definition 4.1, of the problem at hand. Finally, thanks to the uniqueness proven in Theorem 4.5, the whole sequence of discrete solutions converges to the unique solution in the sense of Definition 4.1.

Acknowledgments. This paper has been supported by the RUDN University Strategic Academic Leadership Program.
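Returning to the constrained exit fluxes of (A.1l): at each exit, the Godunov flux is simply capped by the non-local constraint level $q^n_{\pm 1}$. The following is a minimal Python sketch, assuming the classical bell-shaped flux $f(\rho) = \rho(1-\rho)$ with maximum at $\sigma = 1/2$ (this particular $f$ and the function names are illustrative, not the paper's):

```python
def godunov_flux(f, rho_l, rho_r, sigma=0.5):
    """Godunov flux for a flux f that is concave with maximum at sigma:
    the min of f over [rho_l, rho_r] when rho_l <= rho_r, and the max
    over [rho_r, rho_l] otherwise."""
    if rho_l <= rho_r:
        # min of a concave function on an interval is at an endpoint
        return min(f(rho_l), f(rho_r))
    if rho_r <= sigma <= rho_l:
        # the maximum point sigma lies inside the interval
        return f(sigma)
    return max(f(rho_l), f(rho_r))

def constrained_flux(f, rho_l, rho_r, q):
    """Exit flux in the spirit of (A.1l): the Godunov flux capped by
    the non-local constraint level q."""
    return min(godunov_flux(f, rho_l, rho_r), q)
```

For instance, with $\rho_L = \rho_R = 0.2$ the unconstrained Godunov flux is $f(0.2) = 0.16$, and a constraint level $q = 0.1$ reduces the exit flux to $0.1$, which is how the scheme models capacity drop at the exits.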
REFERENCES

[1] D. Amadori and M. Di Francesco, The one-dimensional Hughes model for pedestrian flow: Riemann-type solutions, Acta Math. Sci. Ser. B Engl. Ed., 32 (2012), pp. 259–280.
[2] D. Amadori, P. Goatin, and M. D. Rosini, Existence results for Hughes' model for pedestrian flows, J. Math. Anal. Appl., 420 (2014), pp. 387–406.
[3] B. Andreianov, P. Goatin, and N. Seguin, Finite volume schemes for locally constrained conservation laws, Numer. Math. (Heidelb.), 115 (2010), pp. 609–645.
[4] B. Andreianov, K. H. Karlsen, and N. H. Risebro, A theory of L1-dissipative solvers for scalar conservation laws with discontinuous flux, Arch. Ration. Mech. Anal., 201 (2011), pp. 27–86.
[5] B. Andreianov, M. D. Rosini, and G. Stivaletta, On existence, stability and many-particle approximation of solutions of 1D Hughes model with linear costs. Working paper or preprint, July 2021.
[6] B. Andreianov and A. Sylla, Finite volume approximation and well-posedness of conservation laws with moving interfaces under abstract coupling conditions. Submitted, 2022.
[7] B. P. Andreianov, C. Donadello, U. Razafison, and M. D. Rosini, Qualitative behaviour and numerical approximation of solutions to conservation laws with non-local point constraints on the flux and modeling of crowd dynamics at the bottlenecks, Mathematical Modelling and Numerical Analysis, 50 (2015), pp. 1269–1287.
[8] B. P. Andreianov, C. Donadello, and M. D. Rosini, Crowd dynamics and conservation laws with nonlocal constraints and capacity drop, Mathematical Models and Methods in Applied Sciences, 24 (2014), pp. 2685–2722.
[9] C. Cancès and T. Gallouët, On the time continuity of entropy solutions, J. Evol. Equ., 11 (2011), pp. 43–55.
[10] J. A. Carrillo, S. Martin, and M.-T. Wolfram, An improved version of the Hughes model for pedestrian flow, Mathematical Models and Methods in Applied Sciences, 26 (2016), pp. 671–697.
[11] R. M. Colombo and P. Goatin, A well posed conservation law with a variable unilateral constraint, J. Differ. Equ., 234 (2007), pp. 654–675.
[12] M. Di Francesco, P. A. Markowich, J.-F. Pietschmann, and M.-T. Wolfram, On the Hughes' model for pedestrian flow: The one-dimensional case, J. Differ. Equ., 250 (2011), pp. 1334–1362.
[13] N. El-Khatib, P. Goatin, and M. D. Rosini, On entropy weak solutions of Hughes model for pedestrian motion, Zeitschrift für angewandte Mathematik und Physik, 64 (2013), pp. 223–251.
[14] L. C. Evans, Partial Differential Equations, Graduate Studies in Mathematics, American Mathematical Society, Providence, RI, May 1998.
[15] P. Goatin and M. Mimault, The wave-front tracking algorithm for Hughes' model of pedestrian motion, SIAM J. Sci. Comput., 35 (2013), pp. B606–B622.
[16] D. A. Gomes and R. M. Velho, On the Hughes model and numerical aspects, (2016).
[17] R. L. Hughes, A continuum theory for the flow of pedestrians, Transportation Research Part B: Methodological, 36 (2002), pp. 507–535.
[18] M. J. Lighthill and G. B. Whitham, On kinematic waves. II. A theory of traffic flow on long crowded roads, Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 229 (1955), pp. 317–345.
[19] B. Perthame, Kinetic formulation of conservation laws, Oxford Lecture Series in Mathematics and its Applications, Clarendon Press, Oxford, England, Jan.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'} +page_content=' 2003.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'} +page_content=' [20] P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'} +page_content=' I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'} +page_content=' Richards, Shock waves on the highway, Operations research, 4 (1956), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'} +page_content=' 42–51.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'} +page_content=' [21] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'} +page_content=' Sylla, Influence of a slow moving vehicle on traffic: Well-posedness and approximation for a mildly nonlocal model, Networks and Heterogeneous Media, 16 (2021).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'} +page_content=' [22] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'} +page_content=' Sylla, A lwr model with constraints at moving interfaces, ESAIM: Mathematical Modelling and Numerical Analysis, 56 (2022).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'} +page_content=' [23] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'} +page_content=' Twarogowska, P.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'} +page_content=' Goatin, and R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'} +page_content=' Duvigneau, Numerical study of macroscopic pedestrian flow models, (2013).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'} +page_content=' [24] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'} +page_content=' Vasseur, Strong traces for solutions of multidimensional scalar conservation laws, Arch.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'} +page_content=' Ration.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'} +page_content=' Mech.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'} +page_content=' Anal.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'} +page_content=', 160 (2001), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'} +page_content=' 181–193.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'} +page_content=' [25] E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'} +page_content=' Zeidler, Applied functional analysis, Applied mathematical sciences, Springer, New York, NY, 1995 ed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/uNE5T4oBgHgl3EQfLQ6L/content/2301.05472v1.pdf'} +page_content=', Dec.' 
diff --git a/v9AyT4oBgHgl3EQfaffQ/content/2301.00245v1.pdf b/v9AyT4oBgHgl3EQfaffQ/content/2301.00245v1.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..e2f7e3c3e2e56d115e34eafb829adf04a7e89305
--- /dev/null
+++ b/v9AyT4oBgHgl3EQfaffQ/content/2301.00245v1.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f71e4d6acb6464306a2ac78359c596fe2132abbb63e0e2743dbca3bdaa162a4c
+size 2417344
diff --git a/v9AyT4oBgHgl3EQfaffQ/vector_store/index.faiss b/v9AyT4oBgHgl3EQfaffQ/vector_store/index.faiss
new file mode 100644
index 0000000000000000000000000000000000000000..156423a5cafb9a2f0fb95d17ec239281680d8f6a
--- /dev/null
+++ b/v9AyT4oBgHgl3EQfaffQ/vector_store/index.faiss
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:933b05a8caa011dbe97b300205f6b4a19a64315dbdced6281c4a3e8210ddca16
+size 5505069
diff --git a/v9E2T4oBgHgl3EQf2wjd/content/2301.04165v1.pdf b/v9E2T4oBgHgl3EQf2wjd/content/2301.04165v1.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..814d7fcbdc66dff1f1e43447301144b05f58f252
--- /dev/null
+++ b/v9E2T4oBgHgl3EQf2wjd/content/2301.04165v1.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:63698d852ee48295214fc6bd2b60666c26d240e01e3039af3e231fef4263ee74
+size 1040639
diff --git a/v9E2T4oBgHgl3EQf2wjd/vector_store/index.faiss b/v9E2T4oBgHgl3EQf2wjd/vector_store/index.faiss
new file mode 100644
index 0000000000000000000000000000000000000000..0728b6fe25df1ff90edbcc4ddbdfdd349f13957d
--- /dev/null
+++ b/v9E2T4oBgHgl3EQf2wjd/vector_store/index.faiss
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c5929f06328c44194cc28da6bd9ec2a2599292157ea9eeffd9282a5da9b1e8b3
+size 2555949
diff --git a/v9E2T4oBgHgl3EQf2wjd/vector_store/index.pkl b/v9E2T4oBgHgl3EQf2wjd/vector_store/index.pkl
new file mode 100644
index 0000000000000000000000000000000000000000..53756ffe107a404c46abf0473f8a8db2c9ab080a
--- /dev/null
+++ b/v9E2T4oBgHgl3EQf2wjd/vector_store/index.pkl
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:662c3f3aa2ef0a97a7ea0bb207de255d1132ec308ec539baeef0947b6743da1f
+size 85874
diff --git a/vdAzT4oBgHgl3EQfP_uD/content/tmp_files/2301.01193v1.pdf.txt b/vdAzT4oBgHgl3EQfP_uD/content/tmp_files/2301.01193v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..1185caeaa90dd76e8094563ade46e09b627ac3b8
--- /dev/null
+++ b/vdAzT4oBgHgl3EQfP_uD/content/tmp_files/2301.01193v1.pdf.txt
@@ -0,0 +1,919 @@
Springer Nature 2021 LATEX template

Measuring the diversity of data and metadata in digital libraries

Rafael C. Carrasco, Gustavo Candela and Manuel Marco-Such
Departamento de Lenguajes y Sistemas Informáticos, Universidad de Alicante, Carretera San Vicente del Raspeig s/n, San Vicent del Raspeig, 03690, Alicante, Spain.
Corresponding author: carrasco@ua.es. Contributing authors: gcandela@ua.es, marco@dlsi.ua.es.

Abstract
Diversity indices have traditionally been used to capture the biodiversity of ecosystems by measuring the effective number of species or groups of species. In contrast to abundance, which is correlated with the amount of data available, diversity indices provide a more robust indicator of the variability of individuals. These types of indices can be employed in the context of digital libraries to identify trends in the distribution of topics, compare the lexica employed by different authors, or analyze the coverage of semantic metadata.
Keywords: Metadata, Digital Libraries, Open Data, Collections as Data

1 Introduction

Richness, usually defined as the number of species present in an ecosystem, provides a limited picture of its biodiversity, as it weights all groups equally regardless of their relative abundances. In contrast, diversity indices [5] are numerical estimators that measure both richness and evenness by giving more relevance to abundant species. They therefore provide an effective number of species which is more robust with respect to the sample size, due to the smaller contribution of rare, possibly undetected, cases.

As digital libraries become more readily available, there is an increasing need to explore which bibliometric measures could make their features easier to understand. It has been argued [8] that diversity indices could effectively disentangle the correlation between richness and data volume. The purpose of this paper is therefore to analyze how diversity indices could assist researchers and professionals in evaluating the lexical diversity of the content as well as the metadata coverage in digital collections.

As regards textual content, the type-token ratio (TTR) has traditionally been employed to measure the lexical diversity of documents. The TTR is computed as the number of different words (types) divided by the number of words (tokens) in the text. Previous work compares different approaches, including MTLD [9] and vocd [10], to evaluate TTR and its variability within a sample. Some researchers [7] have also explored whether genres could be characterized by specific TTR probability distributions.

Previous research has suggested applying diversity indices to evaluate the lexical richness of documents [6]. But other features of digital libraries could also benefit from analysis using diversity concepts.
For example, the local and temporal variations in the coverage of topics or authors could be better examined by computing diversity indices, as they are not as sensitive to infrequent items which are not representative of the collection.

arXiv:2301.01193v1 [cs.DL] 3 Jan 2023

Let us recall that, in ecology, the true diversity, or diversity index of order k, for an ecosystem with N groups or species is defined as

    D^{[k]} = \left( \sum_{n=1}^{N} p_n^k \right)^{\frac{1}{1-k}}    (1)

where p_n is the probability or relative abundance of the n-th class, and the parameter k determines the relative weight of frequent versus infrequent groups: the larger k is, the less significant rare species are.

There is therefore a family of indices D[k], with the Shannon index (k = 1) and the Simpson index (k = 2) among the most popular [12]. Although the parameter k influences the value of the diversity obtained, the exact choice is not critical when the objective is to compare diversities at different locations or time intervals. In particular, when addressing digital library data and metadata, k = 1 becomes a natural choice, as D[1] can be easily connected to the entropy of a source [13], defined in information theory as

    H = - \sum_{n=1}^{N} p_n \log p_n

It is thus not difficult to prove that, as k approaches 1, one obtains D[1] = exp(H). We also note that k = 0 leads to the richness R of the sample.

In this paper we will explore the applicability of diversity indices to analyzing data (Section 2) and metadata (Section 3) produced by digital libraries. Our comparison between libraries will be based on linked open data collections [1] published by libraries, as they provide an open benchmark.

2 Lexical diversity

The number M of entries in its vocabulary, also known as the number of token types, provides an indication of the lexical diversity of a document.
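The family of indices defined in Eq. (1), including its k → 1 limit D[1] = exp(H), can be sketched in a few lines of Python. The toy token sample below is ours, purely for illustration, not data from the paper:

```python
import math
from collections import Counter

def true_diversity(counts, k):
    """Diversity index of order k, Eq. (1): (sum_n p_n^k)^(1/(1-k)).

    k = 0 gives the richness R, k = 1 the Shannon diversity exp(H),
    and k = 2 the (inverse) Simpson diversity."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    if k == 0:
        return float(len(probs))                      # richness R
    if k == 1:                                        # limit as k -> 1
        entropy = -sum(p * math.log(p) for p in probs)
        return math.exp(entropy)                      # D[1] = exp(H)
    return sum(p ** k for p in probs) ** (1 / (1 - k))

tokens = "to be or not to be that is the question".split()
counts = list(Counter(tokens).values())
richness = true_diversity(counts, 0)   # 8 distinct tokens
shannon = true_diversity(counts, 1)
simpson = true_diversity(counts, 2)
```

Because the sample distribution is not uniform, the indices decrease with k, i.e. D[0] ≥ D[1] ≥ D[2], with equality only for perfectly even abundances.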
The number of token types depends, however, on the document length, and M shows a monotonic growth with the number n ≤ N of tokens processed, N being the document length (see Figure 1). This unbounded growth is consistent with the well-known fact that tokens in a collection approximately follow a Zipfian distribution [11]. However, this impedes a direct comparison of texts based on the size of the vocabulary used.

The number of token types in the plots can be accurately approximated by a power function Cn^α with only two parameters: the scale C and the exponent α. The parameters that best fit the examples can be found in Table 1, and they have been used to draw the lines in Figure 1, which closely follow the data points.

Table 1 Optimal parameters for the lines Cn^α depicted in Figure 1.

                        C      α
Los pazos de Ulloa     6.7    0.68
Doña Perfecta          6.9    0.66
La Galatea            11.1    0.59

A potential advantage of diversity indices is that they consist of a single finite value with an intuitive interpretation. The diversity of types can be calculated exactly if the underlying probability distribution of the vocabulary is known (and stationary), but, in practice, the probabilities must be estimated from a text sample using the observed frequencies instead. As the accuracy of the estimation increases with the text length, the result will converge to the true value as the number of tokens grows. In the most common situation, however, the sample size is not large enough to approximate the asymptotic value: as shown in Figure 2, the Shannon diversity index is usually still growing when the end of the document is reached.

The diversity plots in Figure 2 call for a saturating function to model the observed shape.
A function which has been traditionally used to estimate biodiversity from samples of variable size [4] is the saturating exponential

    \Delta M_1(n) = D (1 - e^{\alpha n})    (2)

which involves only two parameters, the exponent α and the asymptotic value D of the diversity index.

Fig. 1 Vocabulary size as a function of the number of tokens read for three novels: Los pazos de Ulloa by Emilia Pardo Bazán, Doña Perfecta by Benito Pérez Galdós and La Galatea by Miguel de Cervantes Saavedra.

A second traditional asymptotic model [4] for species accumulation curves is the two-parameter function

    \Delta M_2(n) = D \frac{n}{n + c}    (3)

In our experiments, when models M1 and M2 were extrapolated, they usually underestimated the diversity of larger samples. We therefore investigated additional saturating functions, in particular, a generalized quotient of monomials

    \Delta M_3(n) = D \frac{n + b}{n + c}    (4)

and the powered quotient

    \Delta M_4(n) = D \left( \frac{n}{n + c} \right)^{\alpha}    (5)

We note that in all models, D is the asymptotic value, that is, the true diversity index.

When ten thousand tokens were used to extrapolate the curve for larger values, the results showed that model M4 consistently outperformed the others (see Figure 3). It can be argued that, given the high accuracy of the predictions, the extrapolated diversity computed by model M4 (the value of parameter D) can be used to compare the lexical diversity of texts or that of collections labeled by author, genre or historical period.

Our results show that the value predicted with model M4 does not depend on the size of the sample text. As an illustration, Figure 4 shows the lexical diversity of works by a prolific author (Lope de Vega) as a function of the text length.
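Fitting model M4 is a nonlinear least-squares problem; the paper does not show its fitting code, so the sketch below recovers the three parameters (D, c, α) from synthetic data with a naive grid search. The grids and the "true" parameter values are illustrative assumptions, not values from the paper:

```python
def m4(n, D, c, alpha):
    """Model M4, Eq. (5): the powered quotient D * (n / (n + c)) ** alpha."""
    return D * (n / (n + c)) ** alpha

def fit_m4(ns, ys, d_grid, c_grid, a_grid):
    """Pick the (D, c, alpha) triple minimizing the sum of squared errors."""
    def sse(params):
        d, c, a = params
        return sum((m4(n, d, c, a) - y) ** 2 for n, y in zip(ns, ys))
    candidates = [(d, c, a) for d in d_grid for c in c_grid for a in a_grid]
    return min(candidates, key=sse)

# Synthetic "diversity curve" generated from M4 itself: D = 1000 plays the
# role of the asymptotic (true) diversity that extrapolation should recover.
ns = list(range(500, 10001, 500))
ys = [m4(n, 1000, 4000, 0.8) for n in ns]
best = fit_m4(ns, ys, [800, 900, 1000, 1100], [2000, 4000, 8000], [0.6, 0.8, 1.0])
```

With noisy real counts one would substitute a proper nonlinear optimizer for the grid search; the point is only that the asymptote D, rather than the raw vocabulary size, is the quantity being compared across texts.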
The variability we found could be associated with the style of the work (for example, works with rhyming tend to exhibit higher diversity), but the diversity has no significant correlation with the length of the work (Pearson's R ≃ −0.08).

3 Metadata diversity

3.1 Catalographic records

Diversity indices can also be employed to analyze the catalographic metadata created by digital libraries. For example, Figure 5 shows the richness and diversity of book authors in the catalogs of three libraries which have published comprehensive collections of catalographic data using open licenses: a large library (Library of Congress, LoC¹), a medium-sized library (Universiteitsbibliotheek Gent, UGent²), and a small library (Biblioteca Virtual Miguel de Cervantes, BVC³).

Fig. 2 Shannon diversity index for the works presented in Figure 1.

The richness and diversity lines show a monotonic growth over time with no indication that a plateau could be reached soon. The smaller ratio between diversity and richness for the BVC library (about 33%) in comparison to the ratio for the LoC and UGent collections (52-54%) is a reflection of its narrower scope (the BVC focuses on Hispanic literature and history), which shows a reduced fraction of the authors providing a vast contribution to the catalog. Indeed, the average number of items per author in the BVC collection is µ = 4.9, while this average is lower for the LoC (µ = 2.5) and the UGent library (µ = 2.1).

We also investigated whether the coverage of topics in a digital library remains stable, serving a specialized audience, or whether it tends to cover a wider spectrum. Figure 6 shows the trends when the complete descriptor of the subject heading field is analyzed and when its content is split into topical, chronological, geographical, or other subdivisions (so that, for example, the descriptor Commerce-History becomes two subjects, Commerce and History).

¹ Library of Congress full book records: www.loc.gov/item/2020445551
² University of Gent book records: lib.ugent.be/info/exports
³ Miguel de Cervantes book records: data.cervantesvirtual.com/datasets

In the samples analyzed, the variety of subjects typically shows a constant growth with time, both in terms of richness and diversity. However, this is not the case for the BVC library when the subjects are decomposed into subdivisions. This is due, on the one hand, to a more intensive usage of chronological subdivisions. On the other hand, an inspection of the records reveals that the library has, after an initial period, progressively increased the fraction of content within the fields of history and literature (and, remarkably, theater) in Spanish, which now account for nearly one third of its content. The BVC has thus recently developed into a more specialized library.

3.2 Linked open data

Fig. 3 Predictive power of the models when the initial 10000 tokens are used to identify the optimal parameters.
Fig. 4 Shannon diversity index of books by Lope de Vega.

Over the last decade, cultural heritage institutions have moved towards adopting the semantic web [2] and linked open data concepts by using the W3C Resource Description Framework to express semantic relationships [16] and the SPARQL [15] language to query them. RDF describes resources (the content of a library) by categorizing them in classes (such as person, work or name) and uses properties (such as author) to express relationships between resources. Both resources and properties are identified by URIs (Uniform Resource Identifiers): for example, a triple (X, P, Y) can link the identifier of a person X to the identifier of a name Y connected by the property P, where the meaning of URI P is has name. Analogously, a triple of the form (X, rdf:type, Z) declares X to belong to class Z.

Libraries have progressively adapted their catalogs [14] to facilitate the publication of Linked Open Data (LOD) repositories. As shown in Table 2, however, they have used a variety of vocabularies for the definition of RDF classes and properties. The repositories have also been made available in various forms, which include public SPARQL endpoints, OAI-PMH interfaces and even open-access dump files.⁴

In order to test the application of diversity indices to LOD, data were retrieved from the repositories shown in Table 2 which distribute them with open licenses and via a public SPARQL endpoint. We note that these endpoints may not always reflect the current situation of the libraries.⁵ The harvesting was performed with simple scripts,⁶ such as those presented in Appendix A.

The diversity D and richness R of the resources were computed, as well as the diversity-to-richness ratio, which provides an indication of how effectively the available tags are used. As shown in Table 3, some libraries, such as the Austrian National Library (AT), the National Library of Finland (FI) and the Koninklijke Bibliotheek (KB), employ vocabularies with a small number of classes and properties. In contrast, the National Library of France (BNF) and the National Library of Spain (BNE) describe their resources in terms of the richer FRBR and RDA vocabularies.

⁴ http://www.openarchives.org/pmh
⁵ For example, as of March 2022, the Europeana SPARQL endpoint has not been updated since July 2017.
⁶ Some repositories implement a timeout limit for the downloads. In such cases, partitioned queries were needed to retrieve all the information.

Fig. 5 Cumulative number of authors and Shannon diversity of the authors in the catalog as a function of the year the MARC record entered the catalog.

Table 2 Linked Open Data repositories published by libraries.

Institution                          Vocabularies          URL
Austrian National Library            edm, bibframe, rda    labs.onb.ac.at/en/dataset/lod
Biblioteca Nacional de España        frbr                  datos.bne.es
Biblioteca Virtual M. de Cervantes   rda                   data.cervantesvirtual.com
Bibliothèque nat. de France          frbr                  data.bnf.fr
Bibliothèque nat. du Luxembourg      xml                   data.bnl.lu
British National Bibliography        bibo                  bnb.data.bl.uk
Europeana                            edm                   pro.europeana.eu/page/sparql
Deutsche Nationalbibliothek          bibframe              www.dnb.de/EN/lds
Library of Congress                  bibframe              id.loc.gov
National Library of Finland          Schema.org, bibframe  data.nationallibrary.fi
Koninklijke Bibliotheek              Schema.org, lrm       data.bibliotheken.nl
This value can be obtained by extrapo- +lating the observed values with a simple model + +Diversity of subject headings (UGent) +richness / 2 +250000 +diversity +200000 +150000 +100000 +50000 +0 +2000 +2005 +2010 +2015 +2020Diversity of sh subfields (UGent) +17500 +richness / 10 +diversity +15000 +12500 +10000 +7500 +5000 +2500 +0 +2000 +2005 +2010 +2015 +2020Diversity of subject headings (BvC) +richness / 2 +6000 +diversity +5000 +4000 +3000 +2000 +1000 +2000 +2005 +2010 +2015 +2020Diversity of sh subfields (BVC) +richness / 10 +diversity +700 +600 +500 +400 +300 +200 +100 +2000 +2005 +2010 +2015 +2020Diversity of subject headings (LoC) +1e6 +richness / 2 +2.00 +diversity +1.75 +1.50 +1.25 +1.00 +0.75 +0.50 +0.25 +0.00 +1970 1975 1980 1985 1990 1995 2000 2005 2010 2015Diversity of sh subfields (LoC) +10000 +richness / 50 +diversity +8000 +6000 +4000 +2000 +0Springer Nature 2021 LATEX template +Measuring the diversity of metadata +9 +Resource type +class +property +host +D +R +D/R +D +R +D/R +AT +2.1 +5 +0.42 +10.7 +22 +0.48 +BNB +13.2 +33 +0.40 +26.6 +88 +0.30 +BNE +3.8 +16 +0.24 +50.9 +189 +0.27 +BNF +6.9 +26 +0.27 +55.5 +791 +0.07 +BVC +6.6 +27 +0.24 +32.0 +165 +0.19 +EU +5.1 +11 +0.46 +37.1 +115 +0.32 +FI +7.0 +12 +0.59 +17.3 +35 +0.49 +KB +3.9 +12 +0.32 +14.6 +23 +0.64 +Table 3 Diversity D, richness R and diversity-richness rate D/R of the resources contained in linked open data. +Fig. 7 Shannon diversity of classes and properties in linked open data published by libraries. +involving only three free parameters. The extrap- +olation proves stable with respect to the size of +the sample. +As regards metadata, diversity indices can +be used to visualize the trends, for example, in +creator or subject coverage. The rate between +diversity and richness also proves useful to com- +pare the effective usage of the available descriptors +(classes and properties) to describe resources in +the semantic data (linked open data collections) +published by digital libraries. 
+The Python scripts employed for the analy- +sis included in this paper have been published as +open-access software in [3]. +Acknowledgments. +We thank Frank Vande- +pitte and Patrick Hochstenbach from the Ghent +University Library for their kind assistance in +understanding the library catalographic records. +Appendix A +SPARQL +queries + +Diversity of linked open data collections +60 +BNF +BNE +50 +diversity of properties +40 +EU +BVC +30 +BNB +20 +FT +KB +AT +10 +2 +4 +6 +8 +10 +12 +14 +diversity of classesSpringer Nature 2021 LATEX template +10 +Measuring the diversity of metadata +Listing 1 Query used to retrieve all classes and the +number of resources per class in a LOD repository. +SELECT ? c l a s s +(COUNT(? s ) AS ?count) +WHERE { +? s a ? c l a s s +} +GROUP BY ? c l a s s +Listing 2 Query retrieving external repositories linked +from a specific LOD repository and the number of links to +each one. +SELECT ?hostname (COUNT(? s ) AS ?count) +WHERE{ +? s owl : sameAs ?same +. +bind ( +s t r b e f o r e ( s t r a f t e r ( +s t r (? same ) , ”//” ) , ”/” ) +AS ?hostname ) +} +GROUP BY ?hostname +References +[1] Berners-Lee +T +(2006) +Linked +data. +URL +https://www.w3.org/DesignIssues/ +LinkedData.html +[2] Berners-Lee T, Hendler J, Lassila O (2001) +The Semantic Web. Scientific American 284 +[3] Carrasco RC, Candela G, Such MM (2022) +rccarrasco/dl diversity: Initial release. https: +//doi.org/10.5281/zenodo.6389967, +URL +https://doi.org/10.5281/zenodo.6389967 +[4] Colwell RK, Coddington JA (1994) Estimat- +ing terrestrial biodiversity through extrapola- +tion. Philosophical Transactions of the Royal +Society of London Series B: Biological Sci- +ences 345(1311):101–118. https://doi.org/10. +1098/rstb.1994.0091, URL https://doi.org/ +10.1098/rstb.1994.0091 +[5] Hill MO (1973) Diversity and evenness: A +unifying notation and its consequences. Ecol- +ogy 54(2):427–432. 
https://doi.org/10.2307/ +1934352 +[6] Jarvis +S +(2013) +Capturing +the +diversity +in +lexical +diversity. +Language +Learning +63(s1):87–106. +https://doi.org/10.1111/j. +1467-9922.2012.00739.x +[7] Kub´at M, Miliˇcka J (2013) Vocabulary rich- +ness measure in genres. Journal of Quanti- +tative Linguistics 20(4):339–349. https://doi. +org/10.1080/09296174.2013.830552 +[8] Kyle K, Crossley SA, Jarvis S (2021) Assess- +ing the validity of lexical diversity indices +using direct judgements. Language Assess- +ment Quarterly 18(2):154–170. https://doi. +org/10.1080/15434303.2020.1844205 +[9] McCarthy PM, Jarvis S (2010) MTLD, vocd- +d, and HD-d: A validation study of sophisti- +cated approaches to lexical diversity assess- +ment. Behavior Research Methods 42(2):381– +392. https://doi.org/10.3758/brm.42.2.381 +[10] McKee G, Malvern D, Richards B (2000) +Measuring vocabulary diversity using ded- +icated +software. +Literary +and +Linguistic +Computing 15(3):323–338. https://doi.org/ +10.1093/llc/15.3.323 +[11] Piantadosi ST (2014) Zipf’s word frequency +law in natural language: a critical review and +future directions. Psychonomic bulletin & +review 21:1112–30. https://doi.org/10.3758/ +s13423-014-0585-6 +[12] Roswell +M, +Dushoff +J, +Winfree +R +(2021) A conceptual guide to measuring +species +diversity. +Oikos +130(3):321–338. +https://doi.org/10.1111/oik.07202, +URL +https://onlinelibrary.wiley.com/doi/abs/10. +1111/oik.07202 +[13] Shannon CE (1948) A mathematical theory +of communication. The Bell System Techni- +cal Journal 27(3):379–423. https://doi.org/ +10.1002/j.1538-7305.1948.tb01338.x +[14] Smith-Yoshimura K (2020) Transitioning to +the next generation of metadata. https://doi. +org/https://doi.org/10.25333/rqgd-b343 +[15] World +Wide +Web +Consortium +(2013) +SPARQL +query +language +for +RDF. +URL +https://www.w3.org/TR/ +sparql11-overview/ +[16] World Wide Web Consortium (2014) RDF +1.1 concepts and abstract syntax. 
URL https://www.w3.org/TR/rdf11-concepts/

Springer Nature 2021 LATEX template

Measuring the diversity of data and metadata in digital libraries

Rafael C. Carrasco1*, Gustavo Candela1 and Manuel Marco-Such1

1*Departamento de Lenguajes y Sistemas Informáticos, Universidad de Alicante, Carretera San Vicente del Raspeig s/n, San Vicent del Raspeig, 03690, Alicante, Spain.

Corresponding author(s). E-mail(s): carrasco@ua.es; Contributing authors: gcandela@ua.es; marco@dlsi.ua.es;

Abstract
Diversity indices have been traditionally used to capture the biodiversity of ecosystems by measuring the effective number of species or groups of species. In contrast to abundance, which is correlated with the amount of data available, diversity indices provide a more robust indicator of the variability of individuals. These types of indices can be employed in the context of digital libraries to identify trends in the distribution of topics, compare the lexica employed by different authors or analyze the coverage of semantic metadata.

Keywords: Metadata, Digital Libraries, Open Data, Collections as Data

1 Introduction

Richness, usually defined as the number of species present in an ecosystem, provides a limited picture of its biodiversity, as it weights all groups equally, regardless of their relative abundances. In contrast, diversity indices [5] are numerical estimators that measure both richness and evenness by giving more relevance to abundant species.
They therefore provide an effective number of species which is more robust than the sample size, due to the smaller contribution of rare, possibly undetected, cases. As digital libraries become more readily available, there is an increasing need to explore which bibliometric measures could make their features easier to understand. It has been argued [8] that diversity indices could effectively disentangle the correlation between richness and data volume. The purpose of this paper is therefore to analyze how diversity indices could assist researchers and professionals in evaluating the lexical diversity of the content as well as the metadata coverage in digital collections.

As regards textual content, the type-token ratio (TTR) has been traditionally employed to measure the lexical diversity of documents. The TTR is computed as the number of different words (types) divided by the number of words (tokens) in the text.
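The TTR definition above amounts to a one-line computation; this is a minimal sketch assuming plain whitespace tokenization and lowercasing, with an invented sample sentence:

```python
def type_token_ratio(text):
    """TTR = number of distinct words (types) / number of words (tokens)."""
    tokens = text.lower().split()
    return len(set(tokens)) / len(tokens)

# Hypothetical sample: 13 tokens, 9 of them distinct types
sample = "the cat sat on the mat and the dog sat by the door"
ttr = type_token_ratio(sample)
```

Real analyses would normalize punctuation and possibly lemmatize before counting types.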
For example, previous works compare different approaches, including MTLD [9] and vocd [10], to evaluate TTR and its variability within a sample. Some researchers [7] have also explored whether genres could be characterized by specific TTR probability distributions. Previous research has suggested applying diversity indices to evaluate the lexical richness of documents [6]. But other features of digital libraries could also benefit from analysis using diversity concepts. For example, the local and temporal variations in the coverage of topics or authors could be better examined by computing diversity indices, as they are not as sensitive to infrequent items which are not representative of the collection.

arXiv:2301.01193v1 [cs.DL] 3 Jan 2023
Let us recall that, in ecology, the true diversity, or diversity index of order k, for an ecosystem with N groups or species is defined as

    D[k] = (Σ_{n=1}^{N} p_n^k)^{1/(1−k)}    (1)

where p_n is the probability or relative abundance of the n-th class, and the parameter k determines the relative weight of frequent versus infrequent groups: the larger k is, the less significant rare species are. There is therefore a family of indices D[k], the Shannon index (k = 1) and the Simpson index (k = 2) being among the most popular [12]. Although the parameter k influences the value of the diversity obtained, the exact choice is not critical when the objective is to compare diversities at different locations or time intervals. In particular, when addressing digital library data and metadata, k = 1 becomes a natural choice, as D[1] can be easily connected to the entropy of a source [13], defined in information theory as

    H = −Σ_{n=1}^{N} p_n log p_n

It is thus not difficult to prove that, as k approaches 1, one obtains D[1] = exp(H). We also note that k = 0 leads to the richness R of the sample.
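Equation (1), the k = 1 limit, and the k = 0 richness can all be checked numerically; the following sketch uses an arbitrary illustrative abundance distribution:

```python
import math

def true_diversity(p, k):
    """Diversity index of order k (Eq. 1); k = 1 handled as the limit exp(H)."""
    if k == 1:
        return math.exp(-sum(pi * math.log(pi) for pi in p))
    return sum(pi ** k for pi in p) ** (1 / (1 - k))

p = [0.5, 0.3, 0.15, 0.05]         # relative abundances (sum to 1)
richness = true_diversity(p, 0)    # k = 0 gives the richness R = 4
shannon = true_diversity(p, 1)     # Shannon diversity exp(H)
simpson = true_diversity(p, 2)     # Simpson diversity

# The family is continuous at k = 1, and decreases as k grows
# because larger k downweights the rare classes.
assert abs(true_diversity(p, 1.000001) - shannon) < 1e-3
assert richness > shannon > simpson
```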
In this paper we will explore the applicability of diversity indices to analyzing data (Section 2) and metadata (Section 3) produced by digital libraries. Our comparison between libraries will be based on linked open data collections [1] published by libraries, as they provide an open benchmark.

2 Lexical diversity

The number M of entries in its vocabulary, also known as the number of token types, provides an indication of the lexical diversity of a document. The number of token types depends, however, on the document length, and M shows a monotonic growth with the number n ≤ N of tokens processed, N being the document length (see Figure 1). This unbounded growth is consistent with the well-known fact that tokens in a collection approximately follow a Zipfian distribution [11]. However, this impedes a direct comparison of texts based on the size of the vocabulary used. The number of token types in the plots can be accurately approximated by a power function Cn^α with only two parameters: the scale C and the exponent α.
The parameters that best fit the examples can be found in Table 1, and they have been used to draw the lines in Figure 1, which closely follow the data points.

                      C     α
Los pazos de Ulloa   6.7   0.68
Doña Perfecta        6.9   0.66
La Galatea          11.1   0.59

Table 1  Optimal parameters for the lines Cn^α depicted in Figure 1.

A potential advantage of diversity indices is that they consist of a single finite value with an intuitive interpretation.
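Parameters like those in Table 1 can be estimated by ordinary least squares in log-log space, since log M = log C + α log n. The sketch below recovers illustrative parameters (not those of Table 1) from noise-free synthetic data:

```python
import numpy as np

# Synthetic vocabulary-growth curve M(n) = C * n**alpha
# (illustrative parameter values, not those fitted in the paper)
C_true, alpha_true = 7.0, 0.65
n = np.arange(1000, 120001, 1000, dtype=float)
M = C_true * n ** alpha_true

# A power law is a straight line in log-log space:
# log M = log C + alpha * log n
alpha_fit, logC_fit = np.polyfit(np.log(n), np.log(M), 1)
C_fit = float(np.exp(logC_fit))
```

On real token counts the fit would be noisy, but the same two-parameter regression applies.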
The diversity of types can be calculated exactly if the underlying probability distribution of the vocabulary is known (and stationary), but, in practice, the probabilities must be estimated from a text sample using the observed frequencies instead. As the accuracy of the estimation increases with the text length, the result will converge to the true value as the number of tokens grows. In the most common situation, however, the sample size is not large enough to approximate the asymptotic value: as shown in Figure 2, the Shannon diversity index is usually still growing when the end of the document is reached.

The diversity plots in Figure 2 call for a saturating function to model the observed shape. A function which has been traditionally used to estimate biodiversity from samples of variable size [4] is the saturating exponential

    ∆M1(n) = D (1 − e^{αn}),    (2)

which involves only two parameters, the exponent α and the asymptotic value D of the diversity index.

Fig. 1  Vocabulary size as a function of the number of tokens read for three novels: Los pazos de Ulloa by Emilia Pardo Bazán, Doña Perfecta by Benito Pérez Galdós and La Galatea by Miguel de Cervantes Saavedra.

A second traditional asymptotic model [4] for species accumulation curves is the two-parameter function

    ∆M2(n) = D n / (n + c).    (3)

In our experiments, when models M1 and M2 were extrapolated, they usually underestimated the diversity of larger samples. We therefore investigated additional saturating functions, in particular, a generalized quotient of monomials

    ∆M3(n) = D (n + b) / (n + c),    (4)

and the powered quotient

    ∆M4(n) = D (n / (n + c))^α.    (5)

We note that in all models, D is the asymptotic value, that is, the true diversity index. When ten thousand tokens were used to extrapolate the curve for larger values, the results showed that model M4 consistently outperformed the others (see Figure 3).
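A sketch of the M4 extrapolation on synthetic data (all parameter values are illustrative, and the fitting strategy is our own substitute for a general nonlinear optimizer): for a fixed c the model is linear in log space, so a one-dimensional scan over c combined with a linear fit is enough.

```python
import numpy as np

def m4(n, D, c, alpha):
    """Powered quotient (Eq. 5): saturates at the true diversity D."""
    return D * (n / (n + c)) ** alpha

# Synthetic accumulation curve with a known asymptote (illustrative values)
n = np.linspace(100.0, 10000.0, 100)
y = m4(n, 1000.0, 5000.0, 0.7)

# log y = log D + alpha * log(n / (n + c)): linear once c is fixed,
# so scan c and keep the least-squares best combination.
best = None
for c in np.arange(500.0, 10001.0, 100.0):
    x = np.log(n / (n + c))
    alpha, logD = np.polyfit(x, np.log(y), 1)
    resid = float(np.sum((alpha * x + logD - np.log(y)) ** 2))
    if best is None or resid < best[0]:
        best = (resid, float(np.exp(logD)), float(c), float(alpha))
_, D_fit, c_fit, alpha_fit = best
```

With noise-free data the scan recovers the asymptote D exactly when the grid hits the true c; on real token counts the residual simply selects the best-fitting c.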
It can be argued that, given the high accuracy of the predictions, the extrapolated diversity computed by model M4 (the value of parameter D) can be used to compare the lexical diversity of texts or that of collections labeled by author, genre or historical period. Our results show that the value predicted with model M4 does not depend on the size of the sample text. As an illustration, Figure 4 shows the lexical diversity of works by a prolific author (Lope de Vega) as a function of the text length. The variability we found could be associated with the style of the work (for example, works with rhyming tend to exhibit higher diversity), but the diversity has no significant correlation with the length of the work (Pearson's R ≃ −0.08).

3 Metadata diversity

3.1 Catalographic records

Diversity indices can also be employed to analyze the catalographic metadata created by digital libraries.
For example, Figure 5 shows the richness and diversity of book authors in the catalogs of three libraries which have published comprehensive collections of catalographic data using open licenses: a large library (Library of Congress, LoC¹), a medium-sized library (Universiteitsbibliotheek Gent, UGent²), and a small library (Biblioteca Virtual Miguel de Cervantes, BVC³). The richness and diversity lines show a monotonic growth over time with no indication that a plateau could be reached soon.

[Fig. 1 plot: vocabulary size in thousands of types versus thousands of tokens for Doña Perfecta, La Galatea and Los pazos de Ulloa.]

Fig. 2  Shannon diversity index for the works presented in Figure 1.

The smaller ratio between diversity and richness for the BVC library (about 33%) in comparison to the ratio for the LoC and UGent collections (52–54%) is a reflection of its narrower scope (the BVC focuses on Hispanic literature and history), which shows a reduced fraction of the authors providing a vast contribution to the catalog.
Indeed, the average number of items per author in the BVC collection is µ = 4.9, while this average is lower for the LoC (µ = 2.5) and UGent library (µ = 2.1). We also investigated whether the coverage of topics in a digital library remains stable, serving a specialized audience, or whether it tends to cover a wider spectrum.

¹ Library of Congress full book records: www.loc.gov/item/2020445551
² University of Gent book records: lib.ugent.be/info/exports
³ Miguel de Cervantes book records: data.cervantesvirtual.com/datasets

Figure 6 shows the trends when the complete descriptor of the subject heading field is analyzed and when its content is split into topical, chronological, geographical, or other subdivisions (so that, for example, the descriptor Commerce–History becomes two subjects, Commerce and History). In the samples analyzed, the variety of subjects typically shows a constant growth with time, both in terms of richness and diversity. However, this is not the case for the BVC library when the subjects are decomposed into subdivisions. This is due, on the one hand, to a more intensive usage of chronological subdivisions. On the other hand, an inspection of the records reveals that the library has, after an initial period, progressively increased the fraction of content within the fields of history and literature (and, remarkably, theater) in Spanish, which now accounts for nearly one third of its content. The BVC has thus recently developed into a more specialized library.
3.2 Linked open data

Over the last decade, cultural heritage institutions have moved towards adopting the semantic web [2] and linked open data concepts by using the W3C Resource Description Framework to express semantic relationships [16] and the SPARQL [15] language to query them.

[Fig. 2 plot: Shannon diversity index versus thousands of words for Doña Perfecta, La Galatea and Los pazos de Ulloa.]

Fig. 3  Predictive power of the models when the initial 10000 tokens are used to identify the optimal parameters.

[Fig. 3 plots: Shannon diversity index versus thousands of words for the three novels, with extrapolations of models M1, M2, M3 and M4.]

Fig. 4  Shannon diversity index of books by Lope de Vega.
RDF describes resources (the content of a library) by categorizing them in classes (such as person, work or name) and uses properties (such as author) to express relationships between resources. Both resources and properties are identified by URIs (Uniform Resource Identifiers): for example, a triple (X, P, Y) can link the identifier of a person X to the identifier of a name Y connected by the property P, where the meaning of URI P is has name. Analogously, a triple of the form (X, rdf:type, Z) declares X to belong to class Z.

Libraries have progressively adapted their catalogs [14] to facilitate the publication of Linked Open Data (LOD) repositories. As shown in Table 2, however, they have used a variety of vocabularies for the definition of RDF classes and properties. The repositories have also been made available in various forms, which include public SPARQL endpoints, OAI-PMH interfaces and even open-access dump files.⁴
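The class declarations just described can be illustrated with a minimal in-memory triple set. This hypothetical sketch (all identifiers are invented) reproduces in plain Python the per-class counting that the SPARQL query of Listing 1 in Appendix A performs on a real repository:

```python
from collections import Counter

RDF_TYPE = "rdf:type"

# Hypothetical (subject, predicate, object) triples from a tiny catalog
triples = [
    ("ex:cervantes", RDF_TYPE, "ex:Person"),
    ("ex:cervantes", "ex:hasName", "ex:name1"),
    ("ex:name1", RDF_TYPE, "ex:Name"),
    ("ex:quijote", RDF_TYPE, "ex:Work"),
    ("ex:galatea", RDF_TYPE, "ex:Work"),
    ("ex:quijote", "ex:author", "ex:cervantes"),
]

# Equivalent of Listing 1: count resources per class (GROUP BY ?class)
class_counts = Counter(o for s, p, o in triples if p == RDF_TYPE)
```

The resulting counts are exactly the abundances from which the diversity D and richness R of a repository are computed.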
In order to test the application of diversity indices to LOD, the data shown in Table 2 were retrieved from these repositories, which distribute them with open licenses and via a public SPARQL endpoint. We note that these endpoints may not always reflect the current situation of the libraries.⁵ The harvesting was performed with simple scripts,⁶ such as those presented in Appendix A. The diversity D and richness R of the resources was computed, as well as the diversity to richness ratio, which provides an indication of how effective the usage of the available tags is. As shown in Table 3, some libraries, such as the Austrian National Library (AT), the National Library of Finland (FI) and the Koninklijke Bibliotheek (KB), employ vocabularies with a small number of

⁴ http://www.openarchives.org/pmh
⁵ For example, as of March 2022, the Europeana SPARQL endpoint has not been updated since July 2017.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='bne.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='es Biblioteca Virtual M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content=' de Cervantes rda data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='cervantesvirtual.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='com Biblioth`eque nat.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content=' de France frbr data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='bnf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='fr Biblioth`eque nat.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content=' du Luxembourg xml data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='bnl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='lu British National Bibliography bibo bnb.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='bl.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='uk Europeana edm pro.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='europeana.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='eu/page/sparql Deutsche Nationalbibliothek bibframe www.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='dnb.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='de/EN/lds Library of Congress bibframe id.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='loc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='gov National Library of Finland Schema.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='org bibframe data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='nationallibrary.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='fi Koninklijke Bibliotheek Schema.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='org lrm data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='bibliotheken.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='nl classes and properties.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content=' In contrast, the National Library of France (BNF) and the National Library of Spain (BNE) describe their resources in terms of the richer FRBR and RDA vocabularies.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content=' The BNF also employs a proprietary vocabulary to describe the roles of creators which contains over 500 cat- egories.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content=' Since they are not uniformly used, this leads to a lower D/R ratio.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content=' The British National Bibliography (BNB) is an intermediate case, as it essentially employs the BIBO vocabulary which contains 33 classes and 88 properties.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content=' Although there is a moderate positive corre- lation between the diversity of classes and the Authors in the catalogue (LoC) 1e6 richness 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='5 diversity 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='0 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='5 2.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='0 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='5 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='5 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='0 1970 1975 1980 1985 1990 1995 2000 2005 2010 2015Authors in the catalogue (UGent) richness 500000 diversity 400000 300000 200000 100000 0 2000 2005 2010 2015 2020Authors in the catalogue (BvC) richness diversity 17500 15000 12500 10000 7500 5000 2500 2000 2005 2010 2015 2020Springer Nature 2021 LATEX template 8 Measuring the diversity of metadata Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content=' 6 Cumulative richness and Shannon diversity index of the subjects in the catalog.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content=' Left: complete subject headings.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content=' Right: subject heading subdivisions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content=' Note the specific scales used for richness.' 
diversity of properties employed in each collection (see Figure 7), some libraries show a finer granularity of classes while others employ a higher variety of properties.

4 Conclusions

Diversity indices provide a complementary view of the variety of the groups in a collection of data. In contrast to richness, diversity is more robust with respect to the sample size, as it gives less weight to classes with a smaller number of occurrences. When lexical content is analyzed, the diversity of words approaches an asymptotic value which depends on the author and genre of the works.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content=' This value can be obtained by extrapo- ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='lating the observed values with a simple model ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='Diversity of subject headings (UGent) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='richness / 2 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='250000 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='diversity ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='200000 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='150000 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='100000 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='50000 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='0 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='2000 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='2005 ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='2010 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='2015 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='2020Diversity of sh subfields (UGent) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='17500 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='richness / 10 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='diversity ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='15000 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='12500 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='10000 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='7500 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='5000 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='2500 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='0 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='2000 ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='2005 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='2010 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='2015 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='2020Diversity of subject headings (BvC) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='richness / 2 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='6000 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='diversity ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='5000 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='4000 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='3000 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='2000 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='1000 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='2000 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='2005 ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='2010 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='2015 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='2020Diversity of sh subfields (BVC) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='richness / 10 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='diversity ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='700 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='600 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='500 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='400 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='300 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='200 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='100 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='2000 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='2005 ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='2010 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='2015 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='2020Diversity of subject headings (LoC) ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='1e6 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='richness / 2 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='00 diversity 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='75 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='50 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='25 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='00 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='75 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='50 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='25 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='00 1970 1975 1980 1985 1990 1995 2000 2005 2010 2015Diversity of sh subfields (LoC) 10000 richness / 50 diversity 8000 6000 4000 2000 0Springer Nature 2021 LATEX template Measuring the diversity of metadata 9 Resource type class property host D R D/R D R D/R AT 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='1 5 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='42 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='7 22 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='48 BNB 13.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='2 33 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='40 26.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='6 88 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='30 BNE 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='8 16 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='24 50.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='9 189 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='27 BNF 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='9 26 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='27 55.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='5 791 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='07 BVC 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='6 27 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='24 32.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='0 165 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='19 EU 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='1 11 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='46 37.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='1 115 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='32 FI 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='0 12 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='59 17.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='3 35 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='49 KB 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='9 12 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='32 14.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='6 23 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='64 Table 3 Diversity D, richness R and diversity-richness rate D/R of the resources contained in linked open data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content=' Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content=' 7 Shannon diversity of classes and properties in linked open data published by libraries.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content=' involving only three free parameters.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content=' The extrap- olation proves stable with respect to the size of the sample.' 
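The extrapolation step can be illustrated with a small pure-Python fit. The authors' specific three-parameter model is not reproduced in this excerpt; as a stand-in, assume a saturating curve d(n) = a·(1 − e^(−n/τ)) + c, whose three free parameters are a, τ and c and whose asymptote a + c estimates the limiting diversity. For each candidate τ the model is linear in (a, c), so those two follow from ordinary least squares.

```python
import math

def fit_saturating(ns, ds, taus):
    """Fit d ~ a*(1 - exp(-n/tau)) + c by scanning candidate tau values;
    for each tau, a and c are obtained by ordinary least squares.
    Returns the (a, tau, c) with minimal squared error."""
    best = None
    for tau in taus:
        xs = [1.0 - math.exp(-n / tau) for n in ns]
        m = len(xs)
        sx, sy = sum(xs), sum(ds)
        sxx = sum(x * x for x in xs)
        sxy = sum(x * y for x, y in zip(xs, ds))
        denom = m * sxx - sx * sx
        if abs(denom) < 1e-12:
            continue  # degenerate design, skip this tau
        a = (m * sxy - sx * sy) / denom
        c = (sy - a * sx) / m
        err = sum((a * x + c - y) ** 2 for x, y in zip(xs, ds))
        if best is None or err < best[0]:
            best = (err, a, tau, c)
    _, a, tau, c = best
    return a, tau, c

# Synthetic check: noiseless data generated from a known curve is recovered,
# and the asymptotic diversity is a + c.
ns = [1000 * (i + 1) for i in range(20)]
ds = [800 * (1 - math.exp(-n / 5000)) + 50 for n in ns]
a, tau, c = fit_saturating(ns, ds, taus=range(1000, 10001, 100))
print(f"asymptotic diversity = {a + c:.1f}")  # ~ 850
```

The grid-plus-least-squares scheme is a deliberately simple stand-in for a full nonlinear fit; its stability under truncated samples mirrors the behaviour reported above.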
As regards metadata, diversity indices can be used to visualize the trends, for example, in creator or subject coverage. The ratio between diversity and richness also proves useful to compare the effective usage of the available descriptors (classes and properties) to describe resources in the semantic data (linked open data collections) published by digital libraries. The Python scripts employed for the analysis included in this paper have been published as open-access software in [3].

Acknowledgments. We thank Frank Vandepitte and Patrick Hochstenbach from the Ghent University Library for their kind assistance in understanding the library catalographic records.

Appendix A  SPARQL queries

Listing 1  Query used to retrieve all classes and the number of resources per class in a LOD repository.

SELECT ?class (COUNT(?s) AS ?count)
WHERE { ?s a ?class }
GROUP BY ?class

Listing 2  Query retrieving external repositories linked from a specific LOD repository and the number of links to each one.

SELECT ?hostname (COUNT(?s) AS ?count)
WHERE {
  ?s owl:sameAs ?same .
  BIND(STRBEFORE(STRAFTER(STR(?same), "//"), "/") AS ?hostname)
}
GROUP BY ?hostname

References

[1] Berners-Lee T (2006) Linked data. URL https://www.w3.org/DesignIssues/LinkedData.html
[2] Berners-Lee T, Hendler J, Lassila O (2001) The Semantic Web. Scientific American 284
[3] Carrasco RC, Candela G, Such MM (2022) rccarrasco/dl diversity: Initial release. https://doi.org/10.5281/zenodo.6389967
[4] Colwell RK, Coddington JA (1994) Estimating terrestrial biodiversity through extrapolation. Philosophical Transactions of the Royal Society of London Series B: Biological Sciences 345(1311):101–118. https://doi.org/10.1098/rstb.1994.0091
[5] Hill MO (1973) Diversity and evenness: A unifying notation and its consequences. Ecology 54(2):427–432. https://doi.org/10.2307/1934352
[6] Jarvis S (2013) Capturing the diversity in lexical diversity. Language Learning 63(s1):87–106. https://doi.org/10.1111/j.1467-9922.2012.00739.x
[7] Kubát M, Milička J (2013) Vocabulary richness measure in genres. Journal of Quantitative Linguistics 20(4):339–349. https://doi.org/10.1080/09296174.2013.830552
[8] Kyle K, Crossley SA, Jarvis S (2021) Assessing the validity of lexical diversity indices using direct judgements. Language Assessment Quarterly 18(2):154–170. https://doi.org/10.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='1080/15434303.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='1844205 [9] McCarthy PM, Jarvis S (2010) MTLD, vocd- d, and HD-d: A validation study of sophisti- cated approaches to lexical diversity assess- ment.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content=' Behavior Research Methods 42(2):381– 392.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content=' https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='3758/brm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='42.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='381 [10] McKee G, Malvern D, Richards B (2000) Measuring vocabulary diversity using ded- icated software.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content=' Literary and Linguistic Computing 15(3):323–338.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content=' https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='org/ 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='1093/llc/15.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='323 [11] Piantadosi ST (2014) Zipf’s word frequency law in natural language: a critical review and future directions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content=' Psychonomic bulletin & review 21:1112–30.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content=' https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='3758/ s13423-014-0585-6 [12] Roswell M, Dushoff J, Winfree R (2021) A conceptual guide to measuring species diversity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content=' Oikos 130(3):321–338.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content=' https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='org/10.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='1111/oik.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='07202, URL https://onlinelibrary.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='wiley.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='com/doi/abs/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content=' 1111/oik.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='07202 [13] Shannon CE (1948) A mathematical theory of communication.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content=' The Bell System Techni- cal Journal 27(3):379–423.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content=' https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='org/ 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='1002/j.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='1538-7305.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='1948.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='tb01338.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='x [14] Smith-Yoshimura K (2020) Transitioning to the next generation of metadata.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content=' https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content=' org/https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='25333/rqgd-b343 [15] World Wide Web Consortium (2013) SPARQL query language for RDF.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content=' URL https://www.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='w3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='org/TR/ sparql11-overview/ [16] World Wide Web Consortium (2014) RDF 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='1 concepts and abstract syntax.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content=' URL https: Springer Nature 2021 LATEX template Measuring the diversity of metadata 11 //www.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/vdAzT4oBgHgl3EQfP_uD/content/2301.01193v1.pdf'} +page_content='w3.' 
diff --git a/w9FRT4oBgHgl3EQfhDe7/content/2301.13582v1.pdf b/w9FRT4oBgHgl3EQfhDe7/content/2301.13582v1.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..61e65e3acf4b75e89884601d2405f0ff28aaa772
--- /dev/null
+++ b/w9FRT4oBgHgl3EQfhDe7/content/2301.13582v1.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8533c0c2b36eb7d68343b1b2644a1f1d94ae9d77b28dbf8b96175965a7bc004a
+size 535491
diff --git a/ytFKT4oBgHgl3EQfMC3E/vector_store/index.faiss b/ytFKT4oBgHgl3EQfMC3E/vector_store/index.faiss
new file mode 100644
index 0000000000000000000000000000000000000000..365966038175f8eafd533fde03dfaf19e99328ec
--- /dev/null
+++ b/ytFKT4oBgHgl3EQfMC3E/vector_store/index.faiss
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0a48b0ff441a22c6547eef930845ccc9c96aa35fb422edf4312ecf767f34a5a8
+size 6815789
diff --git a/zNAyT4oBgHgl3EQf0vni/content/tmp_files/2301.00725v1.pdf.txt b/zNAyT4oBgHgl3EQf0vni/content/tmp_files/2301.00725v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..74ef5295bfababef8e02a3adaddd0618552f964a
--- /dev/null
+++ b/zNAyT4oBgHgl3EQf0vni/content/tmp_files/2301.00725v1.pdf.txt
@@ -0,0 +1,2672 @@

Learning Invariance from Generated Variance for Unsupervised Person Re-identification

Hao Chen, Yaohui Wang, Benoit Lagadec, Antitza Dantcheva, Francois Bremond

Abstract—This work focuses on unsupervised representation learning in person re-identification (ReID). Recent self-supervised contrastive learning methods learn invariance by maximizing the representation similarity between two augmented views of the same image.
However, traditional data augmentation may introduce undesirable distortions of identity features, which is not always favorable in identity-sensitive ReID tasks. In this paper, we propose to replace traditional data augmentation with a generative adversarial network (GAN) targeted at generating augmented views for contrastive learning. A 3D mesh guided person image generator is proposed to disentangle a person image into id-related and id-unrelated features. Deviating from previous GAN-based ReID methods that only work in the id-unrelated space (pose and camera style), we conduct GAN-based augmentation on both id-unrelated and id-related features. We further propose specific contrastive losses to help our network learn invariance from id-unrelated and id-related augmentations. By jointly training the generative and the contrastive modules, our method achieves new state-of-the-art unsupervised person ReID performance on mainstream large-scale benchmarks.

Index Terms—Person re-identification, image synthesis, representation disentanglement, data augmentation, contrastive learning

1 INTRODUCTION

Given an image of a target person, a person re-identification (ReID) system [1], [2] aims at matching images of the same person across non-overlapping cameras. With the help of human-annotated labels, supervised person ReID methods [3], [4] have yielded impressive results. However, strong domain gaps usually exist between different domains, such as illumination conditions, camera properties and scenario variations. As shown in previous methods [5], [6], a ReID model trained on a specific domain hardly generalizes to other domains. One straightforward solution is to annotate and re-train the ReID model in a new domain, which is cumbersome and time-consuming for real-world deployments. Towards an automatic adaptive system, unsupervised person ReID [7], [8], [9] has attracted increasing attention in the research community.
Compared with supervised counterparts, unsupervised methods directly learn from unlabeled images and therefore offer better scalability in real-world deployments.

Recent self-supervised contrastive learning studies [10], [11] have shown promising performance in unsupervised representation learning. By maximizing the representation similarity between two different views (augmented versions) of the same image, contrastive methods learn representations that are invariant to different conditions. In this context, data augmentation plays a crucial role in mimicking real-world condition variance. Contrastive learning methods are able to build more robust representations when provided with better augmented views. Previous methods generally consider traditional data augmentation techniques, e.g., random flipping, cropping, color jittering, blurring and erasing [12]. However, these random augmentation techniques may cause undesirable distortion to crucial identity information. To overcome this issue, we propose to use a Generative Adversarial Network (GAN) [13] as an augmentation substitute, as it is able to disentangle a representation into id-related and id-unrelated features (see Table 1). More accurate augmented views can then be obtained by modifying a certain factor while preserving the others.

• H. Chen, Y. Wang, A. Dantcheva and F. Bremond are with Inria and Université Côte d'Azur, 2004 Route des Lucioles, 06902 Valbonne, France. E-mail: {hao.chen, yaohui.wang, antitza.dantcheva, francois.bremond}@inria.fr
• B. Lagadec is with European Systems Integration, 362 Avenue du Campon, 06110 Le Cannet, France. E-mail: benoit.lagadec@esifrance.net

Previous GAN-based unsupervised ReID methods [14], [15], [16], [17] often treat unsupervised ReID as an unsupervised domain adaptation task, which attempts to adapt a model trained on a labeled source domain to an unlabeled target domain.
Under this setting, it is intuitive to use GAN-based style transfer [18], [19] to generate source domain images in the style of a target domain. A model can then be re-trained on the generated target-style images with the source domain labels. However, unsupervised domain adaptation performance often relies strongly on the quality and scale of the source domain. Differently, we treat unsupervised ReID as a contrastive representation learning task, where a source domain is not mandatory. To this end, we integrate a generative module and a contrastive module into a joint learning framework.

For the generative module, we propose a 3D mesh based generator. Conventional pose transfer methods [20], [21] use 2D pose [22] to guide generation, which does not preserve body shape information. 3D mesh recovery [23] jointly estimates body shape as well as 3D pose, which conserves more identity information for unsupervised ReID. We use 3D meshes to guide the generation; generated images in new poses are then used as augmented views in the contrastive module.

For the contrastive module, we use a clustering algorithm to generate pseudo labels, aiming at maximizing representation similarity between different views of the same pseudo identity. Our model attracts a generated view to its original view, while repulsing the generated view from images of different identities. The contrastive module permits an identity encoder to extract view-invariant identity features, which, in turn, improves the generation quality.

arXiv:2301.00725v1 [cs.CV] 2 Jan 2023

TABLE 1
Id-related and id-unrelated factors in a person image.
Id-related: cloth color, hair color, texture, body shape
Id-unrelated: pose, view-point, illumination, camera style, background

In our previous work [9], GAN-based augmentation was only conducted on id-unrelated features, which has been common practice in previous GAN-based ReID methods [20], [24], [25].
Modifying id-unrelated features allows for learning identity features that are more invariant to id-unrelated variations. In this paper, we explore the possibility of conducting GAN-based augmentation on the id-related features to further improve ReID performance. Inspired by Mixup [26], which interpolates two images to learn a smoother decision boundary between two classes, we propose to interpolate disentangled id-related features inside the generative module, namely Disentangled Mixup (D-Mixup). As shown in Table 2, if two persons P1 and P2 respectively wear red and yellow clothes, an in-between identity in orange clothes should be marked as 0.5P1 + 0.5P2. However, in a dataset, such a person in orange clothes is normally labeled as a totally different identity P3, which hinders a network from learning the accurate relationship between different identities. Compared to traditional image-level Mixup [26] and feature-level Mixup [27], our proposed D-Mixup generates more accurate in-between identity images, which are more suitable for fine-grained person ReID. With D-Mixup, we aim to make our network understand that the mixed identity 0.5P1 + 0.5P2 is not related to id-unrelated features (pose and view-point), but only to id-related features (cloth color).

To summarize, our contributions include the following:
• We propose a 3D mesh guided generator to disentangle representations into id-related and id-unrelated features. Two novel data augmentation techniques are proposed on id-unrelated and id-related features, respectively.
• We propose Rotation Contrast and Mixup Contrast modules to learn invariance from id-unrelated and id-related augmented views, respectively.
• We propose an enhanced joint generative and contrastive learning framework. We comprehensively investigate how the generative and contrastive modules mutually promote each other and contribute to unsupervised ReID performance.
• Extensive experiments validate the superiority of the proposed GAN-based augmentation over traditional augmentation for unsupervised person ReID. Our method achieves new state-of-the-art unsupervised person ReID performance on mainstream image-based datasets, including Market-1501, DukeMTMC-reID and MSMT17.
• Our method can also be applied to video-based person ReID, where it significantly outperforms previous unsupervised video person ReID methods on the MARS and DukeMTMC-VideoReID datasets.

TABLE 2
Interpolation results between two random persons P1 and P2 with image-level Mixup [26], feature-level Mixup (F-Mixup) [27] and our proposed disentangled Mixup (D-Mixup). To visualize results from F-Mixup, we follow AMR [28] to train a VAE-GAN for mixed image reconstruction. Our D-Mixup only interpolates disentangled identity features in the generation, which alleviates noise from mixed structural features. [Images omitted; the inputs are labeled 1.0P1 + 0.0P2 and 0.0P1 + 1.0P2, and each interpolated result is labeled 0.5P1 + 0.5P2.]

2 RELATED WORK

2.1 Contrastive learning

Contrastive learning [29] has shown impressive performance for un-/self-supervised representation learning [10], [11], [30], [31], [32], [33]. Such contrastive methods target learning representations that are invariant to different distortions by attracting positive pairs while repulsing negative pairs. For each image, a positive pair can be constituted by two augmented views, whereas all other images in a dataset are regarded as negative samples. Contrastive learning methods benefit from a set of well defined data augmentation techniques, which can mimic real-world image distortions. For example, MoCo [11] used random cropping, color jittering, horizontal flipping and grayscale conversion to obtain positive view pairs. As an extension, MoCo-v2 [34] included blurring and stronger color distortion, which enhanced the original method.
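Such two-view generation can be sketched as follows. This is a minimal NumPy stand-in for the torchvision transform pipelines these methods actually use; the 24-pixel crop size and 0.5 flip probability are illustrative assumptions, not any cited method's exact recipe:

```python
import numpy as np

def random_view(img, crop=24, rng=None):
    """Return one randomly cropped and possibly flipped view of `img` (H x W x C)."""
    rng = rng or np.random.default_rng()
    h, w = img.shape[:2]
    top = rng.integers(0, h - crop + 1)
    left = rng.integers(0, w - crop + 1)
    view = img[top:top + crop, left:left + crop]
    if rng.random() < 0.5:  # random horizontal flip
        view = view[:, ::-1]
    return view

def two_views(img, crop=24, seed=0):
    """Two independently augmented views of the same image form a positive pair."""
    rng = np.random.default_rng(seed)
    return random_view(img, crop, rng), random_view(img, crop, rng)

img = np.arange(32 * 32 * 3, dtype=np.float32).reshape(32, 32, 3)
v1, v2 = two_views(img)
print(v1.shape, v2.shape)  # (24, 24, 3) (24, 24, 3)
```

Two such views of one image serve as a positive pair, while views of all other images serve as negatives.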
However, most data augmentation settings in contrastive learning methods were designed for general image classification datasets, e.g., ImageNet [35]. These traditional augmentation techniques are not always suitable for color-sensitive person ReID, especially those that introduce strong color distortion.

2.2 Data augmentation

As a technique to constitute positive pairs, data augmentation plays an important role in contrastive learning. Recently, GAN and Mixup have provided new approaches for data augmentation in person ReID.

2.2.1 GAN-based augmentation

Zheng et al. [36] unconditionally generated a large number of unlabeled person images with DCGAN [37] to enlarge the data volume for supervised ReID. Subsequent GAN-based methods were usually conditioned on some of the factors from Table 1. 1) Pose: With the guidance of 2D poses, FD-GAN [20] and PN-GAN [38] generated a target person in new poses to learn pose-irrelevant representations for single-domain supervised ReID. A similar pose transfer [21] was then proposed to address unsupervised domain adaptive (UDA) ReID. 2) Dataset style (illumination): As a dataset is usually recorded under a uniform illumination condition, PTGAN [14] and SyRI [15] used CycleGAN [39] to minimize the domain gap between different datasets by generating person images in the style of a target domain. 3) Camera style: Instead of the general dataset style, CamStyle [24] transferred images captured from one camera into the style of another camera, in order to reduce inter-camera style gaps. A similar method [16] was then applied to UDA ReID. 4) Background: SBSGAN [40] and CR-GAN [41] were targeted at removing and switching the background of a person image, respectively, to mitigate background influence for UDA ReID.
5) General structure: By switching global- and local-level identity-unrelated features, IS-GAN [42] disentangled a representation into identity-related and identity-unrelated features without any concrete guidance. As a concrete guidance, a gray-scaled image contains multiple id-unrelated factors of a person image, including pose, background and carrying structures. By recoloring gray-scaled person images with the color distribution of other images, DG-Net [25] and DG-Net++ [17] learned disentangled identity representations invariant to structure factors. Our proposed 3D mesh guided generator shares certain similarity with pose transfer and DG-Net++. However, both pose transfer and DG-Net++ lose body shape information, which can be conserved by 3D meshes. Moreover, as opposed to DG-Net++, we do not transfer style in a cross-domain manner, which allows our method to operate without a source domain.

2.2.2 Mixup

Mixup [26] is a simple yet effective data augmentation technique that interpolates two samples and their labels into one new in-between sample, which encourages a smoother decision boundary between two classes. The interpolation can be conducted between two images [26], [43], two feature representations [27], or two portions of different images [44]. Initially proposed for supervised image classification [26], [43], Mixup has been successfully extended to semi-supervised learning [45], [46], unsupervised domain adaptation [47], as well as novel class discovery [48]. AugMix [49] combines multiple augmented versions of an image into a mixed image and proves that such a technique can enhance robustness on corrupted data. CAIL [50] applies image-level Mixup between a source domain image and a target domain image to create a between-domain person image, which facilitates cross-domain knowledge transfer in unsupervised domain adaptive ReID.
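The interpolation itself is short; a minimal plain-Python sketch on toy 2-dimensional samples (α = 0.4 is an arbitrary choice here, not any cited method's setting):

```python
import random

def mixup(x1, y1, x2, y2, alpha=0.4, lam=None):
    """Image-level Mixup: interpolate two samples and their (one-hot) labels."""
    if lam is None:
        lam = random.betavariate(alpha, alpha)  # lam ~ Beta(alpha, alpha)
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y, lam

# With lam = 0.5 the mixed sample lies exactly halfway between the inputs.
x, y, lam = mixup([0.0, 0.0], [1, 0], [1.0, 1.0], [0, 1], lam=0.5)
print(x, y)  # [0.5, 0.5] [0.5, 0.5]
```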
The above methods usually interpolate whole images or whole representations, resulting in noise from overlapping person structures. To reduce the noise from mixed person structures, we propose to interpolate only disentangled identity features, which is compatible with our proposed 3D mesh guided GAN.

2.3 Unsupervised person ReID

Depending on the necessity of a large-scale labeled source dataset, unsupervised person ReID methods can be roughly categorized into unsupervised domain adaptive (UDA) and fully unsupervised ReID. We note that the above-mentioned GAN-based unsupervised ReID methods [14], [15], [16], [17], [21], [41] fall into the setting of UDA ReID. Several works [51], [52] leveraged semantic attributes to facilitate the domain adaptation. Another prominent approach assigns pseudo labels to unlabeled images and conducts pseudo label learning [7], [8], [50], [53], [54], [55], [56]. Pseudo labels can be obtained by existing clustering algorithms, e.g., K-means [8] and DBSCAN [17], [55], or by newly designed pseudo labeling algorithms [53], [56]. Since the performance of UDA ReID is highly correlated with the scale and quality of a source domain, recent fully unsupervised ReID methods have attracted more attention. Most previous fully unsupervised methods [57], [58], [59], [60], [61] were based on pure pseudo label learning. Our previous method GCL [9] entailed a hybrid GAN and pseudo label learning method, which is compatible with both UDA and fully unsupervised settings. Here we propose a new id-related augmentation, D-Mixup, which enhances our framework to achieve new state-of-the-art performance under both UDA and fully unsupervised settings.

3 METHOD

In this paper, we propose an enhanced joint Generative and Contrastive Learning (GCL+) framework for unsupervised person ReID. We define unsupervised ReID as a problem of learning invariance from self-augmented variance. As illustrated in Fig. 1
(a), the proposed GCL+ consists of two modules: a generative module that provides GAN-based augmented views, and a contrastive module that learns invariance from the augmented views. The two modules are coupled by a shared identity encoder. After the joint training, only the shared identity encoder is kept for inference. In the following sections, we provide details on both modules. To facilitate the reading, we include a list of abbreviations in Supplementary Materials Section C.

3.1 Generative Module

Our generative module is composed of four networks: an identity encoder E_id, a structure encoder E_str, a decoder G and a discriminator D. Given an unlabeled person ReID dataset X = {x_1, x_2, ..., x_N}, we use the prominent algorithm HMR [23] to generate corresponding 3D meshes, which are then used as structure guidance in the generative module. By recoloring a specific 3D mesh to reconstruct a real image, a person representation can be disentangled into identity and structure features. We conduct data augmentation along two pathways: one on id-unrelated structure features with rotated meshes, and one on identity features with D-Mixup.

3.1.1 Mesh-guided Rotation (id-unrelated augmentation)

As shown in Fig. 1 (b), given a person image and an estimated 3D mesh, we denote the 2D projection of the mesh as the original structure s_ori.
To mimic real-world camera view-points, as shown in Table 3, we rotate the 3D mesh by 45°, 90°, 135°, 180°, 225°, 270° and 315° and randomly take one 2D projection from these rotated meshes as a new structure s_new.

Fig. 1. (a) General architecture of GCL+: the framework is composed of a generative module (b, c) and a contrastive module (d, e), which are coupled by the shared identity encoder E_id. (b) Mesh rotation (id-unrelated augmentation): the decoder G combines the identity features encoded by E_id and the structure features encoded by E_str to generate an augmented view x'_new with a cycle consistency. (c) D-Mixup (id-related augmentation): the decoder G generates an identity-mixed augmented view x'_mix from the mixed identity features. (d) Rotation Contrast: view-point invariance is enhanced by maximizing the agreement between the original E_id(x), synthesized E_id(x'_new) and memory f_pos representations. (e) Mixup Contrast: a smoother decision boundary can be learnt with x'_mix and the interpolated pseudo label.

The unlabeled image is encoded to identity features by the identity encoder E_id : x → f_id, while both the original and new structures are encoded to structure features by the structure encoder E_str : s_ori → f_str(ori), s_new → f_str(new).
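The rotate-then-project step can be sketched as follows. This is a toy orthographic projection of bare 3D vertices; the actual pipeline renders full HMR meshes, and the vertex coordinates here are made up for illustration:

```python
import math
import random

ANGLES = [45, 90, 135, 180, 225, 270, 315]  # yaw rotations applied to the mesh

def rotate_y(verts, deg):
    """Rotate 3D vertices (x, y, z) around the vertical axis by `deg` degrees."""
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return [(c * x + s * z, y, -s * x + c * z) for x, y, z in verts]

def project_2d(verts):
    """Orthographic projection onto the image plane: drop the depth coordinate."""
    return [(x, y) for x, y, _ in verts]

verts = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]  # two toy mesh vertices
s_ori = project_2d(verts)                   # original structure
s_new = project_2d(rotate_y(verts, random.choice(ANGLES)))  # one rotated view
print(s_ori)  # [(1.0, 0.0), (0.0, 1.0)]
```

Each rotated projection supplies a new structure code while the identity features stay fixed, so the generated image changes view-point but not identity.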
Combining the identity and structure features, the decoder generates synthesized images G : (f_id, f_str(ori)) → x'_ori, (f_id, f_str(new)) → x'_new, where a prime is used to denote generated images.

As we do not have real images in the new structures (paired data), a cycle consistency reconstruction [39] becomes indispensable for the generative module. We encode the generated image in the new structure x'_new and decode once again to get synthesized images in the original structures, G(E_id(x'_new), s_ori) → x''_ori, where double primes denote cycle-generated images. We calculate an ℓ1 image reconstruction loss between the original image x, the generated image x'_ori and the cycle-generated image:

L_img = E[ ||x − x'_ori||_1 ] + E[ ||x − x''_ori||_1 ].   (1)

To enhance the disentanglement in the cycle consistency reconstruction, we also calculate an ℓ1 feature reconstruction loss:

L_feat = E[ ||f_id − E_id(x'_new)||_1 ] + E[ ||f_id − E_id(x''_ori)||_1 ].   (2)

The discriminator D attempts to distinguish between real and generated images with adversarial losses:

L_adv = E[ log D(x) + log(1 − D(x'_ori)) ] + E[ log D(x) + log(1 − D(x'_new)) ] + E[ log D(x) + log(1 − D(x''_ori)) ].   (3)

Remark. As shown in Fig. 2, we can switch 2D gray images [17], [25], switch meshes between random persons, or rotate a person's own mesh to introduce new structures as generation guidance. Although stronger pose and view-point variances can be introduced into the generation, random switching hinders the conservation of body shape information. After testing, we find that the most appropriate way to preserve body shape and generate accurate images is mesh rotation, which yields higher performance in Table 4.

TABLE 3: Examples of 3D mesh guided generation on the Market-1501 dataset. Each mesh is rotated by 45°, 90°, 135°, 180°, 225°, 270° and 315°.

3.1.2 D-Mixup (id-related augmentation)
As shown in Fig. 1.
(c), given two random person images x_i and x_j in a mini-batch, we encode the images into identity features E_id(x_i) → f_id(i) and E_id(x_j) → f_id(j). We follow the original Mixup [26] in using a Beta distribution with a hyper-parameter α to randomly sample a mixing coefficient λ:

λ ~ Beta(α, α),  λ* = max(λ, 1 − λ),
f_id(mix) = λ* · f_id(i) + (1 − λ*) · f_id(j),   (4)

where λ* renders the mixed identity more similar to x_i. To conserve the corresponding body shape information, we use the original structure of x_i, rather than that of x_j, as the generation guidance. A mixed person image (see more interpolated examples in Fig. 3) can be generated by combining the mixed identity features and the original structure features, G(f_id(mix), s_ori(i)) → x'_mix. The discriminator D attempts to distinguish between real and mixed images with the adversarial loss:

L_adv_mix = E[ log D(x) + log(1 − D(x'_mix)) ].   (5)

More discussion of feature regularization losses is provided in Supplementary Materials Section A.

3.1.3 Overall generative loss
The overall GAN loss combines the above losses (1), (2), (3) and (5) with a weighting coefficient λ_recon:

L_gan = λ_recon (L_img + L_feat) + L_adv + L_adv_mix.   (6)

Fig. 2. Different ways of introducing structural variance (2D gray image switch [25], mesh switch and mesh rotation) into the generation.

TABLE 4: Performance comparison of rotating one mesh and switching two random meshes in the generation.

| Method | Duke→Market mAP / Rank1 | Market→Duke mAP / Rank1 |
| 2D gray image switch [25] | 60.1 / 78.8 | 59.5 / 76.2 |
| Mesh switch | 74.2 / 88.5 | 60.6 / 76.9 |
| Mesh rotation | 74.4 / 89.7 | 61.3 / 78.0 |

3.2 Contrastive Module
The described generative module generates augmented views of a person image, which can form positive view pairs for the contrastive module.
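The D-Mixup interpolation of Eq. (4) amounts to a few lines. Below is a hedged NumPy sketch (the function name and array types are our illustration, not the paper's implementation):

```python
import numpy as np

def d_mixup(f_i, f_j, alpha=0.6, rng=None):
    """Mix two identity feature vectors as in Eq. (4).

    lam_star = max(lam, 1 - lam) >= 0.5, so the mixed identity stays
    closer to f_i and the original structure of x_i remains a valid
    generation guidance.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    lam_star = max(lam, 1.0 - lam)
    f_mix = lam_star * f_i + (1.0 - lam_star) * f_j
    return f_mix, lam_star
```

The same coefficient λ* is later reused to build the mixed prototype of Eq. (13), which ties the generated view to its interpolated pseudo label.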
By maximizing the similarity between positive pairs, the shared identity encoder aims to build robust representations that are invariant to distortions. For one identity, there are commonly several positive images in the dataset, recorded in different poses, camera styles and backgrounds. Only maximizing the similarity between an image and its self-augmented views leads to sub-optimal performance. Moreover, previous methods [10], [11] have demonstrated the effectiveness of mining a large number of negative samples in contrastive learning.

In order to mine more positives and a large number of negatives, we generate pseudo labels on a memory bank [30] that stores all representations M corresponding to the dataset images X. Given a representation f^t in the current epoch, the corresponding memory bank representation M[i] is updated with a momentum hyper-parameter β:

M[i]^t = β · M[i]^{t−1} + (1 − β) · f^t,   (7)

where M[i]^t and M[i]^{t−1} refer to the memory bank representations in epochs t and t−1, respectively. The memory bank stores moving-averaged representations, which stabilize the pseudo label generation. To further enhance the pseudo label quality, we compute the k-reciprocal re-ranked Jaccard distance [62] between memory bank representations, which is then fed into the clustering algorithm DBSCAN [63] to generate pseudo labels Y = {y_1, y_2, ..., y_N}. During training, the pseudo labels are renewed at the beginning of each epoch. We design a Rotation Contrast and a Mixup Contrast respectively for the two types of generated views.

3.2.1 Rotation Contrast (for id-unrelated augmentation)
As shown in Fig. 1 (d), the original image x and the generated image x'_new are encoded by the shared identity encoder into two identity feature vectors E_id(x) → f and E_id(x'_new) → f'_new. For a representation f with a pseudo label y_i, we randomly sample a positive representation f_pos

Fig. 3.
Linear interpolation of disentangled identity features between two persons, respectively from Market-1501 and DukeMTMC-reID.

of the same pseudo label y_i and K negative representations of pseudo labels different from y_i from the memory bank. Three positive pairs can be formed, i.e., (f, f_pos), (f, f'_new) and (f_pos, f'_new). The representation f'_new and the K sampled negative representations from the memory bank form K negative pairs. We define three view-invariant losses to attract the three positive pairs while repulsing the K negative pairs:

L_vi = E[ log(1 + Σ_{i=1..K} exp(⟨f'_new · k_i⟩/τ) / exp(⟨f · f_pos⟩/τ)) ],   (8)

L'_vi = E[ log(1 + Σ_{i=1..K} exp(⟨f'_new · k_i⟩/τ) / exp(⟨f'_new · f⟩/τ)) ],   (9)

L''_vi = E[ log(1 + Σ_{i=1..K} exp(⟨f'_new · k_i⟩/τ) / exp(⟨f'_new · f_pos⟩/τ)) ],   (10)

where ⟨·⟩ denotes the cosine similarity between two feature vectors, τ is a temperature hyper-parameter that sharpens the cosine similarity, and k_i denotes negative representations sampled from the memory bank. The three presented loss functions enable the contrastive module to maximize the similarity between the original view f, the generated view f'_new and the positive memory view f_pos. At the same time, the similarity between the generated view f'_new and the K negative memory views is minimized, which encourages the generative module to refine the generated view f'_new so that it differs from a large number of negative samples.

3.2.2 Mixup Contrast (for id-related augmentation)
The mixed image x'_mix is encoded by the shared identity encoder into a mixed identity feature vector E_id(x'_mix) → f'_mix, see Fig. 1 (e). Towards learning a smoother decision boundary between two clusters, as illustrated in Fig. 4, we design a Mixup Contrast for f'_mix. As certain instances in a cluster are close to the decision boundary between two

Fig. 4.
Mixup Contrast targets learning a smoother decision boundary between two persons P1 and P2 by contrasting in-between samples with in-between prototypes.

clusters, whereas the others are far away, we define an averaged prototype for a cluster:

p_a = (1 / N_a) Σ_{M[i] ∈ y_a} M[i],   (11)

where N_a is the number of instances belonging to cluster a.

Given a random image representation f, we use a softmax cross-entropy loss L_proto to make f converge to the cluster prototype, which encourages the compactness of a cluster:

L_proto = E[ log(1 + Σ_{i=1..|Y|−1} exp(f · p_i) / exp(f · p_+)) ],   (12)

where p_+ is the corresponding prototype of f and p_i denotes the other cluster prototypes; |Y| is the number of clusters. Given that certain clusters may contain more instances that are close to decision boundaries with other clusters, compact clusters provide stable mixed prototypes.

Based on the pseudo labels, we define a mixed prototype vector between two clusters i and j:

p_mix = λ* · p_i + (1 − λ*) · p_j,   (13)

where λ* is the same mixing coefficient as in Eq. (4).

For the mixed representation f'_mix, we use another softmax cross-entropy loss to maximize its similarity with the mixed prototype p_mix and minimize its similarity with the |Y| − 2 negative prototypes that do not belong to the two clusters i and j:

L_mix = E[ log(1 + Σ_{i=1..|Y|−2} exp(f'_mix · p_i) / exp(f'_mix · p_mix)) ].   (14)

As opposed to the cosine similarity in Eq. (8), (9) and (10), we do not compute a normalized similarity here, as the averaging operation used to compute the prototype vectors acts as a normalization.

3.2.3 Overall contrastive loss
The overall contrastive loss combines the above losses (8), (9), (10), (12) and (14):

L_contrast = λ_vi (L_vi + L'_vi + L''_vi) + λ_mix (L_proto + L_mix).   (15)

3.3 Joint Training
Our proposed framework incorporates a generative module and a contrastive module.
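As a concrete illustration of the contrastive machinery above, the momentum update of Eq. (7) and the first view-invariant loss of Eq. (8) can be sketched in NumPy as follows (a hedged sketch: variable names, the explicit loop over negatives and the two-dimensional toy features are ours, not the paper's batched GPU implementation):

```python
import numpy as np

def momentum_update(mem, i, f, beta=0.2):
    """Memory bank update of Eq. (7): moving average with momentum beta."""
    mem[i] = beta * mem[i] + (1.0 - beta) * f
    return mem

def cos(a, b):
    """Cosine similarity between two 1-D feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def view_invariant_loss(f, f_new, f_pos, negatives, tau=0.04):
    """L_vi of Eq. (8): attract (f, f_pos), repulse f_new vs. negatives."""
    num = sum(np.exp(cos(f_new, k) / tau) for k in negatives)
    den = np.exp(cos(f, f_pos) / tau)
    return float(np.log(1.0 + num / den))
```

The two remaining losses of Eq. (9) and (10) only swap the denominator pair; the loss shrinks as the positive pair aligns and grows as the generated view drifts toward the negatives.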
The generative module disentangles a person image representation into identity and structure features, which allows learning purified identity features for person ReID. The contrastive module learns invariance by contrasting augmented images. If we replaced the GAN-based augmentation with traditional data augmentation techniques, both modules could be trained separately. However, separate training leads to sub-optimal performance for both of them. To address this issue, we couple the two modules with a shared identity encoder in a joint training framework. In this setting, both modules work collaboratively towards one objective: enhancing the discriminability of the identity representations. Inside GCL+, the generative module provides both id-unrelated and id-related augmentations for the contrastive module. In turn, the contrastive module maximizes the similarity between positive views while repulsing negative views, which refines the identity representations and improves the generation quality. Both modules mutually promote each other's performance in the joint training, leading to an optimal ReID performance. In our proposed framework, a forward propagation is first conducted on the generative module and subsequently on the contrastive module. A backward propagation is then conducted with an overall loss that combines Eq. (6) and Eq. (15):

L_overall = L_gan + L_contrast.   (16)

4 EXPERIMENT
4.1 Datasets and Evaluation Protocols
We evaluate our proposed method GCL+ on five mainstream person ReID benchmarks, including three image-based datasets, Market-1501 [64], DukeMTMC-reID [65] and MSMT17 [14], and two video-based datasets, MARS [66] and DukeMTMC-VideoReID [67]. The Market-1501 dataset was collected in front of a supermarket at Tsinghua University from 6 cameras. It is composed of 12,936 images of 751 identities for training and 19,732 images of 750 identities for testing.
DukeMTMC-reID was collected from 8 cameras installed on the campus of Duke University. It contains 16,522 images of 702 persons for training, and 2,228 query images and 17,661 gallery images of 702 persons for testing. MSMT17 is a large-scale ReID dataset, which includes 32,621 training images of 1,041 identities and 93,820 testing images of 3,060 identities, collected from 15 cameras deployed in both indoor and outdoor scenes. MARS is a large-scale video-based person ReID dataset. It contains 17,503 tracklets of 1,261 identities collected from 6 cameras, where 625 identities are used for training and the other 636 identities are used for testing. DukeMTMC-VideoReID is a video-based person ReID dataset derived from the DukeMTMC [65] dataset. It contains 2,196 training tracklets of 702 identities and 2,636 testing tracklets of another 702 identities.

As our method includes a GAN and a contrastive module, we report results for both unsupervised person ReID and generation quality evaluations. For the unsupervised person ReID evaluation, we provide results under both the unsupervised domain adaptation and the fully unsupervised settings. We report Cumulative Matching Characteristics (CMC) at Rank1, Rank5 and Rank10 accuracies, as well as mean Average Precision (mAP) on the testing set. For the generation quality evaluation, we conduct a qualitative comparison between our method and state-of-the-art methods on generated images.

4.2 Implementation details
We introduce implementation details pertaining to the network design and general training configurations, as well as the three-stage optimization.

Network design. Our network design for the identity encoder E_id, the structure encoder E_str, the decoder G and the discriminator D is mainly inspired by [17], [25]. In the following descriptions, we denote the size of feature maps as channel×height×width.
1) E_id is an ImageNet [35] pre-trained ResNet50 [68] with slight modifications. The original fully connected layer is replaced by a batch normalization layer and a fully connected embedding layer, which outputs identity representations f of size 512×1×1 for the contrastive module. In parallel, we add a part average pooling that outputs identity features f_id of size 2048×4×1 for the generative module. 2) E_str is composed of four convolutional and four residual layers, which output structure features f_str of size 128×64×32. 3) G contains four residual and four convolutional layers. Every residual layer contains two adaptive instance normalization layers [18] that transform f_id into scale and bias parameters. 4) D is a multi-scale PatchGAN [19] discriminator at 64×32, 128×64 and 256×128.

General training configurations. Our framework is implemented in PyTorch [69] and trained on one Nvidia V100 GPU. The inputs are resized to 256×128. We empirically set a large weight λ_recon = 5 for the reconstruction in Eq. (6). With a batch size of 16, we use SGD to train E_id and the Adam optimizer to train E_str, G and D. The learning rate is set to 1 × 10−4 for Adam and 3.5 × 10−4 for SGD, and both are multiplied by 0.1 after 10 epochs. The DBSCAN maximal neighborhood distance is set to 0.5 and the minimal sample number to 4. The number of negatives K is 8192. For testing, E_id outputs representations f of dimension 512. For video-based person ReID, due to GPU memory constraints, we randomly sample 2 frames per tracklet on MARS and 8 frames per tracklet on DukeMTMC-VideoReID for training. For testing, all the frames of each tracklet are used to calculate a unified tracklet representation for similarity ranking. Other settings are kept the same as in the image-based person ReID experiments.

Three-stage optimization. To reduce the noise from imperfect generated images at early epochs, we train the four networks E_id, E_str, G and D with a three-stage optimization.
Stage 1 (E_id warm-up): we use a state-of-the-art unsupervised ReID method, e.g., ACT [55], MMCL [59] or JVTC [60], to warm up E_id. Stage 2 (E_str, G and D warm-up): we freeze E_id and warm up E_str, G and D with only the overall GAN loss in Eq. (6) for 40 epochs. Stage 3 (joint training): we bring in the memory bank and the pseudo labels to jointly train the whole framework with the overall loss in Eq. (16) for another 20 epochs.

Fig. 5. Hyper-parameter analysis on α for the mixup coefficient on the Duke→Market and Market→Duke tasks.

Fig. 6. Hyper-parameter analysis on β for the memory momentum and τ for the contrastive temperature on the Duke→Market task.

4.3 Unsupervised ReID Evaluation
To validate the effectiveness of each component, we conduct a parameter analysis and ablation experiments with a JVTC [60] baseline. As JVTC+ is the enhanced version of JVTC with a camera temporal distribution post-processing, the performance boost from the post-processing is almost fixed. Thus, the ablation experiments show similar variance with the JVTC and JVTC+ baselines. We further compare our method with state-of-the-art unsupervised person ReID methods using three different baselines to show the generalizability of our method.

Fig. 7.
Hyper-parameter analysis on the balancing coefficients λ_recon for the reconstruction weight, λ_vi for the Rotation Contrast weight and λ_mix for the Mixup Contrast weight on the Duke→Market task.

TABLE 5: Performance under different clustering neighborhood distance thresholds. 'N' is the approximate number of pseudo-identities.

| Threshold | Duke→Market N / mAP / Rank1 | Market→Duke N / mAP / Rank1 |
| 0.4 | ~642 / 74.5 / 89.4 | ~840 / 60.9 / 77.1 |
| 0.45 | ~605 / 74.4 / 89.4 | ~810 / 61.2 / 77.4 |
| 0.5 | ~584 / 74.4 / 89.7 | ~786 / 61.3 / 78.0 |
| 0.55 | ~540 / 73.6 / 88.4 | ~744 / 61.1 / 76.8 |
| 0.6 | ~500 / 72.4 / 87.6 | ~697 / 60.7 / 77.7 |

4.3.1 Parameter analysis
Hyper-parameters, such as the mixing coefficient α, the memory momentum β and the view-invariant contrastive loss temperature τ, play important roles inside our proposed GCL+ framework. We vary their values to analyze the sensitivity of each hyper-parameter.

For the Beta distribution, a larger α results in a higher probability that λ is close to 0.5. The ReID performance on both the Duke→Market and Market→Duke tasks as a function of α is reported in Fig. 5. On both tasks, the optimal performance is achieved when α is around 0.6. Consequently, α is set to 0.6 in our framework.

The value of β controls the memory updating speed. The value of τ amplifies the cosine similarity between contrastive views. An overlarge or undersized value generally introduces more noise into the contrastive learning. We report the performance variation with respect to β and τ on the Duke→Market task in Fig. 6. We find that the performance is more sensitive to the similarity temperature τ. Based on the results, we set β to 0.2 and τ to 0.04.

The number of possible pseudo-identities N is related to the clustering hyper-parameters, such as the maximal neighborhood distance threshold and the minimal cluster sample number.
The distance threshold of DBSCAN is the maximal distance between two samples for one to be considered to be in the neighborhood of the other. A larger distance threshold enlarges the radius of a cluster, so that more samples are assigned to the same cluster (N becomes smaller). As shown in Table 5, the threshold value only slightly affects the ReID performance.

As our framework jointly optimizes the generative and contrastive modules, we set weight coefficients to balance the different loss functions of the two modules. We vary the balancing coefficients λ_recon, λ_vi and λ_mix in Eq. (6) and (15). The corresponding results are reported in Fig. 7. Overall, the different values in the tested range only slightly influence the final results. Based on the results, we set λ_recon = 5, λ_vi = 1 and λ_mix = 1.

4.3.2 Ablation study
Contrastive learning methods strongly rely on data augmentation to create different augmented views for contrasting. Our proposed GCL+ outperforms traditional contrastive learning methods by replacing traditional data augmentation techniques with GAN-based ones. To validate the effectiveness of our proposed GAN-based augmentation techniques and contrastive losses, we conduct ablation experiments on both the Market-1501 and DukeMTMC-reID datasets.

Data augmentation. Data augmentation techniques can be categorized into id-unrelated and id-related augmentation. Id-unrelated augmentation creates intra-image visual distortions. In contrast, id-related augmentation creates inter-image visual distortions, which affect image identities. We compare the results of traditional and generative data augmentation under the fully unsupervised setting and the domain adaptation setting in Table 6.
For traditional data augmentation, we use multiple popular person ReID

TABLE 6: Ablation study under the fully unsupervised and UDA settings on traditional (w/o GAN) and generative (w/ GAN) data augmentation for the contrastive module. 'Multi' refers to multiple commonly used data augmentation techniques for person ReID, including random flipping, padding, cropping and erasing. 'Rotation' refers to our proposed mesh-guided rotation. 'Mixup' is conducted at the image level, while 'F-Mixup' is conducted at the feature level. Each cell lists mAP / R1 / R5 / R10.

Fully unsupervised setting:
| Augmentation | Market-1501 | DukeMTMC-reID |
| Baseline (w/o GAN) | 47.2 / 75.4 / 86.7 / 90.5 | 43.9 / 66.8 / 77.6 / 81.0 |
| Multi (w/o GAN) | 58.2 / 81.1 / 91.0 / 93.5 | 50.8 / 70.8 / 80.9 / 83.8 |
| Multi + Mixup (w/o GAN) | 60.0 / 82.5 / 91.6 / 94.0 | 51.0 / 71.1 / 80.8 / 84.1 |
| Rotation (w/ GAN) | 63.8 / 83.4 / 91.8 / 94.3 | 53.1 / 72.8 / 81.2 / 83.7 |
| Rotation + Mixup (w/ GAN) | 65.9 / 84.8 / 92.5 / 94.3 | 54.3 / 73.6 / 82.5 / 84.9 |
| Rotation + F-Mixup (w/ GAN) | 66.1 / 84.3 / 92.4 / 94.6 | 54.2 / 73.7 / 82.4 / 85.5 |
| Rotation + D-Mixup (w/ GAN) | 66.3 / 85.3 / 92.9 / 94.6 | 54.6 / 74.2 / 82.8 / 85.6 |

UDA setting:
| Augmentation | Duke→Market | Market→Duke |
| Baseline (w/o GAN) | 65.0 / 85.7 / 93.4 / 95.9 | 56.5 / 73.9 / 84.4 / 87.8 |
| Multi (w/o GAN) | 70.4 / 86.9 / 94.3 / 95.8 | 57.0 / 74.2 / 84.2 / 87.2 |
| Multi + Mixup (w/o GAN) | 70.7 / 87.8 / 94.1 / 96.3 | 57.7 / 74.5 / 85.0 / 88.0 |
| Rotation (w/ GAN) | 72.5 / 88.7 / 94.8 / 96.3 | 59.9 / 75.9 / 86.2 / 88.5 |
| Rotation + Mixup (w/ GAN) | 73.0 / 88.9 / 94.8 / 96.4 | 60.4 / 76.5 / 85.9 / 88.3 |
| Rotation + F-Mixup (w/ GAN) | 72.7 / 88.8 / 95.1 / 96.3 | 60.2 / 76.7 / 86.1 / 88.1 |
| Rotation + D-Mixup (w/ GAN) | 74.4 / 89.7 / 95.5 / 96.7 | 61.3 / 78.0 / 86.8 / 89.1 |

TABLE 7: Ablation study on the three view-invariant losses in Rotation Contrast and the two prototype losses in Mixup Contrast.
| L_vi | L'_vi | L''_vi | L_proto | L_mix | Duke→Market mAP / R1 | Market→Duke mAP / R1 |
| ✓ | | | | | 61.6 / 82.4 | 51.7 / 70.6 |
| ✓ | ✓ | | | | 69.1 / 85.6 | 58.3 / 74.8 |
| ✓ | ✓ | ✓ | | | 72.5 / 88.7 | 59.9 / 75.9 |
| ✓ | ✓ | ✓ | ✓ | | 72.8 / 88.8 | 60.6 / 76.9 |
| ✓ | ✓ | ✓ | ✓ | ✓ | 74.4 / 89.7 | 61.3 / 78.0 |

Fig. 8. Normalized Mutual Information (NMI) during 20 joint training epochs on Market-1501. 'Trad' refers to traditional data augmentation techniques. 'Rot' refers to the id-unrelated mesh-guided rotation. 'Full' refers to combining the id-unrelated mesh-guided rotation and the id-related D-Mixup.

data augmentation techniques, including random flipping, padding, cropping and erasing [12], as id-unrelated augmentation, and Mixup [26] as id-related augmentation. Even with these traditional data augmentation techniques, our contrastive module significantly outperforms the baseline. When we replace the traditional data augmentation with generative data augmentation, the unsupervised person ReID performance is further improved. Our proposed mesh-guided rotation (Rotation) works better than the multiple commonly used data augmentation techniques (Multi) for id-unrelated augmentation. Meanwhile, our proposed D-Mixup achieves better performance than the image-level Mixup and the feature-level Mixup (F-Mixup) for id-related augmentation.

Effects on pseudo labels. Robust identity representations should have better intra-class compactness and inter-class separability, which leads to a better pseudo label quality. We evaluate our pseudo label quality by measuring the Normalized Mutual Information (NMI) [71] between our pseudo labels and the ground truth labels. As illustrated in Fig. 8, traditional data augmentation (Trad) works well at the beginning, but ends up with a worse quality. We argue that traditional data augmentation introduces undesirable distortions on identity features, which easily leads to over-fitting for id-sensitive tasks.
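The NMI score used in Fig. 8 can be computed as below. This is a pure-Python sketch using the geometric-mean normalization, which is one common choice; the exact normalization variant of [71] may differ:

```python
import numpy as np
from collections import Counter

def entropy(labels):
    """Shannon entropy (nats) of a label assignment."""
    n = len(labels)
    return -sum((c / n) * np.log(c / n) for c in Counter(labels).values())

def nmi(pred, true):
    """Normalized Mutual Information between two label assignments."""
    n = len(pred)
    joint = Counter(zip(pred, true))
    cp, ct = Counter(pred), Counter(true)
    # Mutual information from the joint and marginal label counts.
    mi = sum((c / n) * np.log(c * n / (cp[p] * ct[t]))
             for (p, t), c in joint.items())
    h = np.sqrt(entropy(pred) * entropy(true))
    return mi / h if h > 0 else 1.0
```

NMI is invariant to label permutations, so the pseudo labels do not need to match the ground-truth identity indices, only their grouping.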
In contrast, GAN-based augmentation introduces more noise at the beginning, but avoids over-fitting in the final training epochs. In addition, our full GCL+ (Full) conducts both GAN-based id-unrelated and id-related augmentation, which achieves a better pseudo label quality than the id-unrelated mesh-guided rotation alone (Rot).

Contrastive loss. To learn maximal invariance from the generated and memory-stored images, we form three positive pairs for Rotation Contrast, namely (f, f_pos), (f, f'_new) and (f_pos, f'_new). By maximizing the similarity between these three positive pairs in Eq. (8), (9) and (10), our objective is to build identity representations that are invariant to instance-level pose, view-point and background variance. Meanwhile, we use identity prototypes and mixed prototypes in Mixup Contrast to learn a smoother class-level decision boundary with Eq. (12) and (14). To confirm the contribution of these contrastive losses, we gradually add each of them into our framework and report the corresponding results in Table 7. The results indicate that our proposed contrastive losses effectively contribute to learning robust representations for unsupervised person ReID.

4.3.3 Comparison with state-of-the-art methods
Image-based person ReID. We compare our proposed GCL+ with state-of-the-art unsupervised ReID methods under three purely unsupervised and four unsupervised domain adaptation evaluation protocols. We evaluate the performance of GCL+ with different baselines, including MMCL [59], JVTC [60] and ACT [55], to demonstrate the generalizability of our proposed method.

Under the fully unsupervised setting, we report the associated results on the Market-1501, DukeMTMC-reID and MSMT17 datasets in Table 8.
We first provide results of state-of-the-art methods, including BUC [57], SoftSim [58], TSSL [61], MMCL [59], JVTC [60], JVTC+ [60], MetaCam [70], as well as our previous work GCL [9], on the three datasets. Our proposed method GCL+ significantly improves the unsupervised person ReID performance from

TABLE 8: Comparison of fully unsupervised ReID methods (%) on the Market-1501, DukeMTMC-reID and MSMT17 datasets. We test our proposed method on several baselines, see names in parentheses. Each cell lists mAP / R1 / R5 / R10.

| Method | Reference | Market-1501 | DukeMTMC-reID | MSMT17 |
| BUC [57] | AAAI'19 | 29.6 / 61.9 / 73.5 / 78.2 | 22.1 / 40.4 / 52.5 / 58.2 | - |
| SoftSim [58] | CVPR'20 | 37.8 / 71.7 / 83.8 / 87.4 | 28.6 / 52.5 / 63.5 / 68.9 | - |
| TSSL [61] | AAAI'20 | 43.3 / 71.2 / - / - | 38.5 / 62.2 / - / - | - |
| MMCL [59] | CVPR'20 | 45.5 / 80.3 / 89.4 / 92.3 | 40.2 / 65.2 / 75.9 / 80.0 | 11.2 / 35.4 / 44.8 / 49.8 |
| JVTC [60] | ECCV'20 | 41.8 / 72.9 / 84.2 / 88.7 | 42.2 / 67.6 / 78.0 / 81.6 | 15.1 / 39.0 / 50.9 / 56.8 |
| JVTC+ [60] | ECCV'20 | 47.5 / 79.5 / 89.2 / 91.9 | 50.7 / 74.6 / 82.9 / 85.3 | 17.3 / 43.1 / 53.8 / 59.4 |
| MetaCam [70] | CVPR'21 | 61.7 / 83.9 / 92.3 / - | 53.8 / 73.8 / 84.2 / - | 15.5 / 35.2 / 48.3 / - |
| GCL(MMCL) [9] | CVPR'21 | 54.9 / 83.7 / 91.6 / 94.0 | 49.3 / 69.7 / 79.7 / 82.8 | - |
| GCL(JVTC) [9] | CVPR'21 | 63.4 / 83.7 / 91.6 / 94.3 | 53.3 / 72.4 / 82.0 / 84.9 | 18.0 / 41.6 / 53.2 / 58.4 |
| GCL(JVTC+) [9] | CVPR'21 | 66.8 / 87.3 / 93.5 / 95.5 | 62.8 / 82.9 / 87.1 / 88.5 | 21.3 / 45.7 / 58.6 / 64.5 |
| GCL+(MMCL) | This paper | 56.0 / 84.0 / 91.4 / 93.7 | 49.5 / 70.2 / 80.2 / 83.3 | - |
| GCL+(JVTC) | This paper | 66.3 / 85.3 / 92.9 / 94.6 | 54.6 / 74.2 / 82.8 / 85.6 | 19.2 / 44.7 / 56.4 / 61.4 |
| GCL+(JVTC+) | This paper | 69.3 / 89.0 / 94.6 / 96.0 | 63.5 / 83.1 / 87.4 / 88.8 | 22.0 / 47.9 / 61.3 / 67.1 |

TABLE 9: Comparison of unsupervised domain adaptive ReID methods (%) between the Market-1501, DukeMTMC-reID and MSMT17 datasets. We test our proposed method on several baselines, see names in parentheses.
Each cell lists mAP / R1 / R5 / R10.

| Method | Reference | Duke→Market | Market→Duke | Market→MSMT17 | Duke→MSMT17 |
| ECN [7] | CVPR'19 | 43.0 / 75.1 / 87.6 / 91.6 | 40.4 / 63.3 / 75.8 / 80.4 | 8.5 / 25.3 / 36.3 / 42.1 | 10.2 / 30.2 / 41.5 / 46.8 |
| PDA [21] | ICCV'19 | 47.6 / 75.2 / 86.3 / 90.2 | 45.1 / 63.2 / 77.0 / 82.5 | - | - |
| CR-GAN [41] | ICCV'19 | 54.0 / 77.7 / 89.7 / 92.7 | 48.6 / 68.9 / 80.2 / 84.7 | - | - |
| SSG [54] | ICCV'19 | 58.3 / 80.0 / 90.0 / 92.4 | 53.4 / 73.0 / 80.6 / 83.2 | 13.2 / 31.6 / 49.6 / - | 13.3 / 32.2 / 51.2 / - |
| MMCL [59] | CVPR'20 | 60.4 / 84.4 / 92.8 / 95.0 | 51.4 / 72.4 / 82.9 / 85.0 | 15.1 / 40.8 / 51.8 / 56.7 | 16.2 / 43.6 / 54.3 / 58.9 |
| ACT [55] | AAAI'20 | 60.6 / 80.5 / - / - | 54.5 / 72.4 / - / - | - | - |
| DG-Net++ [17] | ECCV'20 | 61.7 / 82.1 / 90.2 / 92.7 | 63.8 / 78.9 / 87.8 / 90.4 | 22.1 / 48.4 / 60.9 / 66.1 | 22.1 / 48.8 / 60.9 / 65.9 |
| JVTC [60] | ECCV'20 | 61.1 / 83.8 / 93.0 / 95.2 | 56.2 / 75.0 / 85.1 / 88.2 | 19.0 / 42.1 / 53.4 / 58.9 | 20.3 / 45.4 / 58.4 / 64.3 |
| ECN+ [56] | TPAMI'20 | 63.8 / 84.1 / 92.8 / 95.4 | 54.4 / 74.0 / 83.7 / 87.4 | 15.2 / 40.4 / 53.1 / 58.7 | 16.0 / 42.5 / 55.9 / 61.5 |
| JVTC+ [60] | ECCV'20 | 67.2 / 86.8 / 95.2 / 97.1 | 66.5 / 80.4 / 89.9 / 92.2 | 25.1 / 48.6 / 65.3 / 68.2 | 27.5 / 52.9 / 70.5 / 75.9 |
| MMT [8] | ICLR'20 | 71.2 / 87.7 / 94.9 / 96.9 | 65.1 / 78.0 / 88.8 / 92.5 | 22.9 / 49.2 / 63.1 / 68.8 | 23.3 / 50.1 / 63.9 / 69.8 |
| CAIL [50] | ECCV'20 | 71.5 / 88.1 / 94.4 / 96.2 | 65.2 / 79.5 / 88.3 / 91.4 | 20.4 / 43.7 / 56.1 / 61.9 | 24.3 / 51.7 / 64.0 / 68.9 |
| MetaCam [70] | CVPR'21 | 76.5 / 90.1 / - / - | 65.0 / 79.5 / - / - | - | - |
| GCL(ACT) [9] | CVPR'21 | 66.7 / 83.9 / 91.4 / 93.4 | 55.4 / 71.9 / 81.6 / 84.6 | - | - |
| GCL(JVTC) [9] | CVPR'21 | 73.4 / 89.1 / 95.0 / 96.6 | 60.4 / 77.2 / 86.2 / 88.4 | 21.5 / 45.0 / 57.1 / 66.5 | 24.9 / 50.8 / 63.4 / 68.9 |
| GCL(JVTC+) [9] | CVPR'21 | 75.4 / 90.5 / 96.2 / 97.1 | 67.6 / 81.9 / 88.9 / 90.6 | 27.0 / 51.1 / 63.9 / 69.9 | 29.7 / 54.4 / 68.2 / 74.2 |
| GCL+(ACT) | This paper | 67.5 / 84.3 / 92.6 / 94.2 | 56.8 / 73.5 / 82.8 / 85.1 | - | - |
| GCL+(JVTC) | This paper | 74.4 / 89.7 / 95.5 / 96.7 | 61.3 / 78.0 / 86.8 / 89.1 | 23.0 / 48.3 / 60.6 / 65.8 | 25.5 / 52.7 / 65.2 / 70.2 |
| GCL+(JVTC+) | This paper | 76.5 / 91.6 / 96.3 / 97.6 | 68.3 / 82.6 / 89.4 / 91.2 | 27.8 / 53.8 / 66.9 / 72.5 | 31.5 / 57.9 / 70.3 / 76.1 |

the three baselines MMCL, JVTC and JVTC+. The new D-Mixup and Mixup Contrast proposed in our framework GCL+ consistently surpass the performance of our previous work GCL with the three different baselines. With the strong baseline JVTC+, our method achieves state-of-the-art performance on the three datasets.

Under the unsupervised domain adaptation setting, we report the related results on four mainstream benchmarks, Duke→Market, Market→Duke, Market→MSMT17 and Duke→MSMT17, in Table 9. Our proposed method GCL+ again achieves better performance than state-of-the-art methods, including ECN [7], PDA [21], CR-GAN [41], SSG [54], MMCL [59], ACT [55], DG-Net++ [17], JVTC [60], ECN+ [56], JVTC+ [60], MMT [8], CAIL [50], MetaCam [70], as well as our previous work GCL [9]. Among these methods, PDA, CR-GAN and DG-Net++ share a certain similarity with our proposed method GCL+, in that they are based on GANs. However, PDA and DG-Net++ used either 2D skeletons or random gray-scale images as guidance, which cannot preserve body shape information. Further, PDA, CR-GAN and DG-Net++ did not manipulate identity features to generate in-between identity images. CAIL [50] considered a cross-domain Mixup, where interpolated structures may introduce more noise on identity features. Our proposed D-Mixup does not suffer from such interpolated structures. In addition, the cross-domain Mixup interpolates images from two domains, while our proposed D-Mixup interpolates intra-domain images, which is more flexible for fully unsupervised ReID.

Video-based person ReID. We compare our proposed GCL+ with state-of-the-art unsupervised video person ReID methods on the MARS and DukeMTMC-VideoReID datasets.
RACE [72] and EUG [67] leverage a labeled video tracklet per identity to initialize their models. These one-example video-based ReID methods cannot really be considered unsupervised. DAL [73], TAUDL [74] and UTAL [75] utilize camera labels of each tracklet and try to associate tracklets of a same person across different cameras. OIM [76], BUC [57] and TSSL [61] are fully unsupervised video person ReID methods. We use the fully unsupervised method BUC as our baseline. As shown in Table 10, our proposed methods GCL (view-point augmentation) and GCL+ (view-point and in-between identity augmentation) significantly outperform previous unsupervised video-based person ReID methods.

TABLE 10
Comparison with the state-of-the-art methods on two video-based re-ID datasets, MARS and DukeMTMC-VideoReID. The "Labels" column indicates the labels used in each method. "OneEx" denotes the one-example annotation per identity. "Camera" refers to camera annotation. "Baseline (BUC)" refers to our reproduced results.

| Method | Labels | MARS (mAP / R1 / R5 / R10) | DukeMTMC-VideoReID (mAP / R1 / R5 / R10) |
|---|---|---|---|
| RACE [72] | OneEx | 24.5 / 43.2 / 57.1 / 62.1 | - / - / - / - |
| EUG [67] | OneEx | 42.4 / 62.6 / 74.9 / - | 63.2 / 72.7 / 84.1 / - |
| DAL [73] | Camera | 23.0 / 49.3 / 65.9 / 72.2 | - / - / - / - |
| TAUDL [74] | Camera | 29.1 / 43.8 / 59.9 / 72.8 | - / - / - / - |
| UTAL [75] | Camera | 35.2 / 49.9 / 66.4 / 77.8 | - / - / - / - |
| OIM [76] | None | 13.5 / 33.7 / 48.1 / 54.8 | 43.8 / 51.1 / 70.5 / 76.2 |
| BUC [57] | None | 29.4 / 55.1 / 68.3 / 72.8 | 66.7 / 74.8 / 86.8 / 89.7 |
| TSSL [61] | None | 30.5 / 56.3 / - / - | 64.6 / 73.9 / - / - |
| Baseline (BUC [57]) | None | 32.0 / 51.1 / 66.5 / 71.6 | 67.1 / 72.9 / 86.2 / 90.0 |
| GCL | None | 48.6 / 64.8 / 77.5 / 82.0 | 75.9 / 80.1 / 90.5 / 93.7 |
| GCL+ | None | 50.1 / 66.5 / 78.7 / 82.2 | 76.3 / 80.9 / 91.5 / 94.2 |

4.4 Generation Quality Evaluation

4.4.1 Ablation study
We conduct a qualitative ablation study, presented in Fig.
9 to demonstrate that our proposed contrastive module can improve generative quality for person image generation. Unconditional GANs learn a data distribution via reconstruction and adversarial training on each image, and then generate new images that fit the learned distribution. However, unconditional GANs generate from the features of a single image and neglect the features shared across different images of one person (or class). Conditional GANs generally use human-annotated identity labels to learn shared class-level features, which are more view-invariant. Our proposed GCL+ introduces an unsupervised way to learn view-invariant class-level features for person image generation by contrasting pseudo positive views.

We illustrate two examples, respectively from the Market-1501 and DukeMTMC-reID datasets, in Fig. 9 to validate the effectiveness of our proposed contrastive module for person image generation. Given a target person, a robust identity representation should contain salient features shared by the majority of observations in different view-points and poses. When GCL+ is trained without Lcontrast, our generative module tends to focus only on salient features of the original image (the black backpack in the first example and the blue jacket in the second example), while neglecting salient features of other images of the same person (the yellow t-shirt in the first example and the red backpack in the second example). The contrastive module ensures the consistency of identity features for generation in different poses and view-points.

4.4.2 Comparison with state-of-the-art methods
We conduct a qualitative comparison between our proposed method GCL+ and state-of-the-art GAN-based person ReID methods, including FD-GAN [20], IS-GAN [42], DG-Net [25] and DG-Net++ [17].
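Contrasting pseudo positive views, as described above, is commonly implemented as an InfoNCE-style objective. The sketch below is our own minimal plain-Python illustration under that assumption, not necessarily the paper's exact loss: it pulls two views of the same image together against negatives from other identities:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(query, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss: -log softmax over similarities.

    `query` and `positive` are two views of the same identity (e.g. a
    real image and a generated rotation of it); `negatives` come from
    other identities. Minimizing the loss maximizes the query/positive
    similarity relative to the negatives.
    """
    logits = [cosine(query, positive) / temperature]
    logits += [cosine(query, n) / temperature for n in negatives]
    m = max(logits)  # subtract the max for a numerically stable softmax
    log_den = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_den - logits[0]
```

A query aligned with its positive and orthogonal to the negatives yields a near-zero loss, while a query that is closer to a negative than to its positive is heavily penalized.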
We re-implement these GAN-based person ReID methods based on their published source code and generate six images per real image of the Market-1501 dataset, as shown in Fig. 10. FD-GAN, IS-GAN and DG-Net are supervised methods, which rely on human-annotated labels to learn robust identity-level features. We observe that images generated by FD-GAN and IS-GAN suffer from evident visual blur, which may lose detailed identity information after generation. Compared to FD-GAN and IS-GAN, DG-Net can generate sharper images. However, using randomly switched gray-scaled images as guidance is prone to result in incoherent body shape and carrying. More comparison on the generative quality between FD-GAN, IS-GAN, DG-Net and our method is provided in Supplementary Materials Section B. As an UDA method, DG-Net++ uses cross-domain gray-scaled images as guidance, which, however, shares the same problems in generation as DG-Net.

Fig. 9. Qualitative ablation study on the effectiveness of the contrastive loss in Eq. (15) for generation quality. Lcontrast allows for preserving salient features from other views (the yellow t-shirt in the first example and the red backpack in the second example) in identity representations for generation in different poses and view-points.

TABLE 11
Examples of 3D mesh guided generation on the DukeMTMC-reID dataset (rotations of 0°, 45°, 90°, 135°, 180°, 225°, 270° and 315°).

Fig. 10. Comparison of generated images on the Market-1501 dataset. Examples of FD-GAN, IS-GAN, DG-Net, DG-Net++ and GCL+ are generated from the same real images shown in the figure. We note that DG-Net++ and GCL+ are unsupervised methods.

TABLE 12
Examples of 3D mesh guided generation on the MSMT17 dataset (rotations of 0°, 45°, 90°, 135°, 180°, 225°, 270° and 315°).
Different from DG-Net++, our proposed GCL+ is a fully unsupervised ReID method, which directly augments data diversity in the target domain without the need for a labeled source domain. Moreover, an image in GCL+ is generated from its own rotated mesh, which helps to conserve body shape information and does not add extra carrying structures. The generated images from GCL+ have higher quality and similarity to real images than the other methods. To validate the generative quality on the DukeMTMC-reID and MSMT17 datasets, we provide more examples in Table 11 and Table 12. Consistency in the id-related space and variance in the id-unrelated space validate the purity (disentanglement quality) of identity representations in our framework GCL+. We further provide tracklet examples before and after our view-point rotation for video-based person ReID in Fig. 11. The results show that our method also works well for video-based person ReID.

Fig. 11. Examples of tracklet frames before and after our view-point rotation. Tracklets are respectively sampled from the MARS and DukeMTMC-VideoReID datasets.

4.4.3 Failure case analysis
We show some failure cases from the rotation generative model in Fig. 12. When front-side and back-side patterns are inconsistent, rotation-based generation can hardly produce accurate images after a large rotation. For example, the model may consider visual patterns present only on the back side (the backpack in the first row) or only on the front side (the carrying objects in the second row) as whole-body appearance features for generation. One possible solution is to use a 3D human-object arrangement mesh generator [77] to help the generative model distinguish humans from objects.

5 CONCLUSION
In this paper, we propose an enhanced joint generative and contrastive learning (GCL+) framework for unsupervised person ReID.
The framework is composed of a generative module for data augmentation, as well as a contrastive module aimed at learning invariance from generated variance. For the generative module, we propose a 3D mesh guided GAN to realize id-unrelated and id-related augmentation by respectively rotating 3D meshes as generation guidance and interpolating two identity representations. For the contrastive module, we design Rotation Contrast and Mixup Contrast, respectively for the two data augmentation techniques, to learn robust identity representations. Extensive experiments are conducted to validate the superiority of the proposed GAN-based augmentation over traditional augmentation techniques for contrastive representation learning. The generative module benefits from the learned robust identity representations, which preserve fine-grained identity information for better generation quality. GCL+ outperforms state-of-the-art methods under both fully unsupervised and unsupervised domain adaptation settings. Moreover, our contrastive module can be regarded as a contrastive discriminator in a GAN, which provides a new unsupervised approach for identity-preserving person image generation.

Fig. 12. Failure cases of rotation-based generation (real image and rotations from 0° to 315°). First row: the backpack can be generated onto the front side. Second row: the carrying object can be generated onto the back side.

ACKNOWLEDGMENTS
This work has been supported by the French government, through the 3IA Côte d'Azur Investments in the Future project managed by the National Research Agency (ANR) with the reference number ANR-19-P3IA-0002. The authors are grateful to the OPAL infrastructure from Université Côte d'Azur for providing resources and support.

REFERENCES
[1] M. Ye, J. Shen, G. Lin, T. Xiang, L. Shao, and S. C. H. Hoi, “Deep learning for person re-identification: A survey and outlook,” IEEE TPAMI, 2021.
[2] S.
Karanam, M. Gou, Z. Wu, A. Rates-Borras, O. Camps, and R. Radke, “A systematic evaluation and benchmark for person re-identification: Features, metrics, and datasets,” IEEE TPAMI, 2019.
[3] Y. Sun, L. Zheng, Y. Yang, Q. Tian, and S. Wang, “Beyond part models: Person retrieval with refined part pooling (and a strong convolutional baseline),” in ECCV, 2018.
[4] H. Chen, B. Lagadec, and F. Bremond, “Learning discriminative and generalizable representations by spatial-channel partition for person re-identification,” in WACV, 2020.
[5] J. Song, Y. Yang, Y.-Z. Song, T. Xiang, and T. M. Hospedales, “Generalizable person re-identification by domain-invariant mapping network,” in CVPR, June 2019.
[6] X. Jin, C. Lan, W. Zeng, Z. Chen, and L. Zhang, “Style normalization and restitution for generalizable person re-identification,” in CVPR, June 2020.
[7] Z. Zhong, L. Zheng, Z. Luo, S. Li, and Y. Yang, “Invariance matters: Exemplar memory for domain adaptive person re-identification,” in CVPR, 2019.
[8] Y. Ge, D. Chen, and H. Li, “Mutual mean-teaching: Pseudo label refinery for unsupervised domain adaptation on person re-identification,” in ICLR, 2020.
[9] H. Chen, Y. Wang, B. Lagadec, A. Dantcheva, and F. Bremond, “Joint generative and contrastive learning for unsupervised person re-identification,” in CVPR, 2021.
[10] T. Chen, S. Kornblith, M. Norouzi, and G. Hinton, “A simple framework for contrastive learning of visual representations,” in ICML, 2020.
[11] K. He, H. Fan, Y. Wu, S. Xie, and R. Girshick, “Momentum contrast for unsupervised visual representation learning,” in CVPR, 2020.
[12] Z. Zhong, L. Zheng, G. Kang, S. Li, and Y. Yang, “Random erasing data augmentation,” in AAAI, 2020.
[13] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in NeurIPS, 2014.
[14] L. Wei, S. Zhang, W. Gao, and Q.
Tian, “Person transfer gan to bridge domain gap for person re-identification,” in CVPR, 2018.
[15] S. Bak, P. Carr, and J.-F. Lalonde, “Domain adaptation through synthesis for unsupervised person re-identification,” in ECCV, 2018.
[16] Z. Zhong, L. Zheng, S. Li, and Y. Yang, “Generalizing a person retrieval model hetero- and homogeneously,” in ECCV, 2018.
[17] Y. Zou, X. Yang, Z. Yu, B. V. K. V. Kumar, and J. Kautz, “Joint disentangling and adaptation for cross-domain person re-identification,” in ECCV, 2020.
[18] X. Huang and S. Belongie, “Arbitrary style transfer in real-time with adaptive instance normalization,” in ICCV, 2017.
[19] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” in CVPR, 2017.
[20] Y. Ge, Z. Li, H. Zhao, G. Yin, S. Yi, X. Wang, and H. Li, “Fd-gan: Pose-guided feature distilling gan for robust person re-identification,” in NeurIPS, 2018.
[21] Y.-J. Li, C.-S. Lin, Y.-B. Lin, and Y.-C. F. Wang, “Cross-dataset person re-identification via unsupervised pose disentanglement and adaptation,” in ICCV, 2019.
[22] Z. Cao, T. Simon, S.-E. Wei, and Y. Sheikh, “Realtime multi-person 2d pose estimation using part affinity fields,” in CVPR, 2017.
[23] A. Kanazawa, M. J. Black, D. W. Jacobs, and J. Malik, “End-to-end recovery of human shape and pose,” in CVPR, 2018.
[24] Z. Zhong, L. Zheng, Z. Zheng, S. Li, and Y. Yang, “Camera style adaptation for person re-identification,” in CVPR, 2018.
[25] Z. Zheng, X. Yang, Z. Yu, L. Zheng, Y. Yang, and J. Kautz, “Joint discriminative and generative learning for person re-identification,” in CVPR, 2019.
[26] H. Zhang, M. Cisse, Y. N. Dauphin, and D. Lopez-Paz, “mixup: Beyond empirical risk minimization,” in ICLR, 2018.
[27] V. Verma, A. Lamb, C. Beckham, A. Najafi, I. Mitliagkas, D. Lopez-Paz, and Y. Bengio, “Manifold mixup: Better representations by interpolating hidden states,” in ICML, 2019.
[28] C. Beckham, S.
Honari, V. Verma, A. M. Lamb, F. Ghadiri, R. D. Hjelm, Y. Bengio, and C. Pal, “On adversarial mixup resynthesis,” in NeurIPS, 2019.
[29] R. Hadsell, S. Chopra, and Y. LeCun, “Dimensionality reduction by learning an invariant mapping,” in CVPR, 2006.
[30] Z. Wu, Y. Xiong, S. X. Yu, and D. Lin, “Unsupervised feature learning via non-parametric instance discrimination,” in CVPR, 2018.
[31] M. Caron, I. Misra, J. Mairal, P. Goyal, P. Bojanowski, and A. Joulin, “Unsupervised learning of visual features by contrasting cluster assignments,” in NeurIPS, 2020.
[32] J.-B. Grill, F. Strub, F. Altché, C. Tallec, P. H. Richemond, E. Buchatskaya, C. Doersch, B. A. Pires, Z. D. Guo, M. G. Azar et al., “Bootstrap your own latent: A new approach to self-supervised learning,” in NeurIPS, 2020.
[33] X. Chen and K. He, “Exploring simple siamese representation learning,” in CVPR, 2021.
[34] X. Chen, H. Fan, R. Girshick, and K. He, “Improved baselines with momentum contrastive learning,” arXiv preprint arXiv:2003.04297, 2020.
[35] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. Berg, and L. Fei-Fei, “Imagenet large scale visual recognition challenge,” IJCV, 2015.
[36] Z. Zheng, L. Zheng, and Y. Yang, “Unlabeled samples generated by gan improve the person re-identification baseline in vitro,” in ICCV, 2017.
[37] A. Radford, L. Metz, and S. Chintala, “Unsupervised representation learning with deep convolutional generative adversarial networks,” in ICLR, 2016.
[38] X. Qian, Y. Fu, T. Xiang, W. Wang, J. Qiu, Y. Wu, Y.-G. Jiang, and X. Xue, “Pose-normalized image generation for person re-identification,” in ECCV, 2018.
[39] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, “Unpaired image-to-image translation using cycle-consistent adversarial networks,” in CVPR, 2017.
[40] Y. Huang, Q. Wu, J. Xu, and Y.
Zhong, “Sbsgan: Suppression of inter-domain background shift for person re-identification,” in ICCV, 2019.
[41] Y. Chen, X. Zhu, and S. Gong, “Instance-guided context rendering for cross-domain person re-identification,” in ICCV, 2019.
[42] C. Eom and B. Ham, “Learning disentangled representation for robust person re-identification,” in NeurIPS, 2019.
[43] Y. Tokozume, Y. Ushiku, and T. Harada, “Between-class learning for image classification,” in CVPR, 2018.
[44] S. Yun, D. Han, S. J. Oh, S. Chun, J. Choe, and Y. Yoo, “Cutmix: Regularization strategy to train strong classifiers with localizable features,” in ICCV, 2019.
[45] D. Berthelot, N. Carlini, I. Goodfellow, N. Papernot, A. Oliver, and C. Raffel, “Mixmatch: A holistic approach to semi-supervised learning,” in NeurIPS, 2019.
[46] D. Berthelot, N. Carlini, E. D. Cubuk, A. Kurakin, K. Sohn, H. Zhang, and C. Raffel, “Remixmatch: Semi-supervised learning with distribution matching and augmentation anchoring,” in ICLR, 2020.
[47] M. Xu, J. Zhang, B. Ni, T. Li, C. Wang, Q. Tian, and W. Zhang, “Adversarial domain adaptation with domain mixup,” in AAAI, 2020.
[48] Z. Zhong, L. Zhu, Z. Luo, S. Li, Y. Yang, and N. Sebe, “Openmix: Reviving known knowledge for discovering novel visual categories in an open world,” in CVPR, 2021.
[49] D. Hendrycks, N. Mu, E. D. Cubuk, B. Zoph, J. Gilmer, and B. Lakshminarayanan, “Augmix: A simple data processing method to improve robustness and uncertainty,” in ICLR, 2020.
[50] C. Luo, C. Song, and Z. Zhang, “Generalizing person re-identification by camera-aware invariance learning and cross-domain mixup,” in ECCV, 2020.
[51] J. Wang, X. Zhu, S. Gong, and W. Li, “Transferable joint attribute-identity deep learning for unsupervised person re-identification,” in CVPR, 2018.
[52] S. Lin, H. Li, C.-T. Li, and A. C. Kot, “Multi-task mid-level feature alignment network for unsupervised cross-dataset person re-identification,” in BMVC, 2018.
[53] H.-X.
Yu, W. Zheng, A. Wu, X. Guo, S. Gong, and J. Lai, “Unsupervised person re-identification by soft multilabel learning,” in CVPR, 2019.
[54] Y. Fu, Y. Wei, G. Wang, Y. Zhou, H. Shi, and T. S. Huang, “Self-similarity grouping: A simple unsupervised cross domain adaptation approach for person re-identification,” in ICCV, 2019.
[55] F. Yang, K. Li, Z. Zhong, Z. Luo, X. Sun, H. Cheng, X. Guo, F. Huang, R. Ji, and S. Li, “Asymmetric co-teaching for unsupervised cross-domain person re-identification,” in AAAI, 2020.
[56] Z. Zhong, L. Zheng, Z. Luo, S. Li, and Y. Yang, “Learning to adapt invariance in memory for person re-identification,” IEEE TPAMI, 2020.
[57] Y. Lin, X. Dong, L. Zheng, Y. Yan, and Y. Yang, “A bottom-up clustering approach to unsupervised person re-identification,” in AAAI, 2019.
[58] Y. Lin, L. Xie, Y. Wu, C. Yan, and Q. Tian, “Unsupervised person re-identification via softened similarity learning,” in CVPR, 2020.
[59] D. Wang and S. Zhang, “Unsupervised person re-identification via multi-label classification,” in CVPR, 2020.
[60] J. Li and S. Zhang, “Joint visual and temporal consistency for unsupervised domain adaptive person re-identification,” in ECCV, 2020.
[61] G. Wu, X. Zhu, and S. Gong, “Tracklet self-supervised learning for unsupervised person re-identification,” in AAAI, 2020.
[62] Z. Zhong, L. Zheng, D. Cao, and S. Li, “Re-ranking person re-identification with k-reciprocal encoding,” in CVPR, 2017.
[63] M. Ester, H.-P. Kriegel, J. Sander, and X. Xu, “A density-based algorithm for discovering clusters in large spatial databases with noise,” in KDD, 1996.
[64] L. Zheng, L. Shen, L. Tian, S. Wang, J. Wang, and Q. Tian, “Scalable person re-identification: A benchmark,” in ICCV, 2015.
[65] E. Ristani, F. Solera, R. Zou, R. Cucchiara, and C. Tomasi, “Performance measures and a data set for multi-target, multi-camera tracking,” in ECCVW, 2016.
[66] L. Zheng, Z. Bie, Y. Sun, J. Wang, C. Su, S. Wang, and Q.
Tian, “Mars: A video benchmark for large-scale person re-identification,” in ECCV, 2016.
[67] Y. Wu, Y. Lin, X. Dong, Y. Yan, W. Ouyang, and Y. Yang, “Exploit the unknown gradually: One-shot video-based person re-identification by stepwise learning,” in CVPR, 2018.
[68] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in CVPR, 2016.
[69] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Köpf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala, “Pytorch: An imperative style, high-performance deep learning library,” in NeurIPS, 2019.
[70] F. Yang, Z. Zhong, Z. Luo, Y. Cai, S. Li, and S. Nicu, “Joint noise-tolerant learning and meta camera shift adaptation for unsupervised person re-identification,” in CVPR, 2021.
[71] A. Strehl and J. Ghosh, “Cluster ensembles — a knowledge reuse framework for combining multiple partitions,” JMLR, 2002.
[72] M. Ye, X. Lan, and P. C. Yuen, “Robust anchor embedding for unsupervised video person re-identification in the wild,” in ECCV, 2018.
[73] Y. Chen, X. Zhu, and S. Gong, “Deep association learning for unsupervised video person re-identification,” in BMVC, 2018.
[74] M. Li, X. Zhu, and S. Gong, “Unsupervised person re-identification by deep learning tracklet association,” in ECCV, 2018.
[75] ——, “Unsupervised tracklet person re-identification,” IEEE TPAMI, 2019.
[76] T. Xiao, S. Li, B. Wang, L. Lin, and X. Wang, “Joint detection and identification feature learning for person search,” in CVPR, 2017.
[77] J. Y. Zhang, S. Pepose, H. Joo, D. Ramanan, J. Malik, and A. Kanazawa, “Perceiving 3d human-object spatial arrangements from a single image in the wild,” in ECCV, 2020.

Hao Chen received the B.S. degree from Wuhan University in 2014, and the M.S.
degree from CentraleSupélec and Université Paris Saclay in 2017. He is currently working towards his Ph.D. at Inria Sophia Antipolis and Université Côte d'Azur. His research interests include person re-identification and unsupervised learning. Homepage: https://chenhao2345.github.io/.

Yaohui Wang received the B.S. degree from Xidian University in 2015, and the M.S. degree from ENSIIE and Université Paris Saclay in 2017. He is currently working towards his Ph.D. at Inria Sophia Antipolis, STARS Team and Université Côte d'Azur. His current research focuses on image and video synthesis, activity recognition and representation learning.

Benoit Lagadec is a Research Engineer at European Systems Integration. He currently works on developing video analysis solutions based on abnormal human behavior. Previously, he worked in public research at Ifremer, where he developed image processing algorithms adapted to the difficulty of underwater imaging: denoising, segmentation.

Antitza Dantcheva is a Research Scientist (CRCN) with the STARS team of INRIA Sophia Antipolis, France. Previously, she was a Marie Curie fellow at Inria and a Postdoctoral Fellow at Michigan State University and West Virginia University, USA. She received her Ph.D. degree from Télécom ParisTech/Eurecom in image processing and biometrics in 2011. Her research is in computer vision and specifically in designing algorithms that seek to learn suitable representations of the human face in interpretation and generation.

Francois Bremond received the PhD degree from INRIA in video understanding in 1997, and he pursued his research work as a post-doctorate at the University of Southern California (USC) on the interpretation of videos taken from an Unmanned Airborne Vehicle (UAV). In 2007, he received the HDR degree (Habilitation à Diriger des Recherches) from Nice University on Scene Understanding.
He created the STARS team on the 1st of January 2012. He is the research director at INRIA Sophia Antipolis, France. He has conducted research work in video understanding since 1993 at Sophia-Antipolis. He is author or co-author of more than 140 scientific papers published in international journals or conferences in video understanding. He is a handling editor for MVA and a reviewer for several international journals (CVIU, IJPRAI, IJHCS, PAMI, AIJ, Eurasip, JASP) and conferences (CVPR, ICCV, AVSS, VS, ICVS). He has (co-)supervised 26 PhD theses. He is an EC INFSO and French ANR Expert for reviewing projects.

diff --git a/zNAyT4oBgHgl3EQf0vni/content/tmp_files/load_file.txt b/zNAyT4oBgHgl3EQf0vni/content/tmp_files/load_file.txt
new file mode 100644
index 0000000000000000000000000000000000000000..9067fd6e7b0917176a9a3d2fd3d032855c48ec1d
--- /dev/null
+++ b/zNAyT4oBgHgl3EQf0vni/content/tmp_files/load_file.txt
@@ -0,0 +1,1852 @@
filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf,len=1851

Learning Invariance from Generated Variance for Unsupervised Person Re-identification
Hao Chen, Yaohui Wang, Benoit Lagadec, Antitza Dantcheva, Francois Bremond

Abstract—This work focuses on unsupervised representation learning in person re-identification (ReID). Recent self-supervised contrastive learning methods learn invariance by maximizing the representation similarity between two augmented views of a same image. However, traditional data augmentation may bring to the fore undesirable distortions on identity features, which is not always favorable in id-sensitive ReID tasks.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' In this paper, we propose to replace traditional data augmentation with a generative adversarial network (GAN) that is targeted to generate augmented views for contrastive learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' A 3D mesh guided person image generator is proposed to disentangle a person image into id-related and id-unrelated features.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Deviating from previous GAN-based ReID methods that only work in id-unrelated space (pose and camera style), we conduct GAN-based augmentation on both id-unrelated and id-related features.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' We further propose specific contrastive losses to help our network learn invariance from id-unrelated and id-related augmentations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' By jointly training the generative and the contrastive modules, our method achieves new state-of-the-art unsupervised person ReID performance on mainstream large-scale benchmarks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Index Terms—Person re-identification, image synthesis, representation disentanglement, data augmentation, contrastive learning !' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' 1 INTRODUCTION G IVEN an image of a target person, a person re- identification (ReID) system [1], [2] aims at matching images of the same person across non-overlapping cameras.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' With the help of human-annotated labels, supervised per- son ReID methods [3], [4] have yielded impressive results.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' However, there usually exist strong domain gaps between different domains, such as illumination condition, camera property and scenario variation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' As shown in previous methods [5], [6], a ReID model trained on a specific domain is hard to generalize to other domains.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' One straightforward solution is to annotate and re-train the ReID model in a new domain, which is cumbersome and time-consuming for real- world deployments.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Towards an automatic adaptive system, unsupervised person ReID [7], [8], [9] has attracted increasing attention in the research community.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Compared with su- pervised counterparts, unsupervised methods directly learn from unlabeled images and therefore entail better scalability in real-world deployments.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Recent self-supervised contrastive learning studies [10], [11] have shown promising performance in unsupervised repre- sentation learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' By maximizing the representation sim- ilarity between two different views (augmented versions) of a same image, contrastive methods learn representations that are invariant to different conditions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' In this context, data augmentation plays a crucial role in mimicking real-world condition variance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Contrastive learning methods are able to build more robust representations, given they were pro- vided with better augmented views.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Previous methods gen- erally consider traditional data augmentation techniques, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Chen, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Wang, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Dantcheva and F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Bremond are with Inria and Universit´e Cˆote d’Azur, 2004 Route des Lucioles, 06902 Val- bonne, France.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' E-mail: {hao.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='chen, yaohui.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='wang, antitza.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='dantcheva, fran- cois.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='bremond}@inria.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='fr B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Lagadec is with European Systems Integration, 362 Avenue du Cam- pon, 06110 Le Cannet, France.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' E-mail: benoit.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='lagadec@esifrance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='net e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=', random flipping, cropping, color jittering, blurring and erasing [12].' 
However, these random augmentation techniques may cause undesirable distortion to crucial identity information. To overcome this issue, we propose to use a Generative Adversarial Network (GAN) [13] as an augmentation substitute, as it is able to disentangle a representation into id-related and id-unrelated features (see Table 1). More accurate augmented views can be obtained by modifying a certain factor while preserving other factors.
Previous GAN-based unsupervised ReID methods [14], [15], [16], [17] often treat unsupervised ReID as an unsupervised domain adaptation task, which attempts to adapt a model trained on a labeled source domain to an unlabeled target domain. Under this setting, it is intuitive to use GAN-based style transfer [18], [19] to generate source domain images in the style of a target domain. A model can be re-trained on the generated images in target domain style with source domain labels. However, unsupervised domain adaptation performance often strongly relies on the quality and scale of the source domain.
Differently, we treat unsupervised ReID as a contrastive representation learning task, where a source domain is not mandatory. To this end, we integrate a generative module and a contrastive module into a joint learning framework. For the generative module, we propose a 3D mesh based generator. Conventional pose transfer methods [20], [21] use 2D pose [22] to guide generation, which does not preserve body shape information. 3D mesh recovery [23] jointly estimates body shape as well as 3D pose, which conserves more identity information for unsupervised ReID. We use 3D meshes to guide the generation, where generated images in new poses are then used as augmented views in the contrastive module. For the contrastive module, we use a clustering algorithm to generate pseudo labels, aimed at maximizing representation similarity between different views of the same pseudo identity.

TABLE 1: Id-related and id-unrelated factors in a person image.
  Id-related:   cloth color, hair color, body shape
  Id-unrelated: pose, view-point, texture, illumination, camera style, background

Our model attracts a generated view to its original view, while repulsing the generated view from images of different identities. The contrastive module permits an identity encoder to extract view-invariant identity features, which, in turn, improves the generation quality.
In our previous work [9], GAN-based augmentation was only conducted on id-unrelated features, which has been common practice in previous GAN-based ReID methods [20], [24], [25]. Modifying id-unrelated features allows for learning identity features that are more invariant to id-unrelated variations. In this paper, we explore the possibility of conducting GAN-based augmentation on the id-related features to further improve the ReID performance.
Inspired by Mixup [26], which interpolates two images to learn a smoother decision boundary between two classes, we propose to interpolate disentangled id-related features inside the generative module, namely Disentangled Mixup (D-Mixup). As shown in Table 2, if two persons P1 and P2 respectively wear red and yellow clothes, an in-between identity in orange clothes should be marked as 0.5P1 + 0.5P2. However, in a dataset, such a person in orange clothes is normally labeled as a totally different identity P3, which hinders a network from learning the accurate relationship between different identities. Compared to traditional image-level Mixup [26] and feature-level Mixup [27], our proposed D-Mixup generates more accurate in-between identity images, which are more suitable for fine-grained person ReID. In our D-Mixup, we try to make our network understand that the mixed identity 0.5P1 + 0.5P2 is not related to id-unrelated features (pose and view-point), but only related to id-related features (cloth color).

To summarize, our contributions include the following:
- We propose a 3D mesh guided generator to disentangle representations into id-related and id-unrelated features. Two novel data augmentation techniques are proposed, respectively on id-unrelated and id-related features.
- We propose Rotation Contrast and Mixup Contrast modules to respectively learn invariance from id-unrelated and id-related augmented views.
- We propose an enhanced joint generative and contrastive learning framework. We comprehensively investigate how the generative and contrastive modules mutually promote each other and contribute to unsupervised ReID performance.
- Extensive experiments validate the superiority of the proposed GAN-based augmentation over traditional augmentation for unsupervised person ReID.
Our method achieves new state-of-the-art unsupervised person ReID performance on mainstream image-based datasets, including Market-1501, DukeMTMC-reID and MSMT17.

TABLE 2: Interpolation results between two random persons P1 and P2 with image-level Mixup [26], feature-level Mixup (F-Mixup) [27] and our proposed disentangled Mixup (D-Mixup). To visualize results from F-Mixup, we follow AMR [28] to train a VAE-GAN for mixed image reconstruction. Our D-Mixup only interpolates disentangled identity features in the generation, which alleviates noise from mixed structural features.
  Inputs: two images, labeled 1.0P1 + 0.0P2 and 0.0P1 + 1.0P2
  Mixup / F-Mixup / D-Mixup outputs: each labeled 0.5P1 + 0.5P2
  (images not recoverable from this text extraction)

Our method can also be applied to video-based person ReID, where it significantly outperforms previous unsupervised video person ReID methods on the MARS and DukeMTMC-VideoReID datasets.

2 RELATED WORK

2.1 Contrastive learning

Contrastive learning [29] has shown impressive performance for un-/self-supervised representation learning [10], [11], [30], [31], [32], [33].
Such contrastive methods aim to learn representations that are invariant to different distortions by attracting positive pairs while repulsing negative pairs. For each image, a positive pair can be constituted by two augmented views, whereas all other images in a dataset are regarded as negative samples. Contrastive learning methods benefit from a set of well-defined data augmentation techniques, which can mimic real-world image distortions. For example, MoCo [11] used random cropping, color jittering, horizontal flipping and grayscale conversion to obtain positive view pairs. As an extension, MoCo-v2 [34] included blurring and stronger color distortion, which enhanced the original method. However, most data augmentation settings in contrastive learning methods were designed for general image classification datasets, e.g., ImageNet [35].
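The attract-positives / repulse-negatives objective described above is commonly instantiated as an InfoNCE-style loss. The numpy sketch below is our own illustration (not this paper's code; batch size, feature dimension and temperature are hypothetical): row i of `z1` and row i of `z2` are treated as a positive pair, and all other rows in the batch act as negatives.

```python
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE loss over a batch: z1[i] and z2[i] are two augmented
    views of the same image; every other row acts as a negative."""
    # L2-normalize so dot products become cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature              # (N, N) similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    # cross-entropy with the diagonal (the positive pair) as the target
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 128))
noise = 0.01 * rng.normal(size=(8, 128))
# nearly identical views -> low loss; unrelated views -> high loss
loss_aligned = info_nce(z, z + noise)
loss_random = info_nce(z, rng.normal(size=(8, 128)))
assert loss_aligned < loss_random
```

Better augmented views make the positive pairs harder (more variance between views) without breaking the pairing, which is exactly where the GAN-based augmentation of this paper plugs in.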
These traditional augmentation techniques are not always suitable for color-sensitive person ReID, especially those that introduce strong color distortion.

2.2 Data augmentation

As a technique to constitute positive pairs, data augmentation plays an important role in contrastive learning. Recently, GAN and Mixup have provided new approaches for data augmentation in person ReID.

2.2.1 GAN-based augmentation

Zheng et al. [36] unconditionally generated a large number of unlabeled person images with DCGAN [37] to enlarge the data volume for supervised ReID. Subsequent GAN-based methods were usually conditioned on some of the factors from Table 1.
1) Pose: With the guidance of 2D poses, FD-GAN [20] and PN-GAN [38] generated a target person in new poses to learn pose-irrelevant representations for single-domain supervised ReID. A similar pose transfer [21] was then proposed to address unsupervised domain adaptive (UDA) ReID. 2) Dataset style (illumination): As a dataset is usually recorded in a uniform illumination condition, PTGAN [14] and SyRI [15] used CycleGAN [39] to minimize the domain gap between different datasets by generating person images in the style of a target domain. 3) Camera style: Instead of the general dataset style, CamStyle [24] transferred images captured from one camera into the style of another camera, in order to reduce inter-camera style gaps. A similar method [16] was then applied to UDA ReID. 4) Background: SBSGAN [40] and CR-GAN [41] were respectively targeted at removing and switching the background of a person image to mitigate background influence for UDA ReID.
5) General structure: By switching global and local level identity-unrelated features, IS-GAN [42] disentangled a representation into identity-related and identity-unrelated features without any concrete guidance. As a concrete guidance, a gray-scaled image contains multiple id-unrelated factors of a person image, including pose, background and carrying structures. By recoloring gray-scaled person images with the color distribution of other images, DG-Net [25] and DG-Net++ [17] learned disentangled identity representations invariant to structure factors.
Our proposed 3D mesh guided generator shares certain similarity with pose transfer methods and DG-Net++. However, both pose transfer methods and DG-Net++ lose body shape information, which can be conserved by 3D meshes. Moreover, as opposed to DG-Net++, we do not transfer style in a cross-domain manner, which allows our method to operate without a source domain.

2.2.2 Mixup

Mixup [26] is a simple yet effective data augmentation technique that interpolates two samples and their labels into one new in-between sample, which encourages a smoother decision boundary between two classes. The interpolation can be conducted between two images [26], [43], between two feature representations [27], or between two portions of different images [44]. Initially proposed for supervised image classification [26], [43], Mixup has been successfully extended to semi-supervised learning [45], [46], unsupervised domain adaptation [47], as well as novel class discovery [48]. AugMix [49] combines multiple augmented versions of an image into a mixed image and proves that such a technique can enhance robustness on corrupted data. CAIL [50] applies image-level Mixup between a source domain image and a target domain image to create a between-domain person image, which facilitates cross-domain knowledge transfer in unsupervised domain adaptive ReID. The above methods usually interpolate whole images or whole representations, resulting in noise from overlapping person structures.
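Image-level Mixup as described above reduces to a two-line interpolation. The following numpy sketch is our illustration (the Beta-distributed coefficient follows the usual Mixup recipe [26]; shapes and names are hypothetical):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=1.0, rng=None):
    """Image-level Mixup [26]: interpolate two samples and their
    one-hot labels with the same coefficient lam ~ Beta(alpha, alpha)."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    x_mix = lam * x1 + (1.0 - lam) * x2
    y_mix = lam * y1 + (1.0 - lam) * y2
    return x_mix, y_mix, lam

rng = np.random.default_rng(0)
x1, x2 = rng.random((64, 64, 3)), rng.random((64, 64, 3))
y1, y2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])   # P1, P2 one-hot
x_mix, y_mix, lam = mixup(x1, y1, x2, y2, rng=rng)
# with lam = 0.5 this yields exactly the 0.5P1 + 0.5P2 label of Table 2
assert np.isclose(y_mix.sum(), 1.0)
```

Because the whole images are blended, the two body structures overlap in `x_mix`; this is the noise that motivates interpolating only disentangled identity features instead.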
To reduce noise from mixed person structures, we propose to interpolate only disentangled identity features, which is compatible with our proposed 3D mesh guided GAN.

2.3 Unsupervised person ReID

Depending on the necessity of a large-scale labeled source dataset, unsupervised person ReID methods can be roughly categorized into unsupervised domain adaptive (UDA) and fully unsupervised ReID. We note that the above-mentioned GAN-based unsupervised ReID methods [14], [15], [16], [17], [21], [41] fall into the setting of UDA ReID. Several works [51], [52] leveraged semantic attributes to facilitate the domain adaptation. Another prominent approach assigns pseudo labels to unlabeled images and conducts pseudo label learning [7], [8], [50], [53], [54], [55], [56]. Pseudo labels can be obtained by existing clustering algorithms,
e.g., K-means [8] and DBSCAN [17], [55], or by newly designed pseudo labelling algorithms [53], [56]. Since the performance of UDA ReID is highly correlated to the scale and quality of a source domain, recent fully unsupervised ReID methods have attracted more attention. Most previous fully unsupervised methods [57], [58], [59], [60], [61] were based on pure pseudo label learning. Our previous method GCL [9] entailed a hybrid GAN and pseudo label learning method, which is compatible with both UDA and fully unsupervised settings. We here propose a new id-related augmentation, D-Mixup, which enhances our framework to achieve new state-of-the-art performance under both UDA and fully unsupervised settings.

3 METHOD

In this paper, we propose an enhanced joint Generative and Contrastive Learning framework (GCL+) for unsupervised person ReID. We define unsupervised ReID as a problem of learning invariance from self-augmented variance.
As illustrated in Fig. 1 (a), the proposed GCL+ consists of two modules: a generative module that provides GAN-based augmented views, and a contrastive module that learns invariance from the augmented views. These two modules are coupled by a shared identity encoder. After the joint training, only the shared identity encoder is kept for inference. In the following sections, we provide details on both modules. To facilitate the reading, we include a list of abbreviations in Supplementary Materials Section C.

3.1 Generative Module

Our generative module is composed of four networks: an identity encoder Eid, a structure encoder Estr, a decoder G and a discriminator D.
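The disentangle-and-recombine wiring of these networks can be sketched as follows. The linear maps below are stand-ins of our own invention (the paper's actual encoders and decoder are neural networks); they only illustrate how an identity code and a structure code are combined by the decoder G:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "networks": E_id and E_str map an image to a code, G decodes.
# 64*64*3 = 12288 flattened pixels; 64-dim codes are hypothetical.
W_id  = rng.normal(size=(12288, 64))   # identity encoder weights
W_str = rng.normal(size=(12288, 64))   # structure encoder weights
W_dec = rng.normal(size=(128, 12288))  # decoder weights

def E_id(x):  return x.reshape(-1) @ W_id      # id-related code
def E_str(s): return s.reshape(-1) @ W_str     # id-unrelated code
def G(f_id, f_str):                            # recombine into an image
    return (np.concatenate([f_id, f_str]) @ W_dec).reshape(64, 64, 3)

x_i = rng.random((64, 64, 3))    # image of person i
s_new = rng.random((64, 64, 3))  # 2D projection of a new 3D mesh
# generated view: identity of x_i rendered with the new structure
x_new = G(E_id(x_i), E_str(s_new))
assert x_new.shape == x_i.shape
```

The discriminator D (omitted above) scores generated images against real ones during the adversarial training.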
Given an unlabeled person ReID dataset X = {x1, x2, ..., xN}, we use the prominent algorithm HMR [23] to generate corresponding 3D meshes, which are then used as structure guidance in the generative module. By recoloring a specific 3D mesh to reconstruct a real image, a person representation can be disentangled into identity and structure features. We conduct data augmentation in two pathways: one on id-unrelated structure features with rotated meshes, the other on identity features with D-Mixup.

3.1.1 Mesh-guided Rotation (id-unrelated augmentation)

As shown in Fig. 1 (b), given a person image and an estimated 3D mesh, we denote the 2D projection of the mesh as the original structure sori. To mimic real-world camera

Fig. 1: (a) General architecture of GCL; (b) generative module, id-unrelated augmentation; (c) generative module, id-related augmentation. Eid: shared identity encoder; Estr: structure encoder; G: decoder; D: discriminator; Ladv: adversarial loss.
' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='𝐸𝑠𝑡𝑟 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='𝐺 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='𝑠𝑜𝑟𝑖 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='𝑥 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='𝑥𝑜𝑟𝑖 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='𝐷 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='𝐸𝑠𝑡𝑟 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='𝐺 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='𝑠𝑛𝑒𝑤 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='𝑥𝑛𝑒𝑤 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='𝐸𝑖𝑑 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='𝐷 ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='𝐿𝑓𝑒𝑎𝑡 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='𝐿𝑖𝑚𝑔 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='𝐿𝑎𝑑𝑣 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='𝐿𝑎𝑑𝑣 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='𝐺 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='𝑥𝑜𝑟𝑖 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='′′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='𝐷 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='𝐿𝑖𝑚𝑔 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='𝐿𝑎𝑑𝑣 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='𝐿𝑓𝑒𝑎𝑡 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='𝐸𝑖𝑑 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='(d) Contrastive Module: Rotation Contrast ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='𝑥𝑛𝑒𝑤 ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='memory ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='𝑓𝑝𝑜𝑠 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='𝐿𝑣𝑖 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='𝑥 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='𝐿𝑣𝑖 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='𝐸𝑖𝑑 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='𝐿𝑣𝑖 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='′′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='𝐸𝑖𝑑 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='𝑥𝑖 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='𝑥𝑗 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='(e) Contrastive Module: Mixup Contrast ' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='1 2 3 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='mix ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='Pseudo label ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='𝐿𝑚𝑖𝑥 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='1 2 3 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='1 2 3 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='𝑥𝑚𝑖𝑥 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='′ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' (a) General architecture of GCL+: The framework is composed of a generative module (b, c) and a contrastive module (d, e), which are coupled by the shared identity encoder Eid.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' (b) Mesh rotation (id-unrelated augmentation) : The decoder G combines the identity features encoded by Eid and structure features Estr to generate an augmented view x′ new with a cycle consistency.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' (c) D-mixup (id-related augmentation): The decoder G generates a identity-mixed augmented view x′ mix with the mixed identity features.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' (d) Rotation Contrast: Viewpoint-invariance is enhanced by maximizing the agreement between original Eid(x), synthesized Eid(x′ new) and memory fpos representations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' (e) Mixup Contrast: A smoother decision boundary can be learnt with x′ mix and the interpolated pseudo label.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' view-point, as shown in Table 3, we rotate the 3D mesh by 45°, 90°, 135°, 180°, 225°, 270° and 315° and randomly take one 2D projection from these rotated meshes as a new structure snew.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' The unlabeled image is encoded to identity features by the identity encoder Eid : x → fid, while both original and new structures are encoded to structure features by the structure encoder Estr : sori → fstr(ori), snew → fstr(new).' 
Combining both identity and structure features, the decoder generates synthesized images G : (f_id, f_str(ori)) → x'_ori, (f_id, f_str(new)) → x'_new, where a prime is used to denote generated images. As we do not have real images in the new structures (paired data), a cycle-consistency reconstruction [39] becomes indispensable for the generative module. We encode the generated image in the new structure, x'_new, and decode once again to obtain a synthesized image in the original structure, G(E_id(x'_new), s_ori) → x''_ori, where double primes denote cycle-generated images. We calculate an l1 image reconstruction loss between the original image x, the generated image x'_ori and the cycle-generated image x''_ori:

L_img = E[||x − x'_ori||_1] + E[||x − x''_ori||_1].   (1)

To enhance the disentanglement in the cycle-consistency reconstruction, we also calculate an l1 feature reconstruction loss:

L_feat = E[||f_id − E_id(x'_new)||_1] + E[||f_id − E_id(x''_ori)||_1].   (2)

The discriminator D attempts to distinguish between real and generated images with the adversarial losses:

L_adv = E[log D(x) + log(1 − D(x'_ori))] + E[log D(x) + log(1 − D(x'_new))] + E[log D(x) + log(1 − D(x''_ori))].   (3)
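As a minimal sketch of how the reconstruction terms in Eqs. (1) and (2) combine for a single sample, the following plain-Python snippet uses flat lists to stand in for image and feature tensors; the function names and the list representation are ours, not the paper's:

```python
def l1(a, b):
    """Mean absolute difference between two flattened tensors (Python lists)."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def generative_recon_losses(x, x_ori, x_cyc, f_id, f_id_new, f_id_cyc):
    """Per-sample cycle-consistency losses of the generative module:
    L_img (Eq. 1) compares the original image with the generated and
    cycle-generated images; L_feat (Eq. 2) compares the identity features
    with those re-encoded from the generated and cycle-generated images."""
    L_img = l1(x, x_ori) + l1(x, x_cyc)
    L_feat = l1(f_id, f_id_new) + l1(f_id, f_id_cyc)
    return L_img, L_feat
```

In training the expectations E[·] would be batch averages of these per-sample values.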
Remark. As shown in Fig. 2, we can switch 2D gray images [17], [25], switch meshes between random persons, or rotate one's own mesh to introduce new structures as generation guidance. Although stronger pose and viewpoint variances can be introduced into the generation, random switching hinders the conservation of body shape information. After testing, we find that the most appropriate way to preserve body shape and generate accurate images is mesh rotation, which yields higher performance in Table 4.

TABLE 3. Examples of 3D mesh guided generation on the Market-1501 dataset. Each mesh is rotated by 45°, 90°, 135°, 180°, 225°, 270° and 315°.
3.1.2 D-mixup (id-related augmentation)

As shown in Fig. 1 (c), given two random person images x_i and x_j in a mini-batch, we encode the images into identity features E_id(x_i) → f_id(i) and E_id(x_j) → f_id(j). We follow the original Mixup [26] in using a Beta distribution with a hyper-parameter α to randomly sample a mixing coefficient λ:

λ = Beta(α, α),  λ* = max(λ, 1 − λ),
f_id(mix) = λ*·f_id(i) + (1 − λ*)·f_id(j),   (4)

where λ* renders the mixed identity more similar to x_i. To conserve the corresponding body shape information, we use the original structure of x_i, rather than that of x_j, as the generation guidance. A mixed person image (see more interpolated examples in Fig. 3) can be generated by combining the mixed identity features and the original structure features: G(f_id(mix), s_ori(i)) → x'_mix.
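The sampling-and-blending step of Eq. (4) can be sketched in a few lines of plain Python; the default alpha below is an assumed placeholder, since the paper only names α as a hyper-parameter:

```python
import random

def mix_identity_features(f_i, f_j, alpha=0.8, rng=random):
    """Eq. (4) sketch: sample lambda ~ Beta(alpha, alpha), take
    lambda* = max(lambda, 1 - lambda) so the mix stays closer to f_i,
    then blend the two identity feature vectors element-wise."""
    lam = rng.betavariate(alpha, alpha)
    lam_star = max(lam, 1.0 - lam)
    f_mix = [lam_star * a + (1.0 - lam_star) * b for a, b in zip(f_i, f_j)]
    return f_mix, lam_star
```

Because λ* ≥ 0.5 by construction, the mixed feature is always dominated by the first identity, which is why the structure of x_i (not x_j) is used as generation guidance.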
The discriminator D attempts to distinguish between real and mixed images with the adversarial loss:

L_adv,mix = E[log D(x) + log(1 − D(x'_mix))].   (5)

More discussion about the feature regularization losses is provided in Supplementary Materials, Section A.

3.1.3 Overall generative loss

The overall GAN loss combines the above losses (1), (2), (3) and (5) with a weighting coefficient λ_recon:

L_gan = λ_recon(L_img + L_feat) + L_adv + L_adv,mix.   (6)

Fig. 2. Different ways of introducing structural variance (2D gray image switch [25], Mesh switch and Mesh rotation) into generation.

TABLE 4. Performance comparison of rotating one mesh and switching two random meshes in the generation.
Method                     | Duke→Market (mAP / Rank1) | Market→Duke (mAP / Rank1)
2D gray image switch [25]  | 60.1 / 78.8               | 59.5 / 76.2
Mesh switch                | 74.2 / 88.5               | 60.6 / 76.9
Mesh rotation              | 74.4 / 89.7               | 61.3 / 78.0
3.2 Contrastive Module

The generative module described above generates augmented views of a person image, which form positive view pairs for the contrastive module. By maximizing the similarity between positive pairs, the shared identity encoder aims to build robust representations that are invariant to distortions. For one identity, there are commonly several positive images in the dataset, recorded in different poses, camera styles and backgrounds. Only maximizing the similarity between an image and its self-augmented views leads to sub-optimal performance. Moreover, previous methods [10], [11] have demonstrated the effectiveness of mining a large number of negative samples in contrastive learning. In order to mine more positives and a large number of negatives, we generate pseudo labels on a memory bank [30] that stores all representations M corresponding to the dataset images X.
Given a representation f_t in the current epoch, the corresponding memory bank representation M[i] is updated with a momentum hyper-parameter β:

M[i]_t = β·M[i]_{t−1} + (1 − β)·f_t,   (7)

where M[i]_t and M[i]_{t−1} refer to the memory bank representations in the t-th and (t−1)-th epochs, respectively. The memory bank stores moving-averaged representations, which stabilize the pseudo label generation. To further enhance the pseudo label quality, we compute the k-reciprocal re-ranked Jaccard distance [62] between memory bank representations, which is then fed into the clustering algorithm DBSCAN [63] to generate pseudo labels Y = {y_1, y_2, ..., y_N}. During training, the pseudo labels are renewed at the beginning of each epoch. We design a Rotation Contrast and a Mixup Contrast for the two types of generated views, respectively.
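The momentum update of Eq. (7) is a one-liner per entry; below is a plain-Python sketch where the memory bank is a list of feature vectors indexed by image id, and the default beta is an assumed placeholder (the paper only names β as a hyper-parameter):

```python
def momentum_update(memory, i, f_t, beta=0.2):
    """Eq. (7): M[i]_t = beta * M[i]_{t-1} + (1 - beta) * f_t.
    `memory` is a list of feature vectors (lists), one per dataset image;
    the entry for image i is replaced by its exponential moving average."""
    memory[i] = [beta * m + (1.0 - beta) * f for m, f in zip(memory[i], f_t)]
    return memory[i]
```

A larger β makes the bank change more slowly, which is what stabilizes the pseudo labels produced by clustering on top of it.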
3.2.1 Rotation Contrast (for id-unrelated augmentation)

As shown in Fig. 1 (d), the original image x and the generated image x'_new are encoded by the shared identity encoder into two identity feature vectors E_id(x) → f and E_id(x'_new) → f'_new. For a representation f with a pseudo label y_i, we randomly sample from the memory bank a positive representation f_pos with the same pseudo label y_i and K negative representations with pseudo labels different from y_i. Three positive pairs can be formed, i.e., (f, f_pos), (f, f'_new) and (f_pos, f'_new). The representation f'_new and the K negative representations sampled from the memory bank form K negative pairs. We define three view-invariant losses to attract the three positive pairs while repulsing the K negative pairs:

L_vi  = E[log(1 + (Σ_{i=1}^{K} exp(<f'_new · k_i>/τ)) / exp(<f · f_pos>/τ))],   (8)
L'_vi = E[log(1 + (Σ_{i=1}^{K} exp(<f'_new · k_i>/τ)) / exp(<f'_new · f>/τ))],   (9)
L''_vi = E[log(1 + (Σ_{i=1}^{K} exp(<f'_new · k_i>/τ)) / exp(<f'_new · f_pos>/τ))],   (10)

where <·> denotes the cosine similarity between two feature vectors, τ is a temperature hyper-parameter that sharpens the cosine similarity, and k_i denotes the negative representations sampled from the memory bank. These three loss functions enable the contrastive module to maximize the similarity between the original view f, the generated view f'_new and the positive memory view f_pos.

Fig. 3. Linear interpolation of disentangled identity features between two persons respectively from Market-1501 and DukeMTMC-reID.
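Eqs. (8)–(10) share one template: a positive pair in the denominator against the K repulsed pairs in the numerator. A plain-Python sketch of that template follows; the function names and the default temperature are ours (the paper only names τ as a hyper-parameter):

```python
import math

def cos_sim(u, v):
    """Cosine similarity <u . v> between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def view_invariant_loss(pos_a, pos_b, f_new, negatives, tau=0.05):
    """Template for Eqs. (8)-(10): attract the positive pair
    (pos_a, pos_b) while repulsing f_new from the K negative memory
    representations, with temperature tau. Eq. (8) uses (f, f_pos) as
    the positive pair, Eq. (9) uses (f'_new, f), Eq. (10) (f'_new, f_pos)."""
    repulse = sum(math.exp(cos_sim(f_new, k) / tau) for k in negatives)
    attract = math.exp(cos_sim(pos_a, pos_b) / tau)
    return math.log(1.0 + repulse / attract)
```

The loss shrinks towards 0 as the positive pair aligns and the negatives become dissimilar to f'_new, matching the log(1 + ratio) form of the equations.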
At the same time, the similarity between the generated view f'_new and the K negative memory views is minimized, which encourages the generative module to refine the generated view f'_new so that it differs from a large number of negative samples.

3.2.2 Mixup Contrast (for id-related augmentation)

The mixed image x'_mix is encoded by the shared identity encoder into a mixed identity feature vector E_id(x'_mix) → f'_mix; see Fig. 1 (e). Towards learning a smoother decision boundary between two clusters, as illustrated in Fig. 4, we design a Mixup Contrast for f'_mix. As certain instances in a cluster are close to the decision boundary between two clusters, whereas the others are far away, we define an averaged prototype for a cluster:

p_a = (1/N_a) Σ_{M[i]∈y_a} M[i],   (11)

where N_a is the number of instances belonging to cluster a.

Fig. 4. Mixup Contrast targets learning a smoother decision boundary between two persons P1 and P2 by contrasting in-between samples (e.g., 0.6·P1 + 0.4·P2, 0.4·P1 + 0.6·P2) with in-between prototypes.

Given a random image representation f, we use a softmax cross-entropy loss L_proto to make f converge to its cluster prototype, which encourages the compactness of a cluster:

L_proto = E[log(1 + (Σ_{i=1}^{|Y|−1} exp(f·p_i)) / exp(f·p_+))],   (12)

where p_+ is the corresponding prototype of f and p_i denotes the other cluster prototypes. |Y| is the number of clusters.
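The averaged prototype of Eq. (11) is simply the per-dimension mean of a cluster's memory-bank entries; a plain-Python sketch, with lists standing in for the bank:

```python
def cluster_prototype(memory, pseudo_labels, a):
    """Eq. (11): averaged prototype p_a of cluster a, i.e. the mean of
    all memory-bank representations whose pseudo label equals a."""
    members = [m for m, y in zip(memory, pseudo_labels) if y == a]
    n = len(members)
    dim = len(members[0])
    return [sum(m[d] for m in members) / n for d in range(dim)]
```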
Given that certain clusters may contain more instances that are close to decision boundaries with other clusters, compact clusters provide stable mixed prototypes. Based on the pseudo labels, we define a mixed prototype vector between two clusters i and j:

p_mix = λ* · p_i + (1 − λ*) · p_j,    (13)

where λ* is the same mixing coefficient as in Eq. (4). For the mixed representation f'_mix, we use another softmax cross-entropy loss to maximize its similarity with the mixed prototype p_mix and minimize its similarity with the |Y| − 2 negative prototypes that do not belong to the two clusters i and j:

L_mix = E[ log(1 + (\sum_{i=1}^{|Y|-2} exp(f'_mix · p_i)) / exp(f'_mix · p_mix)) ].    (14)

As opposed to the cosine similarity in Eq. (8), (9) and (10), we do not compute normalized similarity, as the average operation for computing prototype vectors acts as a normalization.
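Eq. (13) and Eq. (14) can be sketched in NumPy as below, with `protos` a |Y|×d array of cluster prototypes; the names are illustrative assumptions, not the paper's code:

```python
import numpy as np

def mixup_contrast_loss(f_mix, protos, i, j, lam):
    """Eq. (13): mix the prototypes of clusters i and j with coefficient
    lam; Eq. (14): pull f_mix toward the mixed prototype and away from
    the |Y| - 2 prototypes outside clusters i and j."""
    p_mix = lam * protos[i] + (1.0 - lam) * protos[j]        # Eq. (13)
    pos = float(f_mix @ p_mix)
    negatives = [k for k in range(len(protos)) if k not in (i, j)]
    neg = protos[negatives] @ f_mix
    return float(np.log1p(np.exp(neg - pos).sum()))          # Eq. (14)
```

A mixed feature that lies between the two prototypes incurs a lower loss than one aligned with an unrelated cluster, which is the smoothing effect the Mixup Contrast targets.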
3.2.3 Overall contrastive loss

The overall contrastive loss combines the above losses (8), (9), (10), (12) and (14):

L_contrast = λ_vi (L_vi + L'_vi + L''_vi) + λ_mix (L_proto + L_mix).    (15)

3.3 Joint Training

Our proposed framework incorporates a generative module and a contrastive module. The generative module disentangles a person image representation into identity and structure features, which allows for learning purified identity features for person ReID. The contrastive module learns invariance via contrasting augmented images. If we replace the GAN-based augmentation with traditional data augmentation techniques, both modules can be trained separately. However, a separate training leads to sub-optimal performance for both of them. To address this issue, we couple the two modules with a shared identity encoder in a joint training framework.
In the setting of joint training, both modules work collaboratively to achieve one objective: enhancing the discriminability of identity representations. Inside GCL+, the generative module provides both id-unrelated and id-related augmentations for the contrastive module. On the other hand, the contrastive module maximizes the similarity between positive views while repulsing negative views, which, in turn, refines the identity representations for a better generation quality. Both modules mutually promote each other's performance in the joint training, leading to an optimal ReID performance. In our proposed framework, a forward propagation is first conducted on the generative module and subsequently on the contrastive module. A backward propagation is then conducted with an overall loss that combines Eq. (6) and Eq. (15):

L_overall = L_gan + L_contrast.    (16)
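The loss combination in Eq. (15) and Eq. (16) amounts to a weighted sum that is backpropagated in a single pass; a minimal sketch (the default weight values here are placeholders, not the paper's settings):

```python
def contrastive_loss(l_vi, l_vi_p, l_vi_pp, l_proto, l_mix,
                     lam_vi=1.0, lam_mix=1.0):
    """Eq. (15): weight the three view-invariance terms with lam_vi and
    the two mixup/prototype terms with lam_mix."""
    return lam_vi * (l_vi + l_vi_p + l_vi_pp) + lam_mix * (l_proto + l_mix)

def overall_loss(l_gan, l_contrast):
    """Eq. (16): the GAN loss and the contrastive loss are summed, so one
    backward pass updates both jointly trained modules."""
    return l_gan + l_contrast
```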
4 EXPERIMENT

4.1 Datasets and Evaluation Protocols

We evaluate our proposed method GCL+ on five mainstream person ReID benchmarks, including three image-based datasets, Market-1501 [64], DukeMTMC-reID [65] and MSMT17 [14], and two video-based datasets, MARS [66] and DukeMTMC-VideoReID [67]. Market-1501 is collected in front of a supermarket at Tsinghua University from 6 cameras. It is composed of 12,936 images of 751 identities for training and 19,732 images of 750 identities for testing. DukeMTMC-reID is collected from 8 cameras installed on the campus of Duke University. It contains 16,522 images of 702 persons for training, and 2,228 query images and 17,661 gallery images of 702 persons for testing. MSMT17 is a large-scale ReID dataset, which includes 32,621 training images of 1,041 identities and 93,820 testing images of 3,060 identities, collected from 15 cameras deployed in both indoor and outdoor scenes.
MARS is a large-scale video-based person ReID dataset. It contains 17,503 tracklets of 1,261 identities collected from 6 cameras, where 625 identities are used for training and the other 636 identities are used for testing. DukeMTMC-VideoReID is a video-based person ReID dataset derived from the DukeMTMC [65] dataset. It contains 2,196 training tracklets of 702 identities and 2,636 testing tracklets of another 702 identities. As our method includes a GAN and a contrastive module, we report results for both unsupervised person ReID and generation quality evaluations. For the unsupervised person ReID evaluation, we provide results under both the unsupervised domain adaptation and the fully unsupervised settings. We report Cumulative Matching Characteristics (CMC) at Rank1, Rank5 and Rank10 accuracies, as well as mean Average Precision (mAP), on the testing set.
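As an illustration of the evaluation protocol, CMC Rank-k and mAP can be computed from a query-gallery distance matrix roughly as below; this sketch omits the camera-based filtering of the standard ReID protocol, and all names are ours, not the paper's:

```python
import numpy as np

def cmc_map(dist, q_ids, g_ids, topk=(1, 5, 10)):
    """dist: (num_query, num_gallery) distance matrix.
    Returns CMC accuracies at the given ranks and mAP."""
    cmc = np.zeros(len(topk))
    aps = []
    for q in range(dist.shape[0]):
        order = np.argsort(dist[q])                     # rank gallery by distance
        matches = (g_ids[order] == q_ids[q]).astype(float)
        for t, k in enumerate(topk):
            cmc[t] += float(matches[:k].any())          # hit within top-k
        hits = np.cumsum(matches)
        precision = hits / (np.arange(matches.size) + 1)
        aps.append((precision * matches).sum() / max(matches.sum(), 1.0))
    return cmc / dist.shape[0], float(np.mean(aps))
```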
For the generation quality evaluation, we conduct a qualitative comparison between our method and state-of-the-art methods on generated images.

4.2 Implementation details

We introduce implementation details pertaining to the network design and general training configurations, as well as the three-stage optimization.

Network design. Our network design related to the identity encoder Eid, the structure encoder Estr, the decoder G and the discriminator D has been mainly inspired by [17], [25]. In the following descriptions, we denote the size of feature maps in channel×height×width. 1) Eid is an ImageNet [35] pre-trained ResNet50 [68] with slight modifications. The original fully connected layer is replaced by a batch normalization layer and a fully connected embedding layer, which outputs identity representations f in 512×1×1 for the contrastive module. In parallel, we add a part average pooling that outputs identity features fid in 2048×4×1 for the generative module. 2) Estr is composed of four convolutional and four residual layers, which output structure features fstr in 128×64×32. 3) G contains four residual and four convolutional layers. Every residual layer contains two adaptive instance normalization layers [18] that transform fid into scale and bias parameters. 4) D is a multi-scale PatchGAN [19] discriminator at 64×32, 128×64 and 256×128.

General training configurations. Our framework is implemented under PyTorch [69] and trained with one Nvidia V100 GPU. The inputs are resized to 256×128. We empirically set a large weight λrecon = 5 for the reconstruction in Eq. (6).
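The adaptive instance normalization used in the decoder G (item 3 of the network design above) normalizes each channel over its spatial positions and then applies the scale and bias derived from fid. A minimal sketch, in which the scale and bias are passed in directly and the layers that predict them from fid are omitted:

```python
import numpy as np

def adain(x, scale, bias, eps=1e-5):
    """x: (C, H, W) feature map; scale, bias: (C,) parameters derived
    from the identity feature. Each channel is normalized over H x W,
    then re-scaled and shifted."""
    mu = x.mean(axis=(1, 2), keepdims=True)
    sigma = x.std(axis=(1, 2), keepdims=True)
    return scale[:, None, None] * (x - mu) / (sigma + eps) + bias[:, None, None]
```

Because the structure feature supplies the spatial content and fid supplies the per-channel statistics, this is how identity information is injected into the generated image.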
With a batch size of 16, we use an SGD optimizer to train Eid and an Adam optimizer to train Estr, G and D. The learning rate is set to 1 × 10−4 for Adam and 3.5 × 10−4 for SGD, and both are multiplied by 0.1 after 10 epochs. The DBSCAN maximal neighborhood distance is set to 0.5 and the minimal sample number is set to 4. The number of negatives K is 8192. For testing, Eid outputs representations f of dimension 512. For video-based person ReID, due to our GPU memory constraint, we randomly sample 2 frames per tracklet on MARS and 8 frames per tracklet on DukeMTMC-VideoReID for training.
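The step decay described above (both learning rates multiplied by 0.1 after 10 epochs) can be written as a small helper; a sketch under the stated settings, with names of our own choosing:

```python
def step_lr(base_lr, epoch, decay_epoch=10, gamma=0.1):
    """Return the learning rate for a given epoch: base_lr before
    decay_epoch, base_lr * gamma from decay_epoch onwards."""
    return base_lr * gamma if epoch >= decay_epoch else base_lr

# e.g. step_lr(3.5e-4, epoch) for the SGD optimizer,
#      step_lr(1e-4, epoch) for the Adam optimizer
```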
For testing, all the frames from each tracklet are used to calculate a unified tracklet representation for similarity ranking. Other settings are kept the same as the image-based person ReID settings.

Three-stage optimization. To reduce the noise from imperfect generated images at early epochs, we train the four modules Eid, Estr, G and D in a three-stage optimization. Stage 1, Eid warm-up: we use a state-of-the-art unsupervised ReID method to warm up Eid, e.g., ACT [55], MMCL [59] or JVTC [60]. Stage 2, Estr, G and D warm-up: we freeze Eid and warm up Estr, G and D only with the overall GAN loss in Eq. (6) for 40 epochs.
Stage 3, joint training: we bring in the memory bank and the pseudo labels to jointly train the whole framework with the overall loss in Eq. (16) for another 20 epochs.
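The three-stage schedule can be summarized as a selector over training epochs; a sketch under the epoch counts given above (stage 1, the Eid warm-up with an off-the-shelf method, happens before this counter starts):

```python
def active_losses(epoch, gan_warmup=40, joint=20):
    """Stages 2-3 of the schedule: epochs [0, 40) warm up Estr, G and D
    with the GAN loss only (Eid frozen); epochs [40, 60) train the whole
    framework jointly with the overall loss of Eq. (16)."""
    if epoch < gan_warmup:
        return ["L_gan"]                         # Stage 2
    if epoch < gan_warmup + joint:
        return ["L_gan", "L_contrast"]           # Stage 3
    raise ValueError("the schedule covers only 60 epochs")
```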
[Fig. 5. Hyper-parameter analysis on α for the mixup coefficient on the Duke→Market and Market→Duke tasks.]

[Fig. 6. Hyper-parameter analysis on β for the memory momentum and τ for the contrastive temperature on the Duke→Market task.]

4.3 Unsupervised ReID Evaluation

To validate the effectiveness of each component, we conduct parameter analysis and ablation experiments with a JVTC [60] baseline. As JVTC+ is the enhanced version of JVTC with a camera temporal distribution post-processing, the performance boost from the post-processing is almost fixed. Thus, the ablation experiments show similar variance with the JVTC and JVTC+ baselines.
We further compare our method with state-of-the-art unsupervised person ReID methods with three different baselines to show the generalizability of our method.

[Fig. 7. Hyper-parameter analysis on the balancing coefficients λrecon for the reconstruction weight, λvi for the rotation contrast weight and λmix for the mixup contrast weight on the Duke→Market task.]

TABLE 5: Performance under different clustering neighborhood distance thresholds.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' ‘N’ is the approximate number of pseudo-identities.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Threshold Duke→Market Market→Duke N mAP Rank1 N mAP Rank1 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='4 ∼642 74.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='5 89.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='4 ∼840 60.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='9 77.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='1 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='45 ∼605 74.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='4 89.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='4 ∼810 61.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='2 77.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='4 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='5 ∼584 74.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='4 89.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='7 ∼786 61.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='3 78.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='55 ∼540 73.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='6 88.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='4 ∼744 61.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='1 76.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='8 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='6 ∼500 72.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='4 87.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='6 ∼697 60.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='7 77.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='7 4.' 
4.3.1 Parameter analysis
Hyper-parameters, such as the mixing coefficient α, the memory momentum β, and the view-invariant contrastive loss temperature τ, play important roles in our proposed GCL+ framework for better unsupervised person ReID performance. We vary their values to analyze the sensitivity of each hyper-parameter inside GCL+. For the Beta distribution, a larger α results in a higher probability that λ gets close to 0.5. ReID performance on both the Duke→Market and Market→Duke tasks with respect to α is reported in Fig. 5. On both tasks, the optimal performance is achieved when α is around 0.6. As a consequence, α is set to 0.6 in our framework.

The value of β controls the memory updating speed. The value of τ amplifies the cosine similarity between contrastive views; an overlarge or undersized value generally introduces more noise for contrastive learning. We report the performance variation with respect to β and τ on the Duke→Market task in Fig. 6. We find that the performance is more sensitive to the similarity temperature τ. Based on the results, we set β to 0.2 and τ to 0.04.
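For context, mixup draws its mixing coefficient λ from a symmetric Beta(α, α) distribution. The following numpy sketch (an illustration under that standard assumption, not the paper's code) shows why a larger α pulls λ toward 0.5:

```python
import numpy as np

def sample_mixup_lambda(alpha, size, rng):
    # lam ~ Beta(alpha, alpha); larger alpha concentrates lam around 0.5
    return rng.beta(alpha, alpha, size)

def mixup(x1, x2, lam):
    # convex combination of two inputs (image-level or feature-level)
    return lam * x1 + (1 - lam) * x2

rng = np.random.default_rng(0)
lam_a = sample_mixup_lambda(0.6, 10000, rng)  # alpha as set in the framework
lam_b = sample_mixup_lambda(5.0, 10000, rng)  # much larger alpha
print(lam_a.std() > lam_b.std())  # True: small alpha spreads lam toward 0 and 1
```

With α = 0.6 (α < 1, so the Beta density is U-shaped), λ still frequently lands near 0 or 1, keeping many mixed samples close to one of the two sources.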
The number of possible pseudo-identities N is related to the clustering hyper-parameters, such as the maximal neighborhood distance threshold and the minimal cluster sample number. The distance threshold of DBSCAN is the maximal distance between two samples for one to be considered in the neighborhood of the other. A larger distance threshold enlarges the radius of a cluster, so that more samples are assigned to the same cluster (N becomes smaller). As shown in Table 5, the threshold value only slightly affects ReID performance.

As our framework jointly optimizes the generative and contrastive modules, we set weight coefficients to balance the different loss functions in the two modules. We vary the balancing coefficients λrecon, λvi and λmix in Equations (6) and (15). The corresponding results are reported in Fig. 7.
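The threshold's effect on N can be illustrated with a toy re-implementation of DBSCAN (a minimal sketch on synthetic 1-D features, not the clustering code used in the framework):

```python
import numpy as np

def dbscan(X, eps, min_samples):
    """Minimal DBSCAN: returns a cluster id per point, -1 for noise."""
    n = len(X)
    # pairwise Euclidean distances
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    neighbors = [np.flatnonzero(d[i] <= eps) for i in range(n)]
    core = np.array([len(nb) >= min_samples for nb in neighbors])
    labels = np.full(n, -1)
    cid = 0
    for i in range(n):
        if labels[i] != -1 or not core[i]:
            continue
        # grow a new cluster from this unvisited core point
        stack = [i]
        labels[i] = cid
        while stack:
            j = stack.pop()
            if not core[j]:
                continue  # border points do not expand the cluster
            for k in neighbors[j]:
                if labels[k] == -1:
                    labels[k] = cid
                    stack.append(k)
        cid += 1
    return labels

# two tight groups of "features"; a larger eps merges them into one cluster
X = np.array([[0.0], [0.1], [0.2], [1.0], [1.1], [1.2]])
print(len(set(dbscan(X, eps=0.3, min_samples=2))))  # 2
print(len(set(dbscan(X, eps=1.0, min_samples=2))))  # 1
```

Raising eps connects the two groups into one neighborhood graph, so the number of clusters (the analogue of N above) drops, just as Table 5 shows larger thresholds producing smaller N.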
Overall, the different values in the tested range only slightly influence the final results. Based on the results, we set λrecon = 5, λvi = 1 and λmix = 1.

4.3.2 Ablation study
Contrastive learning methods rely strongly on data augmentation to create different augmented views for contrasting. Our proposed GCL+ outperforms traditional contrastive learning methods by replacing traditional data augmentation techniques with GAN-based augmentation techniques. To validate the effectiveness of our proposed GAN-based augmentation techniques and contrastive losses, we conduct ablation experiments on both the Market-1501 and DukeMTMC-reID datasets.

Data augmentation. Data augmentation techniques can be categorized into id-unrelated and id-related augmentation.
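As a concrete illustration, typical id-unrelated transforms such as random flipping and random erasing perturb a single image without changing its identity. A hedged numpy sketch (not the actual augmentation pipeline, which would normally use standard library transforms):

```python
import numpy as np

def random_flip(img, rng, p=0.5):
    # horizontal flip with probability p (id-unrelated: identity is preserved)
    return img[:, ::-1].copy() if rng.random() < p else img

def random_erasing(img, rng, scale=(0.02, 0.2)):
    # blank out a random rectangle, in the spirit of Random Erasing
    h, w = img.shape[:2]
    area = rng.uniform(*scale) * h * w
    eh = int(min(h, max(1, np.sqrt(area))))
    ew = int(min(w, max(1, area / eh)))
    top = rng.integers(0, h - eh + 1)
    left = rng.integers(0, w - ew + 1)
    out = img.copy()
    out[top:top + eh, left:left + ew] = 0
    return out

rng = np.random.default_rng(0)
img = np.ones((64, 32, 3), dtype=np.float32)  # dummy person crop
aug = random_erasing(random_flip(img, rng), rng)
```

Id-related augmentation, by contrast, mixes content across images (e.g., Mixup), so the result no longer belongs cleanly to a single identity.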
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Id-unrelated augmentation creates intra-image vi- sual distortions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' In contrast, id-related augmentation cre- ates inter-image visual distortions, which affects image identities.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' We compare results of traditional and genera- tive data augmentation under fully unsupervised setting and domain adaptation setting in Table 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' For traditional data augmentation, we use multiple popular person ReID 9 TABLE 6 Ablation study under fully unsupervised and UDA settings on traditional (w/o GAN) and generative (w/ GAN) data augmentation for the contrastive module.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' ‘Multi’ refers to multiple commonly used data augmentation techniques for person ReID, including random flipping, padding, cropping and erasing.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' ‘Rotation’ refers to our proposed mesh-guided rotation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' ‘Mixup’ is conducted on image level, while ‘F-Mixup’ is conducted on feature level.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Fully unsupervised ID-unrelated ID-related Market Duke Multi Rotation Mixup F-Mixup D-Mixup mAP R1 R5 R10 mAP R1 R5 R10 w/o GAN Baseline 47.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='2 75.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='4 86.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='7 90.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='5 43.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='9 66.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='8 77.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='6 81.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='0 ✓ 58.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='2 81.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='1 91.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='0 93.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='5 50.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='8 70.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='8 80.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='9 83.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='8 ✓ ✓ 60.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='0 82.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='5 91.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='6 94.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='0 51.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='0 71.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='1 80.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='8 84.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='1 w/ GAN ✓ 63.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='8 83.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='4 91.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='8 94.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='3 53.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='1 72.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='8 81.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='2 83.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='7 ✓ ✓ 65.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='9 84.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='8 92.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='5 94.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='3 54.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='3 73.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='6 82.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='5 84.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='9 ✓ ✓ 66.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='1 84.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='3 92.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='4 94.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='6 54.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='2 73.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='7 82.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='4 85.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='5 ✓ ✓ 66.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='3 85.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='3 92.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='9 94.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='6 54.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='6 74.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='2 82.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='8 85.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='6 UDA ID-unrelated ID-related Duke→Market Market→Duke Multi Rotation Mixup F-Mixup D-Mixup mAP R1 R5 R10 mAP R1 R5 R10 w/o GAN Baseline 65.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='0 85.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='7 93.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='4 95.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='9 56.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='5 73.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='9 84.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='4 87.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='8 ✓ 70.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='4 86.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='9 94.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='3 95.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='8 57.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='0 74.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='2 84.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='2 87.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='2 ✓ ✓ 70.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='7 87.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='8 94.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='1 96.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='3 57.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='7 74.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='5 85.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='0 88.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='0 w/ GAN ✓ 72.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='5 88.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='7 94.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='8 96.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='3 59.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='9 75.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='9 86.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='2 88.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='5 ✓ ✓ 73.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='0 88.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='9 94.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='8 96.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='4 60.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='4 76.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='5 85.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='9 88.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='3 ✓ ✓ 72.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='7 88.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='8 95.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='1 96.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='3 60.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='2 76.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='7 86.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='1 88.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='1 ✓ ✓ 74.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='4 89.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='7 95.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='5 96.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='7 61.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='3 78.' 
TABLE 7
Ablation study on three view-invariant losses in Rotation Contrast and two prototype losses in Mixup Contrast.

Lvi  L'vi  L''vi  Lproto  Lmix    Duke→Market (mAP / R1)    Market→Duke (mAP / R1)
 ✓                                61.6 / 82.4               51.7 / 70.6
 ✓    ✓                           69.1 / 85.6               58.3 / 74.8
 ✓    ✓    ✓                      72.5 / 88.7               59.9 / 75.9
 ✓    ✓    ✓      ✓               72.8 / 88.8               60.6 / 76.9
 ✓    ✓    ✓      ✓       ✓       74.4 / 89.7               61.3 / 78.0

Fig. 8. Normalized Mutual Information (NMI) during 20 joint training epochs on Market-1501. 'Trad' refers to traditional data augmentation techniques. 'Rot' refers to id-unrelated mesh-guided rotation. 'Full' refers to combining id-unrelated mesh-guided rotation and id-related D-Mixup.

data augmentation techniques, including random flipping, padding, cropping and erasing [12], as id-unrelated augmentation, and Mixup [26] as id-related augmentation. Even with traditional data augmentation, our contrastive module significantly outperforms the baseline. When we replace traditional data augmentation with generative data augmentation, the unsupervised person ReID performance is further improved. Our proposed mesh-guided rotation (Rotation) works better than the multiple commonly used data augmentation techniques (Multi) for id-unrelated augmentation.
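Image-level Mixup, the id-related baseline referred to above, forms a convex combination of two samples; a minimal sketch, assuming plain list-valued inputs and the usual Beta(alpha, alpha) mixing coefficient (the `alpha` default here is illustrative, not the paper's setting):

```python
import random

def mixup(x_i, x_j, alpha=1.0):
    # Draw the mixing coefficient lam from Beta(alpha, alpha), as in image-level Mixup.
    lam = random.betavariate(alpha, alpha)
    # Convex combination of the two inputs; labels would be mixed with the same lam.
    mixed = [lam * a + (1 - lam) * b for a, b in zip(x_i, x_j)]
    return mixed, lam

sample_a = [0.0, 1.0, 2.0]
sample_b = [2.0, 1.0, 0.0]
mixed, lam = mixup(sample_a, sample_b)
```

Because 0 <= lam <= 1, every mixed element lies between the two originals, so the interpolation never leaves the data range.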
Meanwhile, our proposed D-Mixup achieves better performance than image-level Mixup and feature-level Mixup (F-Mixup) for id-related augmentation.

Effects on pseudo labels. Robust identity representations should have better intra-class compactness and inter-class separability, which leads to better pseudo-label quality. We evaluate pseudo-label quality by measuring the Normalized Mutual Information (NMI) [71] between our pseudo labels and the ground-truth labels. As illustrated in Fig. 8, traditional data augmentation (Trad) works well at the beginning but ends up with worse quality. We argue that traditional data augmentation introduces undesirable distortions of identity features, which easily leads to over-fitting on id-sensitive tasks. In contrast, GAN-based augmentation introduces more noise at the beginning but avoids over-fitting in the final training epochs.
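NMI between a pseudo labeling and the ground truth can be computed directly from the label contingency counts; a self-contained sketch, assuming arithmetic-mean normalization (one common convention — scikit-learn's `normalized_mutual_info_score` exposes the same quantity with a choice of normalizations):

```python
import math
from collections import Counter

def nmi(labels_true, labels_pred):
    """Normalized mutual information between two labelings (arithmetic-mean normalization)."""
    n = len(labels_true)
    joint = Counter(zip(labels_true, labels_pred))   # contingency counts
    pu = Counter(labels_true)                        # marginal counts of true labels
    pv = Counter(labels_pred)                        # marginal counts of pseudo labels
    # Mutual information from the empirical joint and marginal distributions.
    mi = sum((c / n) * math.log((c / n) / ((pu[u] / n) * (pv[v] / n)))
             for (u, v), c in joint.items())
    hu = -sum((c / n) * math.log(c / n) for c in pu.values())
    hv = -sum((c / n) * math.log(c / n) for c in pv.values())
    if hu == 0.0 and hv == 0.0:
        return 1.0  # both labelings are constant and therefore identical partitions
    return mi / ((hu + hv) / 2)

ground_truth = [0, 0, 1, 1, 2, 2]
permuted = [1, 1, 0, 0, 2, 2]   # same partition, labels renamed
```

NMI is invariant to label permutation, so `nmi(ground_truth, permuted)` is 1.0, while any labeling that splits or merges the true clusters scores strictly below 1.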
In addition, our full GCL+ (Full) conducts both GAN-based id-unrelated and id-related augmentation, which achieves better pseudo-label quality than id-unrelated mesh-guided rotation alone (Rot).

Contrastive loss. To learn maximal invariance from the generated image and the memory-stored image, we form three positive pairs for Rotation Contrast, namely (f, fpos), (f, f'new) and (fpos, f'new). By maximizing the similarity between these three positive pairs in Equations (8), (9) and (10), our objective is to build identity representations that are invariant to instance-level pose, view-point and background variance. Meanwhile, we use identity prototypes and mixed prototypes in Mixup Contrast to learn a smoother class-level decision boundary with Equations (12) and (14). To confirm the contribution of these contrastive losses, we gradually add each into our framework and report the corresponding results in Table 7.
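The three pairwise terms can be sketched as InfoNCE-style objectives over cosine similarities — one softmax term per positive pair, sharing a negative set. This is an illustrative sketch only: the temperature, the random negatives, and the plain-Python features are assumptions, and Equations (8)-(10) in the paper define the actual losses.

```python
import math
import random

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def info_nce(anchor, positive, negatives, temperature=0.1):
    # Softmax over {positive} plus negatives; the loss is -log p(positive | anchor).
    logits = [cosine(anchor, positive) / temperature]
    logits += [cosine(anchor, n) / temperature for n in negatives]
    m = max(logits)  # subtract the max for numerical stability
    denom = sum(math.exp(l - m) for l in logits)
    return -math.log(math.exp(logits[0] - m) / denom)

random.seed(0)
f = [random.gauss(0, 1) for _ in range(64)]          # feature of an original image
f_pos = [a + 0.05 * random.gauss(0, 1) for a in f]   # generated view, same identity
f_new = [a + 0.05 * random.gauss(0, 1) for a in f]   # memory-stored feature, same identity
negatives = [[random.gauss(0, 1) for _ in range(64)] for _ in range(32)]

# One term per positive pair, mirroring the (f, fpos), (f, f'new), (fpos, f'new) structure.
loss = (info_nce(f, f_pos, negatives)
        + info_nce(f, f_new, negatives)
        + info_nce(f_pos, f_new, negatives))
```

Minimizing each term pulls the anchor toward its positive and away from the negatives, which is the invariance the three pairs are meant to enforce.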
The results indicate that our proposed contrastive losses effectively contribute to learning robust representations for unsupervised person ReID.

4.3.3 Comparison with state-of-the-art methods
Image-based person ReID. We compare our proposed GCL+ with state-of-the-art unsupervised ReID methods under three purely unsupervised and four unsupervised domain adaptation evaluation protocols. We evaluate the performance of GCL+ with different baselines, including MMCL [59], JVTC [60] and ACT [55], to demonstrate the generalizability of our proposed method. Under the fully unsupervised setting, we report the associated results on the Market-1501, DukeMTMC-reID and MSMT17 datasets in Table 8. We first provide the results of state-of-the-art methods, including BUC [57], SoftSim [58], TSSL [61], MMCL [59], JVTC [60], JVTC+ [60], MetaCam [70], as well as our previous work GCL [9], on the three datasets.
Our proposed method GCL+ significantly improves the unsupervised person ReID performance.

TABLE 8
Comparison of fully unsupervised ReID methods (%) on the Market1501, DukeMTMC-reID and MSMT17 datasets. We test our proposed method on several baselines, see names in parentheses.

Method           Reference    Market1501 (mAP/R1/R5/R10)   DukeMTMC-reID (mAP/R1/R5/R10)   MSMT17 (mAP/R1/R5/R10)
BUC [57]         AAAI'19      29.6/61.9/73.5/78.2          22.1/40.4/52.5/58.2             -
SoftSim [58]     CVPR'20      37.8/71.7/83.8/87.4          28.6/52.5/63.5/68.9             -
TSSL [61]        AAAI'20      43.3/71.2/-/-                38.5/62.2/-/-                   -
MMCL [59]        CVPR'20      45.5/80.3/89.4/92.3          40.2/65.2/75.9/80.0             11.2/35.4/44.8/49.8
JVTC [60]        ECCV'20      41.8/72.9/84.2/88.7          42.2/67.6/78.0/81.6             15.1/39.0/50.9/56.8
JVTC+ [60]       ECCV'20      47.5/79.5/89.2/91.9          50.7/74.6/82.9/85.3             17.3/43.1/53.8/59.4
MetaCam [70]     CVPR'21      61.7/83.9/92.3/-             53.8/73.8/84.2/-                15.5/35.2/48.3/-
GCL(MMCL) [9]    CVPR'21      54.9/83.7/91.6/94.0          49.3/69.7/79.7/82.8             -
GCL(JVTC) [9]    CVPR'21      63.4/83.7/91.6/94.3          53.3/72.4/82.0/84.9             18.0/41.6/53.2/58.4
GCL(JVTC+) [9]   CVPR'21      66.8/87.3/93.5/95.5          62.8/82.9/87.1/88.5             21.3/45.7/58.6/64.5
GCL+(MMCL)       This paper   56.0/84.0/91.4/93.7          49.5/70.2/80.2/83.3             -
GCL+(JVTC)       This paper   66.3/85.3/92.9/94.6          54.6/74.2/82.8/85.6             19.2/44.7/56.4/61.4
GCL+(JVTC+)      This paper   69.3/89.0/94.6/96.0          63.5/83.1/87.4/88.8             22.0/47.9/61.3/67.1

TABLE 9
Comparison of unsupervised domain adaptive ReID methods (%) between the Market1501, DukeMTMC-reID and MSMT17 datasets. We test our proposed method on several baselines, see names in parentheses.

Method           Reference    Duke→Market (mAP/R1/R5/R10)   Market→Duke (mAP/R1/R5/R10)   Market→MSMT17 (mAP/R1/R5/R10)   Duke→MSMT17 (mAP/R1/R5/R10)
ECN [7]          CVPR'19      43.0/75.1/87.6/91.6           40.4/63.3/75.8/80.4           8.5/25.3/36.3/42.1              10.2/30.2/41.5/46.8
PDA [21]         ICCV'19      47.6/75.2/86.3/90.2           45.1/63.2/77.0/82.5           -                               -
CR-GAN [41]      ICCV'19      54.0/77.7/89.7/92.7           48.6/68.9/80.2/84.7           -                               -
SSG [54]         ICCV'19      58.3/80.0/90.0/92.4           53.4/73.0/80.6/83.2           13.2/31.6/-/49.6                13.3/32.2/-/51.2
MMCL [59]        CVPR'20      60.4/84.4/92.8/95.0           51.4/72.4/82.9/85.0           15.1/40.8/51.8/56.7             16.2/43.6/54.3/58.9
ACT [55]         AAAI'20      60.6/80.5/-/-                 54.5/72.4/-/-                 -                               -
DG-Net++ [17]    ECCV'20      61.7/82.1/90.2/92.7           63.8/78.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='9 87.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='8 90.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='4 22.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='1 48.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='4 60.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='9 66.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='1 22.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='1 48.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='8 60.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='9 65.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='9 JVTC [60] ECCV’20 61.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='1 83.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='8 93.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='0 95.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='2 56.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='2 75.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='0 85.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='1 88.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='2 19.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='0 42.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='1 53.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='4 58.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='9 20.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='3 45.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='4 58.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='4 64.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='3 ECN+ [56] TPAMI’20 63.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='8 84.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='1 92.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='8 95.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='4 54.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='4 74.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='0 83.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='7 87.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='4 15.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='2 40.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='4 53.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='1 58.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='7 16.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='0 42.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='5 55.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='9 61.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='5 JVTC+ [60] ECCV’20 67.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='2 86.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='8 95.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='2 97.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='1 66.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='5 80.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='4 89.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='9 92.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='2 25.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='1 48.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='6 65.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='3 68.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='2 27.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='5 52.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='9 70.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='5 75.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='9 MMT [8] ICLR’20 71.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='2 87.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='7 94.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='9 96.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='9 65.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='1 78.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='0 88.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='8 92.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='5 22.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='9 49.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='2 63.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='1 68.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='8 23.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='3 50.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='1 63.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='9 69.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='8 CAIL [50] ECCV’20 71.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='5 88.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='1 94.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='4 96.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='2 65.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='2 79.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='5 88.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='3 91.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='4 20.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='4 43.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='7 56.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='1 61.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='9 24.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='3 51.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='7 64.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='0 68.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='9 MetaCam [70] CVPR’21 76.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='5 90.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='1 65.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='0 79.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='5 GCL(ACT) [9] CVPR’21 66.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='7 83.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='9 91.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='4 93.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='4 55.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='4 71.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='9 81.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='6 84.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='6 GCL(JVTC) [9] CVPR’21 73.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='4 89.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='1 95.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='0 96.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='6 60.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='4 77.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='2 86.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='2 88.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='4 21.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='5 45.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='0 57.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='1 66.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='5 24.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='9 50.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='8 63.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='4 68.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='9 GCL(JVTC+) [9] CVPR’21 75.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='4 90.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='5 96.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='2 97.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='1 67.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='6 81.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='9 88.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='9 90.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='6 27.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='0 51.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='1 63.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='9 69.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='9 29.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='7 54.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='4 68.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='2 74.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='2 GCL+(ACT) This paper 67.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='5 84.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='3 92.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='6 94.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='2 56.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='8 73.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='5 82.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='8 85.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='1 GCL+(JVTC) This paper 74.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='4 89.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='7 95.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='5 96.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='7 61.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='3 78.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='0 86.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='8 89.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='1 23.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='0 48.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='3 60.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='6 65.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='8 25.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='5 52.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='7 65.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='2 70.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='2 GCL+(JVTC+) This paper 76.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='5 91.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='6 96.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='3 97.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='6 68.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='3 82.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='6 89.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='4 91.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='2 27.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='8 53.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='8 66.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='9 72.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='5 31.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='5 57.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='9 70.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='3 76.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='1 the three baselines MMCL, JVTC and JVTC+.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' The proposed new D-Mixup and Mixup Contrast in our framework GCL+ consistently surpasses the performance of our previous work GCL with the three different baselines.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' With the strong baseline JVTC+, our method achieves state-of-the-art perfor- mance on the three datasets.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Under the unsupervised domain adaptation setting, we report related results on four mainstream benchmarks, in- cluding Duke→Market, Market→Duke, Market→MSMT17 and Duke→MSMT17 in Table 9.' 
Our proposed method GCL+ again achieves better performance than state-of-the-art methods, including ECN [7], PDA [21], CR-GAN [41], SSG [54], MMCL [59], ACT [55], DG-Net++ [17], JVTC [60], ECN+ [56], JVTC+ [60], MMT [8], CAIL [50], MetaCam [70], as well as our previous work GCL [9]. Among these methods, PDA, CR-GAN and DG-Net++ share certain similarities with our proposed GCL+, in that they are also based on GANs. However, PDA and DG-Net++ used either 2D skeletons or random gray-scaled images as guidance, which could not preserve body shape information. Further, PDA, CR-GAN and DG-Net++ did not manipulate identity features to generate in-between identity images. CAIL [50] has considered cross-domain Mixup, where interpolated structures may introduce more noise into identity features. Our proposed D-Mixup does not suffer from such interpolated structures. In addition, cross-domain Mixup interpolates images from two domains, while our proposed D-Mixup interpolates intra-domain images, which is more flexible for fully unsupervised ReID.
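Both cross-domain Mixup and D-Mixup build on the same underlying operation: a convex combination of two samples. As a minimal sketch (the function name and toy vectors are illustrative, not from the paper; D-Mixup additionally constrains the pair to come from the same domain and mixes identity features rather than arbitrary samples):

```python
import numpy as np

def mixup(x_i, x_j, lam):
    """Generic mixup: convex combination of two samples.

    lam in [0, 1] is typically drawn from a Beta(alpha, alpha)
    distribution; lam=1 recovers x_i, lam=0 recovers x_j.
    """
    return lam * x_i + (1.0 - lam) * x_j

# Two toy vectors standing in for identity features.
a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])

# An "in-between identity" halfway between the two.
mixed = mixup(a, b, lam=0.5)  # -> [0.5, 0.5, 0.0]
```

In the cross-domain variant, `x_i` and `x_j` would come from the source and target domains respectively; in the intra-domain variant discussed above, both are drawn from the same domain, which avoids mixing domain-specific structure into the interpolated identity.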
Video-based person ReID. We compare our proposed GCL+ with state-of-the-art unsupervised video person ReID methods on the MARS and DukeMTMC-VideoReID datasets. RACE [72] and EUG [67] leverage one labeled video tracklet per identity to initialize their models; these one-example video-based ReID methods cannot truly be considered unsupervised. DAL [73], TAUDL [74] and UTAL [75] utilize the camera label of each tracklet and try to associate tracklets of the same person across different cameras. OIM [76], BUC [57] and TSSL [61] are fully unsupervised video person ReID methods. We use the fully unsupervised method BUC as our baseline. As shown in Table 10, our proposed methods GCL (view-point augmentation) and GCL+ (view-point and in-between identity augmentation) significantly outperform previous unsupervised video-based person ReID methods.
TABLE 10
Comparison with the state-of-the-art methods on two video-based re-ID datasets, MARS and DukeMTMC-VideoReID. The "Labels" column indicates the labels used in each method. "OneEx" denotes the one-example annotation per identity. "Camera" refers to camera annotation. "Baseline (BUC)" refers to our reproduced results.

                                 MARS                    DukeMTMC-VideoReID
Method               Labels   mAP   R1    R5    R10    mAP   R1    R5    R10
RACE [72]            OneEx    24.5  43.2  57.1  62.1   -     -     -     -
EUG [67]             OneEx    42.4  62.6  74.9  -      63.2  72.7  84.1  -
DAL [73]             Camera   23.0  49.3  65.9  72.2   -     -     -     -
TAUDL [74]           Camera   29.1  43.8  59.9  72.8   -     -     -     -
UTAL [75]            Camera   35.2  49.9  66.4  77.8   -     -     -     -
OIM [76]             None     13.5  33.7  48.1  54.8   43.8  51.1  70.5  76.2
BUC [57]             None     29.4  55.1  68.3  72.8   66.7  74.8  86.8  89.7
TSSL [61]            None     30.5  56.3  64.6  73.9   -     -     -     -
Baseline (BUC [57])  None     32.0  51.1  66.5  71.6   67.1  72.9  86.2  90.0
GCL                  None     48.6  64.8  77.5  82.0   75.9  80.1  90.5  93.7
GCL+                 None     50.1  66.5  78.7  82.2   76.3  80.9  91.5  94.2

4.4 Generation Quality Evaluation

4.4.1 Ablation study
We conduct a qualitative ablation study, presented in Fig. 9, to demonstrate that our proposed contrastive module can improve generative quality for person image generation. Unconditional GANs learn a data distribution via reconstruction and adversarial training of each image, and then generate new images that fit the learned distribution.
However, unconditional GANs generate from the features of a single image and neglect the shared features of different images of one person (or class). Conditional GANs generally use human-annotated identity labels to learn shared class-level features, which are more view-invariant. Our proposed GCL+ introduces an unsupervised way to learn view-invariant class-level features for person image generation by contrasting pseudo positive views. We illustrate two examples, respectively from the Market-1501 and DukeMTMC-reID datasets, in Fig. 9 to validate the effectiveness of our proposed contrastive module for person image generation. Given a target person, a robust identity representation should contain the salient features shared by the majority of observations across different view-points and poses.
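The idea of contrasting pseudo positive views can be sketched with a generic InfoNCE-style objective (a hedged illustration under our own assumptions; this is not the paper's exact contrastive loss, and all feature vectors are assumed L2-normalized):

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.07):
    # Pull the anchor towards its pseudo-positive view, push it away
    # from the negatives; standard temperature-scaled cross-entropy
    # with the positive placed at index 0.
    logits = np.array([anchor @ positive] +
                      [anchor @ n for n in negatives]) / tau
    return float(-logits[0] + np.log(np.exp(logits).sum()))

a = np.array([1.0, 0.0])        # anchor feature
pos = np.array([1.0, 0.0])      # aligned pseudo-positive view -> low loss
neg = np.array([0.0, 1.0])      # orthogonal negative
loss_good = info_nce(a, pos, [neg])
loss_bad = info_nce(a, neg, [pos])  # roles swapped -> high loss
```

Minimizing such a loss makes features of different views of the same (pseudo) identity agree, which is the view-invariance the contrastive module targets.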
In the case that GCL+ is trained without Lcontrast, our generative module tends to focus only on salient features of the original image (the black backpack for the first example and the blue jacket for the second example), while neglecting salient features of other images of the same person (the yellow t-shirt for the first example and the red backpack for the second example). The contrastive module ensures the consistency of identity features for generation in different poses and view-points.

4.4.2 Comparison with state-of-the-art methods
We conduct a qualitative comparison between our proposed method GCL+ and state-of-the-art GAN-based person ReID methods, including FD-GAN [20], IS-GAN [42], DG-Net [25] and DG-Net++ [17]. We re-implement these GAN-based person ReID methods based on their published source code and generate six images per real image of the Market-1501 dataset, as shown in Fig. 10.
FD-GAN, IS-GAN and DG-Net are supervised methods, which rely on human-annotated labels to learn robust identity-level features. We observe that images generated by FD-GAN and IS-GAN suffer from evident visual blur, which may lose detailed identity information after generation. Compared to FD-GAN and IS-GAN, DG-Net can generate sharper images. However, using randomly switched gray-scaled images as guidance is prone to result in incoherent body shape and carrying. More comparison of the generative quality between FD-GAN, IS-GAN, DG-Net and our method is provided in Supplementary Materials Section B.

Fig. 9. Qualitative ablation study on the effectiveness of the contrastive loss in Eq. (15) for generation quality. Lcontrast allows for preserving salient features from other views (the yellow t-shirt for the first example and the red backpack for the second example) in identity representations for generation in different poses and view-points.

TABLE 11
Examples of 3D mesh guided generation on the DukeMTMC-reID dataset (columns correspond to rotation angles 0°, 45°, 90°, 135°, 180°, 225°, 270°, 315°).

Fig. 10. Comparison of generated images on the Market-1501 dataset. Examples of FD-GAN, IS-GAN, DG-Net, DG-Net++ and GCL+ are generated from the same real images shown in the figure. We note that DG-Net++ and GCL+ are unsupervised methods.

TABLE 12
Examples of 3D mesh guided generation on the MSMT17 dataset (columns correspond to rotation angles 0°, 45°, 90°, 135°, 180°, 225°, 270°, 315°).
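The 0°–315° columns in Tables 11 and 12 correspond to rotating the guiding 3D mesh about its vertical axis; the geometric core can be sketched as below (a toy NumPy example with a single vertex — real pipelines rotate an estimated full-body mesh, so `rotate_yaw` is illustrative only):

```python
import numpy as np

def rotate_yaw(vertices, angle_deg):
    # Rotate (N, 3) mesh vertices about the vertical (y) axis.
    t = np.deg2rad(angle_deg)
    rot = np.array([[ np.cos(t), 0.0, np.sin(t)],
                    [ 0.0,       1.0, 0.0      ],
                    [-np.sin(t), 0.0, np.cos(t)]])
    return vertices @ rot.T

verts = np.array([[1.0, 0.0, 0.0]])                        # one toy vertex
views = [rotate_yaw(verts, a) for a in range(0, 360, 45)]  # 0°, 45°, ..., 315°
```

Each rotated mesh then conditions the generator, so every synthetic view of a person is derived from that person's own geometry.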
As a UDA method, DG-Net++ uses cross-domain gray-scaled images as guidance, which, however, shares the same problems in generation as DG-Net. Different from DG-Net++, our proposed GCL+ is a fully unsupervised ReID method, which directly augments data diversity in the target domain without the need for a labeled source domain. Moreover, an image in GCL+ is generated from its own rotated mesh, which helps to conserve body shape information and does not add extra carrying structures. The generated images from GCL+ have higher quality and similarity to real images than those of other methods. To validate the generative quality on the DukeMTMC-reID and MSMT17 datasets, we provide more examples in Table 11 and Table 12. Consistency in the id-related space and variance in the id-unrelated space validate the purity (disentanglement quality) of identity representations in our framework GCL+. We further provide tracklet examples before and after our view-point rotation for video-based person ReID in Fig. 11. The results show that our method also works well for video-based person ReID.

Fig. 11. Examples of tracklet frames before and after our view-point rotation. Tracklets are respectively sampled from the MARS and DukeMTMC-VideoReID datasets.

4.4.3 Failure case analysis
We show some failure cases of the rotation generative model in Fig. 12. When there exist inconsistent front-side and back-side patterns, rotation-based generation can hardly produce accurate images after a large rotation. For example, the model may consider visual patterns present only on the back side (the backpack in the first row) or only on the front side (the carrying objects in the second row) as whole-body appearance features for generation. One possible solution is to use a 3D human-object arrangement mesh generator [77] to help the generative model distinguish humans and objects.

Fig. 12. Failure cases of rotation-based generation. First row: the backpack can be generated onto the front side. Second row: the carrying object can be generated onto the back side.

5 CONCLUSION
In this paper, we propose an enhanced joint generative and contrastive learning (GCL+) framework for unsupervised person ReID. The framework is composed of a generative module for data augmentation, as well as a contrastive module aimed at learning invariance from generated variance. For the generative module, we propose a 3D mesh guided GAN to realize id-unrelated and id-related augmentation by respectively rotating 3D meshes as generation guidance and interpolating two identity representations. For the contrastive module, we design Rotation Contrast and Mixup Contrast, respectively for the two data augmentation techniques, to learn robust identity representations. Extensive experiments are conducted to validate the superiority of the proposed GAN-based augmentation over traditional augmentation techniques for contrastive representation learning. The generative module benefits from the learned robust identity representations, which preserve fine-grained identity information for better generation quality. GCL+ outperforms state-of-the-art methods under both fully unsupervised and unsupervised domain adaptation settings.
Moreover, our contrastive module can be regarded as a contrastive discriminator in a GAN, which provides a new unsupervised approach for identity-preserving person image generation.

ACKNOWLEDGMENTS
This work has been supported by the French government, through the 3IA Côte d'Azur Investments in the Future project managed by the National Research Agency (ANR) with the reference number ANR-19-P3IA-0002. The authors are grateful to the OPAL infrastructure from Université Côte d'Azur for providing resources and support.

REFERENCES
[1] M. Ye, J. Shen, G. Lin, T. Xiang, L. Shao, and S. C. H. Hoi, "Deep learning for person re-identification: A survey and outlook," IEEE TPAMI, 2021.
[2] S. Karanam, M. Gou, Z. Wu, A. Rates-Borras, O. Camps, and R. Radke, "A systematic evaluation and benchmark for person re-identification: Features, metrics, and datasets," IEEE TPAMI, 2019.
[3] Y. Sun, L. Zheng, Y. Yang, Q. Tian, and S. Wang, "Beyond part models: Person retrieval with refined part pooling (and a strong convolutional baseline)," in ECCV, 2018.
[4] H. Chen, B. Lagadec, and F. Bremond, "Learning discriminative and generalizable representations by spatial-channel partition for person re-identification," in WACV, 2020.
[5] J. Song, Y. Yang, Y.-Z. Song, T. Xiang, and T. M. Hospedales, "Generalizable person re-identification by domain-invariant mapping network," in CVPR, June 2019.
[6] X. Jin, C. Lan, W. Zeng, Z. Chen, and L. Zhang, "Style normalization and restitution for generalizable person re-identification," in CVPR, June 2020.
[7] Z. Zhong, L. Zheng, Z. Luo, S. Li, and Y. Yang, "Invariance matters: Exemplar memory for domain adaptive person re-identification," in CVPR, 2019.
[8] Y. Ge, D. Chen, and H. Li, "Mutual mean-teaching: Pseudo label refinery for unsupervised domain adaptation on person re-identification," in ICLR, 2020.
[9] H. Chen, Y. Wang, B. Lagadec, A. Dantcheva, and F.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Bremond, “Joint generative and contrastive learning for unsupervised per- son re-identification,” in CVPR, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' [10] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Chen, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Kornblith, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Norouzi, and G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Hinton, “A simple framework for contrastive learning of visual representations,” in ICML, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' [11] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' He, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Fan, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Wu, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Xie, and R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Girshick, “Momentum contrast for unsupervised visual representation learning,” in CVPR, 2020.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' [12] Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Zhong, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Zheng, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Kang, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Li, and Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Yang, “Random erasing data augmentation,” in AAAI, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' [13] I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Goodfellow, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Pouget-Abadie, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Mirza, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Xu, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Warde-Farley, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Ozair, A.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Courville, and Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Bengio, “Generative adversarial nets,” in NeurIPS, 2014.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' [14] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Wei, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Zhang, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Gao, and Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Tian, “Person transfer gan to bridge domain gap for person re-identification,” in CVPR, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' [15] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Bak, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Carr, and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='-F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Lalonde, “Domain adaptation through synthesis for unsupervised person re-identification,” in ECCV, 2018.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' [16] Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Zhong, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Zheng, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Li, and Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Yang, “Generalizing a person retrieval model hetero- and homogeneously,” in ECCV, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' [17] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Zou, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Yang, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Yu, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Kumar, and J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Kautz, “Joint disentangling and adaptation for cross-domain person re- identification,” in ECCV, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' [18] X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Huang and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Belongie, “Arbitrary style transfer in real-time with adaptive instance normalization,” in ICCV, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' [19] P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Isola, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='-Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Zhu, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Zhou, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Efros, “Image-to-image translation with conditional adversarial networks,” in CVPR, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' [20] Y.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Ge, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Li, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Zhao, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Yin, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Yi, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Wang, and H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Li, “Fd- gan: Pose-guided feature distilling gan for robust person re- identification,” in NeurIPS, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' [21] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='-J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Li, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='-S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Lin, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='-B.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Lin, and Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='-C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Wang, “Cross-dataset person re-identification via unsupervised pose disentanglement and adaptation,” in ICCV, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' [22] Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Cao, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Simon, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='-E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Wei, and Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Sheikh, “Realtime multi-person 2d pose estimation using part affinity fields,” in CVPR, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' [23] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Kanazawa, M.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Black, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Jacobs, and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Malik, “End-to-end recovery of human shape and pose,” in CVPR, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' [24] Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Zhong, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Zheng, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Zheng, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Li, and Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Yang, “Camera style adaptation for person re-identification,” in CVPR, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' [25] Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Zheng, X.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Yang, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Yu, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Zheng, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Yang, and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Kautz, “Joint discriminative and generative learning for person re- identification,” in CVPR, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' [26] H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Zhang, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Cisse, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Dauphin, and D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Lopez-Paz, “mixup: Beyond empirical risk minimization,” in ICLR, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' [27] V.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Verma, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Lamb, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Beckham, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Najafi, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Mitliagkas, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Lopez- Paz, and Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Bengio, “Manifold mixup: Better representations by interpolating hidden states,” in ICML, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' [28] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Beckham, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Honari, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Verma, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Lamb, F.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Ghadiri, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Hjelm, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Bengio, and C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Pal, “On adversarial mixup resynthesis,” NeurIPS, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' [29] R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Hadsell, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Chopra, and Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' LeCun, “Dimensionality reduction by learning an invariant mapping,” in CVPR, 2006.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' [30] Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Wu, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Xiong, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' X.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Yu, and D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Lin, “Unsupervised feature learning via non-parametric instance discrimination,” in CVPR, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' [31] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Caron, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Misra, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Mairal, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Goyal, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Bojanowski, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Joulin, “Unsupervised learning of visual features by contrasting cluster assignments,” in NeurIPS, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' [32] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='-B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Grill, F.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Strub, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Altch´e, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Tallec, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Richemond, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Buchatskaya, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Doersch, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Pires, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Guo, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Azar et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Strehl and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Ghosh, “Cluster ensembles — a knowledge reuse framework for combining multiple partitions,” JMLR, 2002.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' [72] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Ye, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Lan, and P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Yuen, “Robust anchor embedding for unsupervised video person re-identification in the wild,” in ECCV, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' [73] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Chen, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Zhu, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Gong, “Deep association learning for unsupervised video person re-identification,” in BMVC, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' [74] M.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Li, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Zhu, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Gong, “Unsupervised person re- identification by deep learning tracklet association,” in ECCV, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' [75] ——, “Unsupervised tracklet person re-identification,” IEEE trans- actions on pattern analysis and machine intelligence, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' [76] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Xiao, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Li, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Wang, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Lin, and X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Wang, “Joint detection and identification feature learning for person search,” CVPR, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' [77] J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Zhang, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Pepose, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Joo, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Ramanan, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Malik, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Kanazawa, “Perceiving 3d human-object spatial arrangements from a single image in the wild,” in ECCV, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Hao Chen received the B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' degree from Wuhan University in 2014, and the M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' degree from CentraleSup´elec and Universit´e Paris Saclay in 2017.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' He is currently working towards his Ph.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' at Inria Sophia Antipolis and Universit´e Cˆote d’Azur.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' His research interests include person re- identification and unsupervised learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Home- page: https://chenhao2345.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='github.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='io/.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Yaohui Wang received the B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' degree from Xidian University in 2015, and the M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' de- gree from ENSIIE and Universit´e Paris Saclay in 2017.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' He is currently working towards his Ph.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' at Inria Sophia Antipolis, STARS Team and Universit´e Cˆote d’Azur.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' His current research focuses on image and video synthesis, activity recognition and representation learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Benoit Lagadec is a Research Engineer at Eu- ropean Systems Integration.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' He currently works on developing video analysis solutions based on abnormal human behavior.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Previously, he worked in public research at Ifremer, where he was able to develop image processing algo- rithms adapted to the difficulty of underwater imaging : denoising, segmentation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' 二15 Antitza Dantcheva is a Research Scientist (CRCN) with the STARS team of INRIA Sophia Antipolis, France.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Previously, she was a Marie Curie fellow at Inria and a Postdoctoral Fellow at the Michigan State University and the West Virginia University, USA.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' She received her Ph.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content='D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' degree from T´el´ecom ParisTech/Eurecom in im- age processing and biometrics in 2011.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Her re- search is in computer vision and specifically in designing algorithms that seek to learn suitable representations of the human face in interpreta- tion and generation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' Francois Bremond received the PhD degree from INRIA in video understanding in 1997, and he pursued his research work as a post doc- torate at the University of Southern California (USC) on the interpretation of videos taken from Unmanned Airborne Vehicle (UAV).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' In 2007, he received the HDR degree (Habilitation a Diriger des Recherches) from Nice University on Scene Understanding.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' He created the STARS team on the 1st of January 2012.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' He is the research director at INRIA Sophia Antipolis, France.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' He has conducted research work in video understanding since 1993 at Sophia- Antipolis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' He is author or co-author of more than 140 scien- tific papers published in international journals or conferences in video understanding.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' He is a handling editor for MVA and a reviewer for several international journals (CVIU, IJPRAI, IJHCS, PAMI, AIJ, Eurasip, JASP) and conferences (CVPR, ICCV, AVSS, VS, ICVS).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' He has (co- )supervised 26 PhD theses.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zNAyT4oBgHgl3EQf0vni/content/2301.00725v1.pdf'} +page_content=' He is an EC INFSO and French ANR Expert for reviewing projects.' 
diff --git a/zdAzT4oBgHgl3EQfC_pK/content/tmp_files/2301.00968v1.pdf.txt b/zdAzT4oBgHgl3EQfC_pK/content/tmp_files/2301.00968v1.pdf.txt new file mode 100644 index 0000000000000000000000000000000000000000..55bd4e169935e97241cc6dda5aaee39935916d6b --- /dev/null +++ b/zdAzT4oBgHgl3EQfC_pK/content/tmp_files/2301.00968v1.pdf.txt @@ -0,0 +1,451 @@ +Interpretation and Analysis of the Steady-State Neural Response to +Complex Sequential Structures: a Methodological Note + +Nai Ding +College of Biomedical Engineering and Instrument Science, +Zhejiang University, Hangzhou, China + + +Abstract +Frequency tagging is a powerful approach to investigate the neural processing of +sensory features, and has recently been adapted to study the neural correlates of +superordinate structures, i.e., chunks, in complex sequences such as speech and +music. The nesting of sequence structures, the necessity to control the periodicity in +sensory features, and the low-frequency nature of sequence structures pose new +challenges for data analysis and interpretation. Here, I discuss how to interpret the +frequency of a sequential structure, and factors that need to be considered when +analyzing the periodicity in a signal. Finally, a safe procedure is recommended for the +analysis of frequency-tagged responses. + + + + +1. Introduction +Frequency tagging is a powerful technique to extract the neural response tracking a +stimulus feature. In general, in the frequency tagging paradigm, a target stimulus +feature is periodically modulated at a frequency f. Consequently, the neural response +that dynamically tracks the stimulus feature also fluctuates at frequency f. The f-Hz +frequency-tagged response is often extracted using the Discrete Fourier Transform +(DFT) or wavelet transform.
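This extraction step can be sketched in a few lines of Python (all parameters here — sampling rate, tagging frequency, response latency, and noise level — are hypothetical, chosen only for illustration):

```python
import numpy as np

fs = 250          # sampling rate in Hz (hypothetical)
f_tag = 7.0       # tagging frequency f in Hz (hypothetical)
dur = 4.0         # analysis duration in s; 4 s holds exactly 28 cycles of 7 Hz

t = np.arange(int(fs * dur)) / fs
latency = 0.080   # the toy "response" lags the stimulus by 80 ms (hypothetical)
rng = np.random.default_rng(0)
response = np.sin(2 * np.pi * f_tag * (t - latency)) + rng.normal(0, 2.0, t.size)

# The f-Hz frequency-tagged response is the DFT coefficient at f_tag:
# a dot product of the recording with a complex sinusoid at f_tag.
k = int(round(f_tag * dur))             # 7 Hz falls on DFT bin k = 28
coef = np.fft.rfft(response)[k]
amplitude = 2 * np.abs(coef) / t.size   # recovers the ~1.0 response amplitude
phase = np.angle(coef)                  # reflects the 80-ms latency
print(amplitude, phase)
```

Because the 4-s window contains an integer number of 7-Hz cycles, the tagged response falls exactly on one DFT bin; the latency is absorbed by the phase of the coefficient, so the amplitude estimate does not depend on when the response occurs within the cycle.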
Frequency-tagging is a powerful paradigm for +electroencephalography (EEG) and magnetoencephalography (MEG) studies since it +can extract any neural response that follows the f-Hz change in the stimulus, +regardless of the latency or waveform of the response. The paradigm has been widely +applied to study visual (Norcia et al., 2015; Regan, 1977) and auditory (Galambos et +al., 1981; Picton et al., 2003) processing: The frequency-tagged response to periodic +changes in visual features, e.g., luminance, is referred to as the Steady-State Visual +Evoked Potential (SSVEP), while the frequency-tagged response to periodic changes +in auditory features, e.g., intensity, is referred to as the auditory Steady-State +Response (aSSR). These responses are widely applied to study the basic properties of +sensory encoding (Herrmann, 2001; Ross et al., 2000; Wang et al., 2012; Wong et al., +2007) and cognitive control (Andersen et al., 2008; Elhilali et al., 2009; Gao et al., +2021). + +More recently, the frequency-tagging paradigm has been applied to study the neural +processing of superordinate structures in complex sequences, e.g., speech and music: +The hypothesis in these studies is that a mentally constructed superordinate sequence +structure, i.e., a sentence, is neurally represented by a response whose duration +matches the duration of the structure in the stimulus (Buiatti et al., 2009; Ding et al., +2016; Nozaradan et al., 2011). On the one hand, frequency tagging provides a +powerful paradigm to investigate the neural processing of a chunk in contrast to a +brief stimulus event and has stimulated a large number of studies (Batterink & Paller, +2019; Benitez-Burraco & Murphy, 2019; Choi et al., 2020; Glushko et al., 2022; +Henin et al., 2021; Kaufeld et al., 2020; Kazanina & Tavano, 2022; Keitel et al., 2018; +Lo et al., 2022; Lu et al., 2021; Makov et al., 2017; Meng et al., 2021; Meyer, 2018).
+On the other hand, the complexity of the sequence processing problem has also +posed new challenges for the analysis and interpretation of the frequency-tagged +responses. First, in traditional frequency tagging studies, each stimulus feature of +interest is tagged at a distinct frequency, while the structures in a complex sequence +are often nested so that different levels of structures cannot be tagged at unrelated +arbitrary frequencies. For example, in the sentence “the cute boy smiled”, the first three +words construct a noun phrase based on syntax. Nevertheless, the 3-word noun phrase +and the 4-word sentence are nested so that they cannot be frequency tagged at +unrelated frequencies. The nesting between structures leads to a dissociation between +structure duration and structure repetition period, which is discussed in Section 2.1. + +Second, traditional frequency tagging studies explicitly create periodic changes in a +stimulus feature, while studies on sequence structures sometimes want to avoid +such periodic changes in basic stimulus features to isolate the neural response +generated by internal mental processes. What is a neural response generated by +internal mental processes? For example, a metrical structure may be imagined when +listening to an isochronous beat sequence, and the neural response at the imagined +meter rate can reflect internally driven processes (Nozaradan et al., 2011). Similarly, +when a sequence of words is grouped into sentences based on syntactic rules, the +neural response at the sentence rate can reflect higher-level sentence processing (Ding +et al., 2016). In these situations, however, if a basic sensory feature has the same +periodicity as the imagined meter or syntactically constructed sentence, it is +ambiguous whether the neural response tracks the sensory feature or the sequence +structures. Therefore, it is often necessary to check the periodicity in stimulus +features.
Caution, however, is needed since some types of periodicities are not +captured by the Fourier transform, which is discussed in Section 2.2. + +Third, the analysis of responses to frequency-tagged sequence structures is sometimes +prone to artifacts that seldom affect the analysis of traditional frequency-tagged +responses. Sequence structures often correspond to a very low frequency, e.g., < 3 +Hz, and such a low-frequency response may be contaminated by overlap between the +analysis epochs (Benjamin et al., 2021). Section 3 illustrates why such artifacts may be +generated and discusses potential guidelines for appropriate analysis of the frequency-tagged +responses, including the selection of analysis duration and whether a +smoothing window should be used. This article discusses common technical issues, +instead of the analysis of a specific experiment. However, to facilitate interpretation, a +hypothetical experiment is provided in Fig. 1A, but the conclusions are not limited to +this example. On the other hand, the target audience is experimentalists instead of +engineers. Therefore, the article attempts to explain ideas using illustrations and skips +mathematical derivations. The mathematical basis of the DFT can be found in classic +textbooks such as Oppenheim et al. (2001). + +2. What is not reflected by the Fourier transform +2.1 Frequency may not reflect the time constant or signal duration +For frequency-domain analysis, a central concept is frequency, which corresponds to +the period of a signal. The period of a signal, however, does not necessarily coincide +with other time constants of the signal. For example, an exponential signal e^(-t/τ) has a +time constant τ, but the signal is aperiodic and τ is not a period of the signal. Even for +a periodic signal, its period may dissociate from the time constant or duration of the +waveform within a period, and some examples are shown in Fig. 1B.
In these +examples, the signals have a period of 1 s, and the waveform within a period is shown +in the left panel. The temporal parameters of the signal, including the time constant of +an exponential function, the duration of a sawtooth signal, and the frequency of a single-cycle +sinusoid, affect the shape of the Fourier spectrum but generally do not lead to any +spectral peak corresponding to these parameters. Instead, since the signal repeats at a +rate of 1 Hz, the spectrum shows peaks at 1 Hz and its harmonically related +frequencies, i.e., 2 Hz, 3 Hz, etc. When the period of the signal changes, however, the +spectral peaks shift accordingly, even if the waveform within a cycle remains +unchanged (Fig. 1C). See Zhou et al. (2016) for more illustrations of +how the spectrum is influenced by the signal repetition rate and the waveform within +a period. + + +Figure 1. Peaks in the spectrum reflect the periodicity of a signal. A) A hypothetical +experimental condition, in which a noun phrase (NP) is embedded in a sentence (S). +The duration of the NP is either 0.75 s or 0.5 s, and a neural response is hypothesized +to be modulated by the duration of the NP. B) Signals that repeat every 1 s and the +corresponding spectra. The left panel shows the waveform within a period, and the +black and blue curves have different time constants, i.e., 0.75 s and 0.5 s respectively. +The right panel shows the spectrum, i.e., the magnitude of the DFT of 10 +periods of the corresponding signal. The time constant and the corresponding +frequency are shown by the vertical dotted lines. The spectrum has peaks at 1 Hz, i.e., 1 +over the signal period, and harmonically related frequencies, regardless of the time +constant of the signal within a period. C) Signals that are constructed from the same +sawtooth waveform but have different repetition rates. The spectral peaks always +reflect the repetition rate.
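The dissociation between repetition period and time constant can be checked numerically (a minimal sketch; a 1-s-periodic decaying exponential stands in for the signals of Fig. 1B, and the sampling rate is an arbitrary choice):

```python
import numpy as np

fs = 100                      # sampling rate (Hz)
period = 1.0                  # repetition period of the signal (s)
n_periods = 10

def periodic_exponential(tau):
    """10 periods of a 1-s-periodic signal whose waveform within each
    period is a decaying exponential with time constant tau."""
    t = np.arange(int(fs * period)) / fs
    return np.tile(np.exp(-t / tau), n_periods)

for tau in (0.75, 0.5):       # two different time constants, same 1-s period
    x = periodic_exponential(tau)
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, d=1 / fs)
    # Excluding DC, the largest peak sits at 1 Hz, the repetition rate;
    # changing tau changes the peak heights but not the peak positions.
    peak_hz = freqs[1:][np.argmax(spectrum[1:])]
    print(tau, peak_hz)       # the peak stays at 1.0 Hz for both values of tau
```

Neither 1/0.75 Hz nor 1/0.5 Hz appears as a peak; only the 1-Hz repetition rate and its harmonics do, which is the point of Fig. 1B.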
+2.2 Frequency may not reflect the rate of change +Suppose a signal changes every T s. Intuitively, its Fourier spectrum should peak at +1/T Hz. This intuition, however, is not always true, and an example is given in Fig. 2, +in which the spectrum shows troughs at 1/T Hz and harmonically related frequencies. +When the signal is employed to modulate the gain of a 4-Hz sinusoid, the modulated +sinusoid does not show any power at 1/T Hz either. The purpose of these examples is +to show that the Fourier transform may be blind to some rhythms. Why does the +signal lack power at 1/T Hz? In the Fourier transform, the power at f is determined by +the dot product between the signal and sinusoids at frequency f (including both sine +and cosine). The signals in Fig. 2 contain no fluctuations within each T-s interval, and +therefore the signals have no correlation with sinusoids at 1/T Hz. Figure 3 illustrates the dot +product between signals. + + +Figure 2. The change rate of a signal can correspond to troughs in the spectrum. The +upper panel shows a signal that changes once every 1 s, and the lower panel is a 4-Hz +sinusoid that is amplitude modulated by the signal in the upper panel. In the +spectrum, troughs are observed at 1 Hz and harmonically related frequencies. + + + +Figure 3. Illustration of the dot product between signals, which is the basis of the +DFT. A) A 3-Hz sinusoid, which is employed to calculate the DFT coefficient at 3 +Hz. BC) Signals to analyze and their point-by-point product with the reference signal. +The top signal is similar to the signal in Fig. 2, while the other 3 signals are sinusoids +with the frequency shown by the number in panel B. The sum of the product signal, +i.e., the dot product between the two signals, is shown by the number in red in panel +C.
DE) Examples of signals that have a nonzero dot product with the reference signal. + + +3. Effects of the neural response analysis method +3.1 Overlapping epochs can introduce artifacts +A rhythm can be created from an arbitrary signal by adding delayed versions of +the signal to itself. An illustration is shown in Fig. 4A, in which the signal to analyze +only consists of a pulse at 4.8 s and is 0 otherwise. When the signal is chunked into +5-s epochs with 4-s overlap, however, the averaged epoch clearly becomes periodic, and +the period is the same as the distance between adjacent epochs, e.g., 1 s. Another +example is shown in Fig. 4B, in which white noise is chunked into 5-s epochs in the +same way. The spectrum averaged over 100 epochs clearly shows a peak at 1 Hz. In +fact, in hearing research, this method has been employed to generate pitch perception +based on, e.g., white noise (Yost, 1996). + +If inappropriate data epoching can introduce artifacts, why not directly apply the +Fourier transform to the unepoched data? A direct Fourier transform of the unepoched +data can indeed yield a high-frequency-resolution spectrum of the response. +Nevertheless, in real EEG/MEG recordings, strong artifacts caused by, e.g., head +movements or hardware glitches, can barely be avoided during a long recording, and +excluding recordings with large artifacts from further analyses is a common practice +in EEG/MEG analysis. It is nonoptimal, however, to throw away a long recording +based on a few sparsely located artifacts. Therefore, segmenting a long recording into +shorter epochs and only removing epochs with obvious artifacts is a common strategy. + +3.2 Analysis window determines the width of spectral peaks +Suppose a frequency-tagged neural response has a period of T s, and D seconds of +recording is transformed into the frequency domain using the DFT.
The DFT +spectrum consists of coefficients corresponding to discrete frequencies, i.e., 1/D Hz, +2/D Hz, 3/D Hz, etc. If D is a multiple of T, the frequency-tagged response is resolved +in the spectrum. In other words, if D = kT, where k is an integer, the kth DFT +coefficient corresponds to 1/T Hz, i.e., the target frequency. In this case, the response +spectrum only has power at 1/T Hz and harmonically related frequencies. An example +is shown in Fig. 5A (upper panel), where T is 0.5 s, D is 5 s, and the neural response is +exactly a sinusoid. The response spectrum has a sharp peak at 2 Hz and the power in +adjacent frequency bins is 0. The DFT coefficients not at 2 Hz are zero since the dot +product between any two D-s-long sinusoids at frequencies resolved by the DFT is +zero (Fig. 3BC). When D is not a multiple of T, however, the DFT spectrum does not +have a frequency bin corresponding to 1/T Hz and the power of the signal spreads to +many frequency bins near 1/T Hz, a phenomenon known as frequency leakage. An +example is shown in Fig. 5B (upper panel), where T is still 0.5 s but D is 5.1 s. + +Figure 4. Overlapping epochs can lead to spurious peaks in the spectrum. A) A +nonperiodic signal that is composed of a single pulse. B) The signal in A is segmented +into 5-s epochs that have 4-s overlap with each other. C) The average of the epochs in +B. D) The same epoching process is applied to white noise and the resulting +waveform is shown. E) The spectrum of the signal in D. To obtain a robust result, +twenty independent white noise signals are generated and processed in the same way, and the +spectra are averaged. + + +A common strategy to alleviate frequency leakage is to multiply the signal by a smoothing window +before the Fourier transform. The spectra of the windowed signals are +shown in Fig. 5 (lower panel).
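The contrast between exact concentration, leakage, and windowing can be reproduced numerically (a minimal sketch using an idealized 2-Hz sinusoid as the frequency-tagged response, i.e., T = 0.5 s; the sampling rate is an arbitrary choice):

```python
import numpy as np

fs = 100        # sampling rate (Hz)
f_tag = 2.0     # target frequency, i.e., 1/T with T = 0.5 s

def norm_power(duration, window=False):
    """Normalized DFT power spectrum of a 2-Hz sinusoid of the given
    duration, optionally multiplied by a Hanning window first."""
    n = int(round(duration * fs))
    x = np.sin(2 * np.pi * f_tag * np.arange(n) / fs)
    if window:
        x = np.hanning(n) * x
    p = np.abs(np.fft.rfft(x)) ** 2
    return p / p.sum()

# D = 5 s is a multiple of T: essentially all power falls in the single 2-Hz bin.
print(norm_power(5.0).max())
# D = 5.1 s is not: 2 Hz is no longer a DFT bin, and the power leaks
# into several bins near 2 Hz, so no single bin holds all the power.
print(norm_power(5.1).max())
# A Hanning window makes the spectrum insensitive to D, but spreads the
# power over about three bins even when D is a multiple of T.
print(norm_power(5.0, window=True).max())
```

With D = 5 s the maximum normalized power is essentially 1; with D = 5.1 s it drops well below 1 because of leakage; with the Hanning window roughly two thirds of the power stays in the 2-Hz bin and the rest moves into the neighboring bins, whatever the exact duration.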
With the smoothing window, the signal duration no longer strongly affects the shape of the spectrum, but the spectrum always has nonzero power in frequency bins near the target frequency, i.e., 2 Hz. The main difference between the methods in Fig. 5 is whether all the power of a sinusoid concentrates in a single frequency bin or spreads over several bins. It is not further illustrated here, but the same conclusions apply to other variations in the analysis method, such as padding zeros to the signal or using the wavelet transform instead of the Fourier transform.

Should we care about whether the signal power concentrates in a single frequency bin or not? The answer is yes, under some conditions. For example, a convenient approach to test the statistical significance of a frequency-tagged response is to compare the power at the target frequency with the power in adjacent frequency bins (Benjamin et al., 2021; Ding et al., 2016; Nozaradan et al., 2011). The statistical power of this approach is clearly compromised when the power in adjacent frequency bins is elevated above baseline. Even when the statistical significance of the frequency-tagged response is tested using other methods, e.g., in comparison with a control condition that does not have the frequency-tagged response (Andersen et al., 2008), the statistical power of the test can benefit from concentrating all power of the frequency-tagged response into a single frequency bin.

More generally, when the periodicity of a signal is unknown and needs to be determined using Fourier analysis, a smoothing window often helps. Nevertheless, in the frequency-tagging paradigm, the target frequency is known, and therefore a smoothing window is not necessary. In other words, in the frequency-tagging approach, the purpose of data analysis is not to estimate the periodicity of a response but to detect the presence of a response with a known frequency.
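Such detection amounts to reading out the DFT coefficient at the target frequency, i.e., the dot product between the recording and a sinusoid at that frequency. A minimal sketch; the sampling rate, duration, noise level, and the helper name `dft_magnitude` are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, dur, f0 = 100, 5.0, 2.0              # illustrative sampling rate (Hz), duration (s), target (Hz)
t = np.arange(int(fs * dur)) / fs        # 5 s covers an integer number of 2-Hz cycles

def dft_magnitude(x, f0, fs):
    """|dot product| of x with a complex sinusoid at f0, read from the DFT bin at f0."""
    k = round(f0 * len(x) / fs)          # bin index; exact only for an integer number of cycles
    return np.abs(np.fft.rfft(x)[k])

with_response = np.sin(2 * np.pi * f0 * t + 0.7) + rng.standard_normal(t.size)
noise_only = rng.standard_normal(t.size)
print(dft_magnitude(with_response, f0, fs), dft_magnitude(noise_only, f0, fs))
```

Because the complex DFT coefficient is used, the detection statistic is insensitive to the response phase (0.7 rad here).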
Based on signal detection theory (Poor, 1998), optimal detection of a sinusoid generally involves calculating the dot product between the recorded signal and the target signal, which can be viewed as a sinusoid in the frequency-tagging paradigm, and such a dot product can be conveniently calculated using the DFT.


Figure 5. Frequency leakage and windowing. The signal to analyze is a 2-Hz sinusoid, and the duration of the signal is 5 s in panel A and 5.1 s in panel B. The upper panel shows the DFT of the signal and the lower panel shows the DFT of the signal smoothed by a Hanning window.

4. Summary
First, in the frequency-tagging paradigm, the target frequency is the frequency at which a stimulus feature or sequence structure repeats, which in general does not relate to how long the feature or structure lasts or how fast it varies within each period. Second, the Fourier transform does not provide a one-size-fits-all solution to extract all periodicities in a signal. On the stimulus side, caution is needed, e.g., to make sure that a stimulus does not contain any conceivable periodicity at the target frequency. On the response side, more advanced feature extraction methods may be necessary to identify a frequency-tagged response. For example, for the signals in Fig. 2, taking the absolute value of the first-order derivative of the signal can reflect the 1-Hz periodicity in the signal.

Finally, I recommend the following as a relatively safe procedure to analyze frequency-tagged responses.
(1) The response being analyzed should contain exactly an integer number of periods of the frequency-tagged response. More specifically, if the response sampling rate is F and the response is frequency-tagged at f, the number of samples per cycle of the response is F/f, which does not need to be an integer.
Nevertheless, if k cycles are included in the analysis window, the total number of samples in the analysis window, i.e., kF/f, should be an integer, where k is also an integer.
(2) When the stimulus lasts for a very long duration (e.g., several minutes), the response recorded throughout the presentation of the stimulus can be directly transformed into the frequency domain. Alternatively, it can be segmented into shorter epochs, e.g., to remove epochs with large artifacts, and averaged. The epochs, however, should not overlap.
(3) No smoothing window is necessary when performing the Fourier analysis, as long as the target frequency is known and the analysis window contains an integer number of cycles of the target response.


Acknowledgement
I thank Wenhui Sun for help with formatting the bibliography. This work was supported by the National Natural Science Foundation of China (32222035) and the Key R&D Program of Zhejiang (2022C03011).

References
Andersen, S. K., Hillyard, S. A., & Muller, M. M. (2008). Attention facilitates multiple stimulus features in parallel in human visual cortex. Curr Biol, 18(13), 1006-1009. https://doi.org/10.1016/j.cub.2008.06.030
Batterink, L. J., & Paller, K. A. (2019). Statistical learning of speech regularities can occur outside the focus of attention. Cortex, 115, 56-71. https://doi.org/10.1016/j.cortex.2019.01.013
Benitez-Burraco, A., & Murphy, E. (2019). Why Brain Oscillations Are Improving Our Understanding of Language. Front Behav Neurosci, 13, 190. https://doi.org/10.3389/fnbeh.2019.00190
Benjamin, L., Dehaene-Lambertz, G., & Fló, A. (2021). Remarks on the analysis of steady-state responses: Spurious artifacts introduced by overlapping epochs. Cortex, 142. https://doi.org/10.1016/j.cortex.2021.05.023
Buiatti, M., Peña, M., & Dehaene-Lambertz, G. (2009). Investigating the neural correlates of continuous speech computation with frequency-tagged neuroelectric responses. Neuroimage, 44(2), 509-519.
https://doi.org/10.1016/j.neuroimage.2008.09.015
Choi, D., Batterink, L. J., Black, A. K., Paller, K. A., & Werker, J. F. (2020). Preverbal Infants Discover Statistical Word Patterns at Similar Rates as Adults: Evidence From Neural Entrainment. Psychological Science, 31(9), 1161-1173. https://doi.org/10.1177/0956797620933237
Ding, N., Melloni, L., Zhang, H., Tian, X., & Poeppel, D. (2016). Cortical tracking of hierarchical linguistic structures in connected speech. Nat Neurosci, 19(1), 158-164. https://doi.org/10.1038/nn.4186
Elhilali, M., Xiang, J., Shamma, S. A., & Simon, J. Z. (2009). Interaction between attention and bottom-up saliency mediates the representation of foreground and background in an auditory scene. PLoS Biol, 7(6), e1000129. https://doi.org/10.1371/journal.pbio.1000129
Galambos, R., Makeig, S., & Talmachoff, P. J. (1981). A 40-Hz auditory potential recorded from the human scalp. Proceedings of the National Academy of Sciences, 78(4), 2643-2647. https://doi.org/10.1073/pnas.78.4.2643
Gao, X., Wang, Y., Chen, X., & Gao, S. (2021). Interface, interaction, and intelligence in generalized brain–computer interfaces. Trends in Cognitive Sciences, 25(8), 671-684. https://doi.org/10.1016/j.tics.2021.04.003
Glushko, A., Poeppel, D., & Steinhauer, K. (2022). Overt and implicit prosody contribute to neurophysiological responses previously attributed to grammatical processing. Scientific Reports, 12(1), 14759. https://doi.org/10.1038/s41598-022-18162-3
Henin, S., Turk-Browne, N. B., Friedman, D., Liu, A., Dugan, P., Flinker, A., Doyle, W., Devinsky, O., & Melloni, L. (2021). Learning hierarchical sequence representations across human cortex and hippocampus. Science Advances, 7(8), eabc4530. https://doi.org/10.1126/sciadv.abc4530
Herrmann, C. S. (2001). Human EEG responses to 1-100 Hz flicker: resonance phenomena in visual cortex and their potential correlation to cognitive phenomena.
Exp Brain Res, 137(3-4), 346-353. +https://doi.org/10.1007/s002210100682 +Kaufeld, G., Bosker, H. R., Ten Oever, S., Alday, P. M., Meyer, A. S., & Martin, A. E. +(2020). Linguistic Structure and Meaning Organize Neural Oscillations into a +Content-Specific Hierarchy. J Neurosci, 40(49), 9467-9475. +https://doi.org/10.1523/JNEUROSCI.0302-20.2020 +Kazanina, N., & Tavano, A. (2022). What neural oscillations can and cannot do for +syntactic structure building. Nature Reviews Neuroscience. +https://doi.org/10.1038/s41583-022-00659-5 +Keitel, A., Gross, J., & Kayser, C. (2018). Perceptually relevant speech tracking in +auditory and motor cortex reflects distinct linguistic features. PLoS Biol, + +16(3), e2004473. https://doi.org/10.1371/journal.pbio.2004473 +Lo, C.-W., Tung, T.-Y., Ke, A. H., & Brennan, J. R. (2022). Hierarchy, Not Lexical +Regularity, Modulates Low-Frequency Neural Synchrony During Language +Comprehension. Neurobiology of Language, 3(4), 538-555. +https://doi.org/10.1162/nol_a_00077 +Lu, L., Sheng, J., Liu, Z., & Gao, J. H. (2021). Neural representations of imagined +speech revealed by frequency-tagged magnetoencephalography responses. +Neuroimage, 229, 117724. https://doi.org/10.1016/j.neuroimage.2021.117724 +Makov, S., Sharon, O., Ding, N., Ben-Shachar, M., Nir, Y., & Zion Golumbic, E. +(2017). Sleep Disrupts High-Level Speech Parsing Despite Significant Basic +Auditory Processing. J Neurosci, 37(32), 7772-7781. +https://doi.org/10.1523/JNEUROSCI.0168-17.2017 +Meng, Q., Hegner, Y. L., Giblin, I., McMahon, C., & Johnson, B. W. (2021). +Lateralized Cerebral Processing of Abstract Linguistic Structure in Clear and +Degraded Speech. Cereb Cortex, 31(1), 591-602. +https://doi.org/10.1093/cercor/bhaa245 +Meyer, L. (2018). The neural oscillations of speech processing and language +comprehension: state of the art and emerging mechanisms. Eur J Neurosci, +48(7), 2609-2621. https://doi.org/10.1111/ejn.13748 +Norcia, A. M., Appelbaum, L. G., Ales, J. 
M., Cottereau, B. R., & Rossion, B. (2015). +The steady-state visual evoked potential in vision research: A review. Journal +of Vision, 15(6), 4-4. https://doi.org/10.1167/15.6.4 +Nozaradan, S., Peretz, I., Missal, M., & Mouraux, A. (2011). Tagging the neuronal +entrainment to beat and meter. J Neurosci, 31(28), 10234-10240. +https://doi.org/10.1523/JNEUROSCI.0411-11.2011 +Oppenheim, A. V., Buck, J. R., & Schafer, R. W. (2001). Discrete-time signal +processing. Vol. 2. Upper Saddle River, NJ: Prentice Hall. +Picton, T. W., John, M. S., Dimitrijevic, A., & Purcell, D. (2003). Human auditory +steady-state responses: Respuestas auditivas de estado estable en humanos. +International Journal of Audiology, 42(4), 177-219. + +https://doi.org/10.3109/14992020309101316 +Poor, H. V. (1998). An introduction to signal detection and estimation. Springer +Science & Business Media. +Regan, D. (1977). Steady-state evoked potentials. Journal of the Optical Society of +America, 67(11), 1475-1489. https://doi.org/10.1364/JOSA.67.001475 +Ross, B., Borgmann, C., Draganova, R., Roberts, L., & Pantev, C. (2000). A high- +precision magnetoencephalographic study of human auditory steady-state +responses to amplitude-modulated tones. The Journal of the Acoustical Society +of America, 108, 679-691. https://doi.org/10.1121/1.429600 +Wang, Y., Ding, N., Ahmar, N., Xiang, J., Poeppel, D., & Simon, J. Z. (2012). +Sensitivity to temporal modulation rate and spectral bandwidth in the human +auditory system: MEG evidence. J Neurophysiol, 107(8), 2033-2041. +https://doi.org/10.1152/jn.00310.2011 +Wong, P. C. M., Skoe, E., Russo, N. M., Dees, T., & Kraus, N. (2007). Musical +experience shapes human brainstem encoding of linguistic pitch patterns. +Nature Neuroscience, 10(4), 420-422. https://doi.org/10.1038/nn1872 +Yost, W. A. (1996). Pitch of iterated rippled noise. The Journal of the Acoustical +Society of America, 100(1), 511-518. 
https://doi.org/10.1121/1.415873
Zhou, H., Melloni, L., Poeppel, D., & Ding, N. (2016). Interpretations of Frequency Domain Analyses of Neural Entrainment: Periodicity, Fundamental Frequency, and Harmonics. Front Hum Neurosci, 10, 274. https://doi.org/10.3389/fnhum.2016.00274

diff --git a/zdAzT4oBgHgl3EQfC_pK/content/tmp_files/load_file.txt b/zdAzT4oBgHgl3EQfC_pK/content/tmp_files/load_file.txt
new file mode 100644
index 0000000000000000000000000000000000000000..f6226c0e893e66a00ddfc10b92cab5bb04041446
--- /dev/null
+++ b/zdAzT4oBgHgl3EQfC_pK/content/tmp_files/load_file.txt
@@ -0,0 +1,666 @@
filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf,len=665

Interpretation and Analysis of the Steady-State Neural Response to Complex Sequential Structures: a Methodological Note
Nai Ding
College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, China

Abstract
Frequency tagging is a powerful approach to investigate the neural processing of sensory features, and has recently been adapted to study the neural correlates of superordinate structures, i.e., chunks, in complex sequences such as speech and music. The nesting of sequence structures, the necessity to control the periodicity in sensory features, and the low-frequency nature of sequence structures pose new challenges for data analysis and interpretation.
Here, I discuss how to interpret the frequency of a sequential structure, and factors that need to be considered when analyzing the periodicity in a signal. Finally, a safe procedure is recommended for the analysis of frequency-tagged responses.

1. Introduction
Frequency tagging is a powerful technique to extract the neural response tracking a stimulus feature. In general, in the frequency-tagging paradigm, a target stimulus feature is periodically modulated at a frequency f. Consequently, the neural response that dynamically tracks the stimulus feature also fluctuates at frequency f. The f-Hz frequency-tagged response is often extracted using the discrete Fourier transform (DFT) or the wavelet transform.
Frequency tagging is a powerful paradigm for electroencephalography (EEG) and magnetoencephalography (MEG) studies, since it can extract any neural response that follows the f-Hz change in the stimulus, regardless of the latency or waveform of the response. The paradigm has been widely applied to study visual (Norcia et al., 2015; Regan, 1977) and auditory (Galambos et al., 1981; Picton et al., 2003) processing: the frequency-tagged response to periodic changes in visual features, e.g., luminance, is referred to as the steady-state visual evoked potential (SSVEP), while the frequency-tagged response to periodic changes in auditory features, e.g., intensity, is referred to as the auditory steady-state response (aSSR). These responses are widely applied to study the basic properties of sensory encoding (Herrmann, 2001; Ross et al., 2000; Wang et al., 2012; Wong et al., 2007) and cognitive control (Andersen et al., 2008; Elhilali et al., 2009; Gao et al., 2021). More recently, the frequency-tagging paradigm has been applied to study the neural processing of superordinate structures in complex sequences, e.g., speech and music: the hypothesis in these studies is that a mentally constructed superordinate sequence structure, i.e., a sentence, is neurally represented by a response whose duration matches the duration of the structure in the stimulus (Buiatti et al., 2009; Ding et al., 2016; Nozaradan et al., 2011).
On the one hand, frequency tagging provides a powerful paradigm to investigate the neural processing of a chunk, in contrast to a brief stimulus event, and has stimulated a large number of studies (Batterink & Paller, 2019; Benitez-Burraco & Murphy, 2019; Choi et al., 2020; Glushko et al., 2022; Henin et al., 2021; Kaufeld et al., 2020; Kazanina & Tavano, 2022; Keitel et al., 2018; Lo et al., 2022; Lu et al., 2021; Makov et al., 2017; Meng et al., 2021; Meyer, 2018). On the other hand, the complexity of the sequence-processing problem has also created more challenges for the analysis and interpretation of frequency-tagged responses.
First, in traditional frequency-tagging studies, each stimulus feature of interest is tagged at a distinct frequency, while the structures in a complex sequence are often nested, so that different levels of structures cannot be tagged at unrelated arbitrary frequencies. For example, in the sentence "the cute boy smiled", the first three words construct a noun phrase based on syntax. Nevertheless, the 3-word noun phrase and the 4-word sentence are nested, so they cannot be frequency-tagged at unrelated frequencies. The nesting between structures leads to a dissociation between structure duration and structure repetition period, which is discussed in Section 2.1. Second, traditional frequency-tagging studies explicitly create periodic changes in a stimulus feature, while studies on sequence structures sometimes want to avoid such periodic changes in basic stimulus features, to isolate the neural response generated by internal mental processes. What is a neural response generated by internal mental processes?
For example, a metrical structure may be imagined when listening to an isochronous beat sequence, and the neural response at the imagined meter rate can reflect internally driven processes (Nozaradan et al., 2011). Similarly, when a sequence of words is grouped into sentences based on syntactic rules, the neural response at the sentence rate can reflect higher-level sentence processing (Ding et al., 2016). In these situations, however, if a basic sensory feature has the same periodicity as the imagined meter or syntactically constructed sentence, it is ambiguous whether the neural response tracks the sensory feature or the sequence structure. Therefore, it is often necessary to check the periodicity in stimulus features. Caution, however, is needed since some types of periodicities are not captured by the Fourier transform, which is discussed in Section 2.2.
Third, the analysis of responses to frequency-tagged sequence structures is sometimes prone to artifacts that seldom affect the analysis of traditional frequency-tagged responses. Sequence structures often correspond to a very low frequency, e.g., < 3 Hz, and such a low-frequency response may be contaminated by overlap between analysis epochs (Benjamin et al., 2021). Section 3 illustrates why such artifacts may be generated and discusses potential guidelines for appropriate analysis of frequency-tagged responses, including the selection of analysis duration and whether a smoothing window should be used. This article discusses common technical issues, instead of the analysis of a specific experiment. However, to facilitate interpretation, a hypothetical experiment is provided in Fig. 1A, but the conclusions are not limited to this example. On the other hand, the target audience is experimentalists instead of engineers. Therefore, the article attempts to explain ideas using illustrations and skips mathematical derivations. The mathematical basis of the DFT can be found in classic textbooks such as Oppenheim et al. (2001).

2. What is not reflected by the Fourier transform
2.1 Frequency may not reflect the time constant or signal duration
For frequency-domain analysis, a central concept is frequency, which corresponds to the period of a signal. The period of a signal, however, does not necessarily coincide with other time constants of a signal.
For example, an exponential signal e^(-t/τ) has a time constant τ, but the signal is aperiodic and τ is not a period of the signal. Even for a periodic signal, its period may dissociate from the time constant or duration of the waveform within a period, and some examples are shown in Fig. 1B. In these examples, the signals have a period of 1 s, and the waveform within a period is shown in the left panel. The temporal parameters of the signal, including the time constant of an exponential function, the duration of a sawtooth signal, and the frequency of a single-cycle sinusoid, affect the shape of the Fourier spectrum but generally do not lead to any spectral peak corresponding to these parameters. Instead, since the signal repeats at a rate of 1 Hz, the spectrum shows peaks at 1 Hz and its harmonically related frequencies, i.e., 2 Hz, 3 Hz, etc.
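This repetition-rate behavior is easy to check numerically. The sketch below (sampling rate and time constants are arbitrary choices for illustration) repeats an exponential-decay waveform every 1 s and locates the largest non-DC spectral peak, which falls at the 1-Hz repetition rate rather than at 1/τ:

```python
import numpy as np

fs = 100                     # sampling rate (Hz), illustrative choice
period = 1.0                 # repetition period T = 1 s -> peaks at 1, 2, 3, ... Hz
n_periods = 10
t_one = np.arange(0, period, 1 / fs)

for tau in (0.75, 0.5):      # two different time constants, same period
    one_cycle = np.exp(-t_one / tau)          # waveform within one period
    signal = np.tile(one_cycle, n_periods)    # repeat it -> 1-s periodicity
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, 1 / fs)
    # the largest non-DC peak sits at the repetition rate, not at 1/tau
    peak_freq = freqs[1:][np.argmax(spectrum[1:])]
    print(tau, peak_freq)
```

Changing τ reshapes the relative heights of the harmonics but does not move the peaks away from multiples of 1 Hz.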
When the period of the signal changes, however, the spectral peaks shift accordingly, even if the waveform within a cycle remains unchanged (Fig. 1C). See Zhou et al. (2016) for more illustrations of how the spectrum is influenced by the signal repetition rate and the waveform within a period.

Figure 1. Peaks in the spectrum reflect the periodicity of a signal. A) A hypothetical experimental condition, in which a noun phrase (NP) is embedded in a sentence (S). The duration of the NP is either 0.75 s or 0.5 s, and a neural response is hypothesized to be modulated by the duration of the NP. B) Signals that repeat every 1 s and the corresponding spectra. The left panel shows the waveform within a period, and the black and blue curves have different time constants, i.e., 0.75 s and 0.5 s respectively. The right panel shows the spectrum, i.e., the magnitude of the DFT of 10 periods of the corresponding signal. The time constant and the corresponding frequency are shown by the vertical dotted lines.
The spectrum has peaks at 1 Hz (1 over the signal period) and harmonically related frequencies, regardless of the time constant of the signal within a period. C) Signals that are constructed from the same sawtooth waveform but have different repetition rates. The spectral peaks always reflect the repetition rate.

2.2 Frequency may not reflect the rate of change

Suppose a signal changes every T s. Intuitively, its Fourier spectrum should peak at 1/T Hz. This intuition, however, is not always true, and an example is given in Fig. 2, in which the spectrum shows troughs at 1/T Hz and harmonically related frequencies. When the signal is employed to modulate the gain of a 4-Hz sinusoid, the modulated sinusoid does not show any power at 1/T Hz either. The purpose of these examples is to show that the Fourier transform may be blind to some rhythms. Why does the signal lack power at 1/T Hz?
In the Fourier transform, the power at f is determined by the dot product between the signal and sinusoids at frequency f (including both sine and cosine). The signals in Fig. 2 contain no fluctuations within each T s, and therefore the signal has no correlation with sinusoids at 1/T Hz. Figure 3 illustrates the dot product between signals.

Figure 2. The change rate of a signal can correspond to troughs in the spectrum. The upper panel shows a signal that changes once every 1 s, and the lower panel is a 4-Hz sinusoid that is amplitude modulated by the signal in the upper panel. In the spectrum, troughs are observed at 1 Hz and harmonically related frequencies.

Figure 3. Illustration of the dot product between signals, which is the basis of the DFT. A) A 3-Hz sinusoid, which is employed to calculate the DFT coefficient at 3 Hz. BC) Signals to analyze and their point-by-point product with the reference signal. The top signal is similar to the signal in Fig. 2, while the other 3 signals are sinusoids with the frequency shown by the number in panel B. The sum of the product signal, i.e., the dot product between the two signals, is shown by the number in red in panel C. DE) Examples of signals that have a nonzero dot product with the reference signal.
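The zero-correlation explanation can be verified directly. In the sketch below (random step levels and the sampling rate are illustrative choices), a signal that is constant within each 1-s segment has essentially no DFT power at 1 Hz, even though it changes at exactly that rate:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 100
T = 1.0                                   # the signal changes once every T s
levels = rng.standard_normal(10)          # one random level per T-s segment
signal = np.repeat(levels, int(T * fs))   # piecewise-constant signal, 10 s long

freqs = np.fft.rfftfreq(signal.size, 1 / fs)
spectrum = np.abs(np.fft.rfft(signal))

k = np.argmin(np.abs(freqs - 1 / T))      # frequency bin at 1/T Hz
print(spectrum[k])                        # ~0: no power at the change rate
```

Within each T-s segment, a 1/T-Hz sinusoid completes exactly one cycle and sums to zero against the constant level, so the dot product vanishes.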
3. Effects of the neural response analysis method

3.1 Overlapping epochs can introduce artifacts

A rhythm can be created from an arbitrary signal by adding delayed versions of the signal to itself. An illustration is shown in Fig. 4A, in which the signal to analyze only consists of a pulse at 4.8 s and is 0 otherwise. When the signal is chunked into 5-s epochs with 4-s overlap, however, the averaged epoch clearly becomes periodic, and the period is the same as the distance between adjacent epochs, e.g., 1 s.
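This averaging artifact can be reproduced in a few lines. The sketch below follows the Fig. 4 example (a single pulse at 4.8 s, 5-s epochs with 4-s overlap; the 60-s recording length and sampling rate are illustrative assumptions): the average of the overlapping epochs contains copies of the pulse spaced 1 s apart, i.e., at the hop interval.

```python
import numpy as np

fs = 100
signal = np.zeros(60 * fs)
signal[int(4.8 * fs)] = 1.0               # a single, nonperiodic pulse

epoch_len, hop = 5 * fs, 1 * fs           # 5-s epochs, 4-s overlap (1-s hop)
starts = range(0, signal.size - epoch_len + 1, hop)
epochs = np.stack([signal[s:s + epoch_len] for s in starts])
avg = epochs.mean(axis=0)

pulse_idx = np.flatnonzero(avg > 0)
print(np.diff(pulse_idx) / fs)            # the pulse now repeats every 1 s
```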
Another example is shown in Fig. 4B, in which white noise is chunked into 5-s epochs in the same way. The spectrum averaged over 100 epochs clearly shows a peak at 1 Hz. In fact, in hearing research, this method has been employed to generate pitch perception based on, e.g., white noise (Yost, 1996).

If inappropriate data epoching can introduce artifacts, why not directly apply the Fourier transform to the unepoched data? A direct Fourier transform of the unepoched data can indeed yield a high-frequency-resolution spectrum of the response. Nevertheless, in real EEG/MEG recordings, strong artifacts caused by, e.g., head movements or hardware glitches, can barely be avoided during a long recording, and excluding recordings with large artifacts from further analyses is a common practice in EEG/MEG analysis. It is nonoptimal, however, to throw away a long recording based on a few sparsely located artifacts. Therefore, segmenting a long recording into shorter epochs and only removing epochs with obvious artifacts is a common strategy.

3.2 Analysis window determines the width of spectral peaks

Suppose a frequency-tagged neural response has a period of T s, and D seconds of recording is transformed into the frequency domain using the DFT. The DFT spectrum consists of coefficients corresponding to discrete frequencies, i.e., 1/D Hz, 2/D Hz, 3/D Hz, etc. If D is a multiple of T, the frequency-tagged response is resolved in the spectrum.
In other words, if D = kT, where k is an integer, the kth DFT coefficient corresponds to 1/T Hz, i.e., the target frequency. In this case, the response spectrum only has power at 1/T Hz and harmonically related frequencies. An example is shown in Fig. 5A (upper panel), where T is 0.5 s, D is 5 s, and the neural response is exactly a sinusoid. The response spectrum has a sharp peak at 2 Hz, and the power in adjacent frequency bins is 0. The DFT coefficients not at 2 Hz are zero since the dot product between any two D-s-long sinusoids at frequencies resolved by the DFT is zero (Fig. 3BC).
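A minimal numerical check of this point, using the Fig. 5 parameters (a 2-Hz sinusoid, D = 5 s versus D = 5.1 s; the sampling rate is an arbitrary choice): when D is an integer multiple of T, essentially all of the power falls in the 2-Hz bin, and otherwise it spreads into neighboring bins.

```python
import numpy as np

fs, f0 = 100, 2.0                          # 2-Hz response, i.e., T = 0.5 s
fractions = {}
for D in (5.0, 5.1):                       # D = 10*T vs. D not a multiple of T
    t = np.arange(0, D, 1 / fs)
    power = np.abs(np.fft.rfft(np.sin(2 * np.pi * f0 * t))) ** 2
    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    # fraction of the total power falling in the bin closest to 2 Hz
    fractions[D] = power[np.argmin(np.abs(freqs - f0))] / power.sum()
print(fractions)
```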
When D is not a multiple of T, however, the DFT spectrum does not have a frequency bin corresponding to 1/T Hz, and the power of the signal spreads to many frequency bins near 1/T Hz, a phenomenon known as frequency leakage. An example is shown in Fig. 5B (upper panel), where T is still 0.5 s but D is 5.1 s.

Figure 4. Overlapping epochs can lead to spurious peaks in the spectrum. A) A nonperiodic signal that is composed of a single pulse. B) The signal in A is segmented into 5-s epochs that have 4-s overlap with each other. C) The average of the epochs in B. D) The same epoching process is applied to white noise and the resulting waveform is shown. E) The spectrum of the signal in D. To obtain a robust result, twenty independent white noise signals are generated and processed in the same way, and the spectra are averaged.

A common strategy to alleviate frequency leakage is to multiply the signal by a smoothing window before the Fourier transform.
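The effect of windowing can be sketched numerically (a Hanning window is used here, matching Fig. 5; the sampling rate is an arbitrary choice) by comparing how much power leaks far away from the 2-Hz target with and without the window:

```python
import numpy as np

fs, f0, D = 100, 2.0, 5.1                  # duration is not a multiple of the 0.5-s period
t = np.arange(0, D, 1 / fs)
x = np.sin(2 * np.pi * f0 * t)

leakage = {}
for name, win in (("rect", np.ones(x.size)), ("hanning", np.hanning(x.size))):
    power = np.abs(np.fft.rfft(x * win)) ** 2
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    # fraction of power landing more than 1 Hz away from the target
    leakage[name] = power[np.abs(freqs - f0) > 1.0].sum() / power.sum()
print(leakage)
```

The window suppresses leakage far from the target frequency, at the cost of spreading power across a few bins immediately around it.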
The spectra of the windowed signals are shown in Fig. 5 (lower panel). With the smoothing window, the signal duration no longer strongly affects the shape of the spectrum, but the spectrum always has nonzero power in frequency bins near the target frequency, i.e., 2 Hz. The main difference between the methods in Fig. 5 is whether all the power of a sinusoid concentrates in a single frequency bin or spreads to several bins. It is not further illustrated here, but the conclusions apply to other variations of the analysis method, such as padding zeros to the signal or using the wavelet transform instead of the Fourier transform. Shall we care about whether the signal power concentrates in a single frequency bin or not?
The answer is yes in some conditions. For example, a convenient approach to test the statistical significance of a frequency-tagged response is to compare the power at the target frequency with the power in adjacent frequency bins (Benjamin et al., 2021; Ding et al., 2016; Nozaradan et al., 2011). The statistical power of this approach is clearly compromised when the power in adjacent frequency bins is elevated from baseline. Even when the statistical significance of the frequency-tagged response is tested using other methods, e.g., in comparison with a control condition that does not have the frequency-tagged response (Andersen et al., 2008), the statistical power of the test can benefit from concentrating all power of the frequency-tagged response into a single frequency. More generally, when the periodicity of a signal is unknown and needs to be determined using Fourier analysis, a smoothing window often helps. Nevertheless, in the frequency-tagging paradigm, the target frequency is known, and therefore a smoothing window is not necessary. In other words, in the frequency-tagging approach, the purpose of data analysis is not to estimate the periodicity of a response but to detect the presence of a response with a known frequency. Based on signal detection theory (Poor, 1998), optimal detection of a sinusoid generally involves calculating the dot product between the recorded signal and the target signal, which can be viewed as a sinusoid in the frequency-tagging paradigm, and such a dot product can be conveniently calculated using the DFT.
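As a sketch of this detection view (the signal amplitude, noise level, and the 2-Hz target are illustrative assumptions), the magnitude of the DFT coefficient at the known target frequency can serve as the detection statistic, and it is much larger when the response is present:

```python
import numpy as np

rng = np.random.default_rng(1)
fs, f0, D = 100, 2.0, 10.0                # known target frequency, D = integer number of periods
t = np.arange(0, D, 1 / fs)
noise_only = rng.standard_normal(t.size)
with_resp = noise_only + 0.5 * np.sin(2 * np.pi * f0 * t)

freqs = np.fft.rfftfreq(t.size, 1 / fs)
k = np.argmin(np.abs(freqs - f0))
# |DFT coefficient| at f0 = dot product with the complex sinusoid at f0
stat_noise = np.abs(np.fft.rfft(noise_only))[k]
stat_resp = np.abs(np.fft.rfft(with_resp))[k]
print(stat_noise, stat_resp)
```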
Figure 5. Frequency leakage and windowing. The signal to analyze is a 2-Hz sinusoid; the duration of the signal is 5 s in panel A and 5.1 s in panel B. The upper panel shows the DFT of the signal and the lower panel shows the DFT of the signal smoothed by a Hanning window.

3. Summary

First, in the frequency-tagging paradigm, the target frequency is the frequency at which a stimulus feature or sequence structure repeats, which in general does not relate to how long the feature or structure lasts or how fast it varies within each period. Second, the Fourier transform does not provide a one-size-fits-all solution to extract all periodicities in a signal. On the stimulus side, caution is needed, e.g., when making sure that a stimulus does not contain any conceivable periodicity at a target frequency. On the response side, more advanced feature extraction methods may be necessary to identify a frequency-tagged response. For example, for the signals in Fig. 2, taking the absolute value of the first-order derivative of the signal can reflect the 1-Hz periodicity in the signal.

Finally, I recommend the following as a relatively safe procedure to analyze frequency-tagged responses. (1) The response being analyzed should contain exactly an integer number of periods of the frequency-tagged response.
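Recommendation (1), that the window hold a whole number of cycles, can be illustrated numerically in the spirit of Figure 5. The sketch below assumes a 100-Hz sampling rate and a 2-Hz target; these values, and the helper name, are illustrative only:

```python
import numpy as np

fs = 100  # assumed sampling rate in Hz (illustrative)

def power_concentration(duration_s, f=2.0):
    """Fraction of total DFT power falling in the single bin nearest
    the target frequency, for a pure sinusoid of the given duration."""
    n = int(round(duration_s * fs))
    t = np.arange(n) / fs
    x = np.sin(2 * np.pi * f * t)
    p = np.abs(np.fft.rfft(x)) ** 2
    k = int(round(f * n / fs))  # bin nearest the target frequency
    return p[k] / p.sum()

c_whole = power_concentration(5.0)  # 10 full cycles: power in one bin
c_leaky = power_concentration(5.1)  # 10.2 cycles: power leaks to neighbours
```

With exactly 10 cycles in the window, essentially all power lands in one bin; at 5.1 s the target frequency falls between bins and a substantial fraction of the power leaks into the neighbours, which is the leakage shown in panel B of Figure 5.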
More specifically, if the response sampling rate is F and the response is frequency-tagged at f, the number of samples per cycle of the response is F/f, which does not need to be an integer. Nevertheless, if k cycles are included in the analysis window, k should be an integer and the total length of the analysis window, i.e., kF/f samples, should also be an integer. (2) When the stimulus lasts for a very long duration (e.g., several minutes), the response recorded throughout the presentation of the stimulus can be directly transformed into the frequency domain. Alternatively, it can be segmented into shorter epochs, e.g.,
to remove epochs with large artifacts, and then averaged. The epochs, however, should not overlap. (3) No smoothing window is necessary when performing the Fourier analysis, since the target frequency is known and the analysis window contains an integer number of cycles of the target response.

Acknowledgement

I thank Wenhui Sun for helping to format the bibliography. This work was supported by the National Natural Science Foundation of China (32222035) and the Key R&D Program of Zhejiang (2022C03011).

References
Andersen, S. K., Hillyard, S. A., & Muller, M. M. (2008). Attention facilitates multiple stimulus features in parallel in human visual cortex. Curr Biol, 18(13), 1006-1009. https://doi.org/10.1016/j.cub.2008.06.030

Batterink, L. J., & Paller, K. A. (2019). Statistical learning of speech regularities can occur outside the focus of attention. Cortex, 115, 56-71. https://doi.org/10.1016/j.cortex.2019.01.013

Benitez-Burraco, A., & Murphy, E. (2019). Why Brain Oscillations Are Improving Our Understanding of Language. Front Behav Neurosci, 13, 190. https://doi.org/10.3389/fnbeh.2019.00190

Benjamin, L., Dehaene-Lambertz, G., & Fló, A. (2021). Remarks on the analysis of steady-state responses: Spurious artifacts introduced by overlapping epochs. Cortex, 142. https://doi.org/10.1016/j.cortex.2021.05.023

Buiatti, M., Peña, M., & Dehaene-Lambertz, G. (2009). Investigating the neural correlates of continuous speech computation with frequency-tagged neuroelectric responses. Neuroimage, 44(2), 509-519. https://doi.org/10.1016/j.neuroimage.2008.09.015

Choi, D., Batterink, L. J., Black, A. K., Paller, K. A., & Werker, J. F. (2020). Preverbal Infants Discover Statistical Word Patterns at Similar Rates as Adults: Evidence From Neural Entrainment. Psychological Science, 31(9), 1161-1173. https://doi.org/10.1177/0956797620933237

Ding, N., Melloni, L., Zhang, H., Tian, X., & Poeppel, D. (2016). Cortical tracking of hierarchical linguistic structures in connected speech. Nat Neurosci, 19(1), 158-164. https://doi.org/10.1038/nn.4186

Elhilali, M., Xiang, J., Shamma, S. A., & Simon, J. Z. (2009). Interaction between attention and bottom-up saliency mediates the representation of foreground and background in an auditory scene. PLoS Biol, 7(6), e1000129. https://doi.org/10.1371/journal.pbio.1000129

Galambos, R., Makeig, S., & Talmachoff, P. J. (1981). A 40-Hz auditory potential recorded from the human scalp. Proceedings of the National Academy of Sciences, 78(4), 2643-2647. https://doi.org/10.1073/pnas.78.4.2643

Gao, X., Wang, Y., Chen, X., & Gao, S. (2021). Interface, interaction, and intelligence in generalized brain–computer interfaces. Trends in Cognitive Sciences, 25(8), 671-684. https://doi.org/10.1016/j.tics.2021.04.003

Glushko, A., Poeppel, D., & Steinhauer, K. (2022). Overt and implicit prosody contribute to neurophysiological responses previously attributed to grammatical processing. Scientific Reports, 12(1), 14759. https://doi.org/10.1038/s41598-022-18162-3

Henin, S., Turk-Browne, N. B., Friedman, D., Liu, A., Dugan, P., Flinker, A., Doyle, W., Devinsky, O., & Melloni, L. (2021). Learning hierarchical sequence representations across human cortex and hippocampus. Science Advances, 7(8), eabc4530. https://doi.org/10.1126/sciadv.abc4530

Herrmann, C. S. (2001). Human EEG responses to 1-100 Hz flicker: resonance phenomena in visual cortex and their potential correlation to cognitive phenomena. Exp Brain Res, 137(3-4), 346-353. https://doi.org/10.1007/s002210100682

Kaufeld, G., Bosker, H. R., Ten Oever, S., Alday, P. M., Meyer, A. S., & Martin, A. E. (2020). Linguistic Structure and Meaning Organize Neural Oscillations into a Content-Specific Hierarchy. J Neurosci, 40(49), 9467-9475. https://doi.org/10.1523/JNEUROSCI.0302-20.2020

Kazanina, N., & Tavano, A. (2022). What neural oscillations can and cannot do for syntactic structure building. Nature Reviews Neuroscience. https://doi.org/10.1038/s41583-022-00659-5

Keitel, A., Gross, J., & Kayser, C. (2018). Perceptually relevant speech tracking in auditory and motor cortex reflects distinct linguistic features. PLoS Biol, 16(3), e2004473. https://doi.org/10.1371/journal.pbio.2004473

Lo, C.-W.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=', Tung, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content='-Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=', Ke, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=', & Brennan, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' (2022).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' Hierarchy, Not Lexical Regularity, Modulates Low-Frequency Neural Synchrony During Language Comprehension.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' Neurobiology of Language, 3(4), 538-555.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content='1162/nol_a_00077 Lu, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=', Sheng, J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=', Liu, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=', & Gao, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' (2021).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' Neural representations of imagined speech revealed by frequency-tagged magnetoencephalography responses.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' Neuroimage, 229, 117724.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content='1016/j.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content='neuroimage.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content='2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content='117724 Makov, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=', Sharon, O.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=', Ding, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=', Ben-Shachar, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=', Nir, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=', & Zion Golumbic, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' (2017).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' Sleep Disrupts High-Level Speech Parsing Despite Significant Basic Auditory Processing.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' J Neurosci, 37(32), 7772-7781.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content='1523/JNEUROSCI.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content='0168-17.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content='2017 Meng, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=', Hegner, Y.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=', Giblin, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=', McMahon, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=', & Johnson, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' (2021).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' Lateralized Cerebral Processing of Abstract Linguistic Structure in Clear and Degraded Speech.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' Cereb Cortex, 31(1), 591-602.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content='1093/cercor/bhaa245 Meyer, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' (2018).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' The neural oscillations of speech processing and language comprehension: state of the art and emerging mechanisms.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' Eur J Neurosci, 48(7), 2609-2621.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content='1111/ejn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content='13748 Norcia, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=', Appelbaum, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=', Ales, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=', Cottereau, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' R.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=', & Rossion, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' (2015).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' The steady-state visual evoked potential in vision research: A review.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' Journal of Vision, 15(6), 4-4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content='1167/15.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content='6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content='4 Nozaradan, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=', Peretz, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=', Missal, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=', & Mouraux, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' (2011).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' Tagging the neuronal entrainment to beat and meter.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' J Neurosci, 31(28), 10234-10240.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content='1523/JNEUROSCI.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content='0411-11.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content='2011 Oppenheim, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=', Buck, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=', & Schafer, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' (2001).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' Discrete-time signal processing.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' Vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' Upper Saddle River, NJ: Prentice Hall.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' Picton, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=', John, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=', Dimitrijevic, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=', & Purcell, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' (2003).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' Human auditory steady-state responses: Respuestas auditivas de estado estable en humanos.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' International Journal of Audiology, 42(4), 177-219.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content='3109/14992020309101316 Poor, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' (1998).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' An introduction to signal detection and estimation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' Springer Science & Business Media.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' Regan, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' (1977).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' Steady-state evoked potentials.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' Journal of the Optical Society of America, 67(11), 1475-1489.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content='1364/JOSA.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content='67.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content='001475 Ross, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=', Borgmann, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=', Draganova, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=', Roberts, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=', & Pantev, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' (2000).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' A high- precision magnetoencephalographic study of human auditory steady-state responses to amplitude-modulated tones.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' The Journal of the Acoustical Society of America, 108, 679-691.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content='1121/1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content='429600 Wang, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=', Ding, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=', Ahmar, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=', Xiang, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=', Poeppel, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=', & Simon, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' (2012).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' Sensitivity to temporal modulation rate and spectral bandwidth in the human auditory system: MEG evidence.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' J Neurophysiol, 107(8), 2033-2041.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content='1152/jn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content='00310.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content='2011 Wong, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=', Skoe, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=', Russo, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=', Dees, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=', & Kraus, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' (2007).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' Musical experience shapes human brainstem encoding of linguistic pitch patterns.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' Nature Neuroscience, 10(4), 420-422.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content='1038/nn1872 Yost, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' (1996).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' Pitch of iterated rippled noise.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' The Journal of the Acoustical Society of America, 100(1), 511-518.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content='1121/1.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content='415873 Zhou, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=', Melloni, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=', Poeppel, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=', & Ding, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' (2016).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' Interpretations of Frequency Domain Analyses of Neural Entrainment: Periodicity, Fundamental Frequency, and Harmonics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' Front Hum Neurosci, 10, 274.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content=' https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content='3389/fnhum.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content='2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'} +page_content='00274' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/zdAzT4oBgHgl3EQfC_pK/content/2301.00968v1.pdf'}